forked from goodboy/tractor
1248 Commits

Author SHA1 Message Date
goodboy e5ee2e3de8
Merge pull request #358 from goodboy/switch_to_pdbp
Switch to `pdbp` 🏄🏼
2023-05-15 09:58:58 -04:00
Tyler Goodlet 41aa91c8eb Add news file 2023-05-15 09:35:59 -04:00
Tyler Goodlet 6758e4487c Drop lingering `pdbpp` comment-refs in tests 2023-05-15 09:14:42 -04:00
Tyler Goodlet 1c3893a383 Drop commented `pdbpp` import logic 2023-05-15 09:01:55 -04:00
Tyler Goodlet 73befac9bc Switch to `pdbp` in test reqs 2023-05-15 09:01:27 -04:00
Tyler Goodlet 79622bbeea Restore `breakpoint()` hook after runtime exits
Previously we were leaking our (pdb++) override into the Python runtime
which would always result in a runtime error whenever `breakpoint()` is
called outside our runtime, i.e. after exit of the root actor. This
explicitly restores any previous hook override (detected during startup)
or deletes the hook and restores the environment if none existed prior.

Also adds a new WIP debugging example script to ensure breakpointing
works as normal after runtime close; this will be added to the test
suite.
2023-05-15 00:47:29 -04:00
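A minimal sketch of that save/restore dance (illustrative only, not the actual `tractor` code; the interpreter default hook always lives at `sys.__breakpointhook__`):

```python
import sys

# detect any user override at startup: the default hook is always
# available as `sys.__breakpointhook__`
_prior_hook = sys.breakpointhook
_had_override = _prior_hook is not sys.__breakpointhook__

def install_runtime_hook(hook) -> None:
    sys.breakpointhook = hook

def restore_on_exit() -> None:
    if _had_override:
        # put back whatever the user had installed before us
        sys.breakpointhook = _prior_hook
    else:
        # no prior override: revert to the interpreter default
        sys.breakpointhook = sys.__breakpointhook__
```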
Tyler Goodlet 95535b2226 Some more 3.10+ optional type sigs 2023-05-15 00:47:29 -04:00
Tyler Goodlet 87c6e09d6b Switch readme links to point @ `pdbp` B) 2023-05-14 22:52:24 -04:00
Tyler Goodlet 9ccd3a74b6 More detailed preface description 2023-05-14 22:38:47 -04:00
Tyler Goodlet ae4ff5dc8d pdbp: adding typing to config settings vars 2023-05-14 22:38:46 -04:00
Tyler Goodlet 705538398f `pdbp`: turn off line truncating by default, fixes terminal resizing stuff 2023-05-14 22:38:16 -04:00
Tyler Goodlet 86aef5238d Hide actor nursery exit frame 2023-05-14 21:24:26 -04:00
Tyler Goodlet cc82447db6 First try: switch debug machinery over to `pdbp` B) 2023-05-14 21:24:26 -04:00
Tyler Goodlet 23cffbd940 Use multiline import for debug mod 2023-05-14 21:24:26 -04:00
Tyler Goodlet 3d202272c4 Change over debugger tests to use `PROMPT` var.. 2023-05-14 21:24:26 -04:00
Tyler Goodlet 63cdb0891f Switch to `pdbp` since no one is maintaining `pdbpp` 2023-05-14 21:24:26 -04:00
goodboy 0f7db27b68
Merge pull request #356 from goodboy/drop_proc_actxmngr
`trio.Process.aclose()`?
2023-05-14 20:59:53 -04:00
Tyler Goodlet c53d62d2f7 Add news file 2023-05-14 20:31:26 -04:00
Tyler Goodlet f667d16d66 Copy the now deprecated `trio.Process.aclose()`
Move it into our `_spawn.do_hard_kill()` since we do indeed rely on
the particular process killing sequence on "soft kill" failure cases.
2023-05-14 19:31:50 -04:00
Tyler Goodlet 24a062341e Just call `trio.Process.aclose()` directly for now? 2023-04-02 14:34:41 -04:00
goodboy e714bec8db
Merge pull request #355 from kehrazy/patch-1
fixed the `Zombie` example having wrong indentation
2023-04-01 12:11:47 -04:00
Igor 009cd6552e
fixed the `Zombie` example having wrong indentation 2023-03-31 17:50:46 +03:00
goodboy 649c5e7504
Merge pull request #343 from goodboy/breceiver_internals
Avoid inf recursion in `BroadcastReceiver.receive()`
2023-01-30 14:01:13 -05:00
Tyler Goodlet 203f95615c Add nooz 2023-01-30 12:42:26 -05:00
Tyler Goodlet efb8bec828 Add a basic no-raise-on lag test 2023-01-30 12:26:07 -05:00
Tyler Goodlet 8637778739 Expose `raise_on_lag: bool` flag through factory 2023-01-30 12:18:23 -05:00
Tyler Goodlet 47166e45f0 Be explicit with passthrough kwargs (there's so few) 2023-01-29 17:31:21 -05:00
Tyler Goodlet 4ce2dcd12b Switch back to raising `Lagged` by default
Makes the broadcast test suite not hang xD, and is our expected default
behaviour. Also removes a ton of commented legacy cruft from before the
refactor to remove the `.receive()` recursion and fixes some typing.

Oh right, and in the case where there's only one subscriber left we log
a warning about it since in theory we could actually entirely unwind the
bcaster back to the original underlying, though not sure if that's sane
or works for some use cases (like wanting to have some other subscriber
get added dynamically later).
2023-01-29 15:03:34 -05:00
Tyler Goodlet 80f983818f Ignore monkey patched `.send()` type annot 2023-01-29 15:03:34 -05:00
Tyler Goodlet 6ba29f8d56 Recurse and get the last value when in warn mode 2023-01-29 15:03:34 -05:00
Tyler Goodlet 2707a0e971 Add `._raise_on_lag` flag to disable `Lag` raising 2023-01-29 15:03:34 -05:00
Tyler Goodlet c8efcdd0d3 Drop `ReceiveMsgStream` from test suite 2023-01-29 15:03:34 -05:00
Tyler Goodlet 9f9907271b Merge `ReceiveMsgStream` and `MsgStream`
Since one-way streaming can be accomplished by just *not* sending on one
side (and/or thus wrapping such usage in a more restrictive API), we
just drop the recv-only parent type. The only differing method was
`MsgStream.send()`, now merged in. Further, in usage of `.subscribe()`
we monkey patch the underlying stream's `.send()` onto the delivered
broadcast receiver so that subscriber tasks can two-way stream as though
using the stream directly.

This allows us to more definitively drop `tractor.open_stream_from()` in
the longer run if we so choose as well; note currently this will
potentially create an issue if a caller tries to `.send()` on such a one
way stream.
2023-01-29 15:03:34 -05:00
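A toy demo of that `.subscribe()` monkey patch (all names here are illustrative stand-ins, not `tractor`'s actual types):

```python
import trio

class ToyBroadcastRx:
    '''Stand-in for the delivered broadcast receiver.'''
    def __init__(self, rx: trio.MemoryReceiveChannel):
        self._rx = rx

    async def receive(self):
        return await self._rx.receive()

class ToyStream:
    '''Stand-in for the underlying (bidirectional) msg stream.'''
    def __init__(self):
        self._tx, self._rx = trio.open_memory_channel(8)

    async def send(self, msg):
        await self._tx.send(msg)

    def subscribe(self) -> ToyBroadcastRx:
        sub = ToyBroadcastRx(self._rx.clone())
        sub.send = self.send  # monkey patch: two-way semantics
        return sub

async def demo():
    stream = ToyStream()
    sub = stream.subscribe()
    await sub.send('ping')      # via the patched-on `.send()`
    print(await sub.receive())  # -> 'ping'

trio.run(demo)
```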
Tyler Goodlet c2367c1c5e Better `trio`-ize `BroadcastReceiver` internals
Driven by a bug found in `piker` where we'd get an inf recursion error
due to `BroadcastReceiver.receive()` being called when consumer tasks
are awoken but no value is ready to `.nowait_receive()`.

This new rework takes an approach closer to the interface and internals
of `trio.MemoryReceiveChannel` particularly in terms of,

- implementing a `BroadcastReceiver.receive_nowait()` and using it
  within the async `.receive()`.
- failing over to an internal `._receive_from_underlying()` when the
  `_nowait()` call raises `trio.WouldBlock`.
- adding `BroadcastState.statistics()` for debugging and testing
  dropping recursion from `.receive()`.
2023-01-29 15:03:34 -05:00
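The failover shape, boiled down to a self-contained toy (method names match the description above but the bodies are illustrative):

```python
import trio

class ToyBroadcastReceiver:
    def __init__(self, rx: trio.MemoryReceiveChannel):
        self._underlying = rx
        self._queued: list = []

    def receive_nowait(self):
        # deliver an already-broadcast value or signal "nothing ready"
        if not self._queued:
            raise trio.WouldBlock
        return self._queued.pop(0)

    async def _receive_from_underlying(self):
        return await self._underlying.receive()

    async def receive(self):
        try:
            return self.receive_nowait()
        except trio.WouldBlock:
            # wait on the underlying channel exactly once instead of
            # recursing back into `.receive()`
            return await self._receive_from_underlying()

async def main():
    tx, rx = trio.open_memory_channel(1)
    brx = ToyBroadcastReceiver(rx)
    await tx.send('hello')
    print(await brx.receive())

trio.run(main)
```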
goodboy a777217674
Merge pull request #346 from goodboy/ipc_failure_while_streaming
IPC failure while streaming
2023-01-29 15:02:54 -05:00
Tyler Goodlet 13c9eadc8f Move result log msg up and drop else block 2023-01-29 14:55:02 -05:00
Tyler Goodlet af6c325072 Bump up legacy streaming timeout a smidgen 2023-01-29 14:55:02 -05:00
Tyler Goodlet 195d2f0ed4 Add nooz 2023-01-29 14:55:02 -05:00
Tyler Goodlet aa4871b13d Call `MsgStream.aclose()` in `Context.open_stream.__aexit__()`
We weren't doing this originally, I *think*, just because of the path
dependent nature of the way the code was developed (originally being
mega pedantic about one-way vs. bidirectional streams) but it doesn't
seem like there's any issue just calling the stream's `.aclose()`; it
also has the benefit of just being less code and logic checks B)
2023-01-29 14:55:02 -05:00
Tyler Goodlet 556f4626db Tweak warning msg for still-alive-after-cancelled actor 2023-01-29 14:55:02 -05:00
Tyler Goodlet 3967c0ed9e Add a simplified zombie lord specific process reaping test 2023-01-29 14:55:02 -05:00
Tyler Goodlet e34823aab4 Add parent vs. child cancels first cases 2023-01-29 14:55:02 -05:00
Tyler Goodlet 6c35ba2cb6 Add IPC breakage on both parent and child side
With the new fancy `_pytest.pathlib.import_path()` we can do real
parametrization of the example-script-module code and thus configure
whether the child, parent, or both silently break the IPC connection.

Parametrize the test for all the above mentioned cases as well as the
case where the IPC never breaks but we still simulate the user hammering
ctl-c / SIGINT to terminate the actor tree. Adjust expected errors based
on each case and heavily document each of these.
2023-01-29 14:55:02 -05:00
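A hedged sketch of what such a parametrization can look like (the param names/ids here are assumptions, not the suite's actual ones):

```python
import pytest

@pytest.mark.parametrize(
    'ipc_break',
    [
        # which side(s) silently drop the IPC connection mid-stream
        {'break_parent': False, 'break_child': True},
        {'break_parent': True, 'break_child': False},
        {'break_parent': True, 'break_child': True},
        # never break; instead simulate the user hammering ctl-c/SIGINT
        None,
    ],
)
def test_ipc_channel_break_during_stream(ipc_break):
    ...
```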
Tyler Goodlet 3a0817ff55 Skip `advanced_faults/` subset in docs examples tests 2023-01-29 14:55:02 -05:00
Tyler Goodlet 7fddb4416b Handle `mp` spawn method cases in test suite 2023-01-29 14:55:02 -05:00
Tyler Goodlet 1d92f2552a Adjust other examples tests to expect `pathlib` objects 2023-01-29 14:55:02 -05:00
Tyler Goodlet 4f8586a928 Wrap ex in new test, change dir helpers to use `pathlib.Path` 2023-01-29 14:55:02 -05:00
Tyler Goodlet fb9ff45745 Move example to a new `advanced_faults` egs subset dir 2023-01-29 14:55:02 -05:00
Tyler Goodlet 36a83cb306 Refine example to drop IPC mid-stream
Use a task nursery in the subactor to spawn tasks which cancel the IPC
channel mid stream to simulate the most concurrent case we're likely to
see. Make `main()` accept a `debug_mode: bool` for parametrization. Fill
out detailed comments/docs on this example.
2023-01-29 14:55:02 -05:00
Tyler Goodlet 7394a187e0 Name one-way streaming (con generators) what it is 2023-01-29 14:55:02 -05:00
Tyler Goodlet df01294bb2 Show more functiony syntax in ctx-cancelled log msgs 2023-01-29 14:55:02 -05:00
Tyler Goodlet ddf3d0d1b3 Show tracebacks for un-shipped/propagated errors 2023-01-29 14:55:02 -05:00
Tyler Goodlet 158569adae Add WIP example of silent IPC breaks while streaming 2023-01-29 14:55:02 -05:00
Tyler Goodlet 97d5f7233b Fix uid2nursery lookup table type annot 2023-01-29 14:55:02 -05:00
Tyler Goodlet d27c081a15 Ensure arbiter sockaddr type before usage 2023-01-29 14:55:02 -05:00
Tyler Goodlet a4874a3227 Always set the `parent_exit: trio.Event` on exit 2023-01-29 14:55:02 -05:00
Tyler Goodlet de04bbb2bb Don't raise on a broken IPC-context when sending stop msg 2023-01-29 14:55:02 -05:00
Tyler Goodlet 4f977189c0 Handle broken mem chan on `Actor._push_result()`
When backpressure is used and a feeder mem chan breaks during msg
delivery (usually because the IPC allocating task already terminated)
instead of raising we simply warn as we do for the non-backpressure
case.

Also, add a proper `Actor.is_arbiter` test inside `._invoke()` to avoid
doing an arbiter-registry lookup if the current actor **is** the
registrar.
2023-01-29 14:55:02 -05:00
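The warn-instead-of-raise shape, sketched (the `log` setup and function signature are illustrative, not the runtime's actual code):

```python
import logging
import trio

log = logging.getLogger(__name__)

async def push_result(send_chan: trio.MemorySendChannel, msg) -> None:
    try:
        await send_chan.send(msg)
    except trio.BrokenResourceError:
        # the IPC-allocating task already terminated and tore down its
        # feeder mem chan; there's no one to deliver to so don't crash
        # the msg loop, just warn.
        log.warning(f'Failed to deliver {msg!r}: feeder chan broken')
```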
goodboy 9fd62cf71f
Merge pull request #348 from goodboy/deprecate_arbiter_addr
Begin deprecation of `arbiter_addr` -> `registry_addr`
2023-01-26 16:05:41 -05:00
Tyler Goodlet 606efa5bb7 Adjust daemon command to use new `registry_addr` 2023-01-26 16:00:08 -05:00
Tyler Goodlet 121a8cc891 Drop `Optional` usage from root mod 2023-01-26 16:00:08 -05:00
Tyler Goodlet c54b8ca4ba Begin deprecation of `arbiter_addr` -> `registry_addr` 2023-01-26 16:00:08 -05:00
goodboy de93c8257c
Merge pull request #349 from goodboy/prompt_on_ctrlc
Re-draw `pdbpp` prompt on `SIGINT`
2023-01-26 15:56:37 -05:00
Tyler Goodlet 5b8a87d0f6 Slightly better `xonsh` check hack, fix typing 2023-01-26 15:48:15 -05:00
Tyler Goodlet 9e5c8ce6f6 Add nooz file 2023-01-26 15:39:03 -05:00
Tyler Goodlet 965cd406a2 Use std `pdbpp` release 2023-01-26 15:27:55 -05:00
Tyler Goodlet 2e278ceb74 Add a super hacky check for `xonsh`, smh.. 2023-01-26 15:26:43 -05:00
Tyler Goodlet 6d124db7c9 Never run ctlc-with-intermediary-actor cases locally either 2023-01-26 12:44:13 -05:00
Tyler Goodlet dba8118553 Always attempt prompt redraw on ctl-c in REPL
The stdlib has all sorts of muckery with ignoring SIGINT in the
`Pdb._cmdloop()` but here we just override all that since we don't trust
their decisions about cancellation handling whatsoever. Adds
a `Lock.repl: MultiActorPdb` attr which is set by any task which
acquires root TTY lock indicating (via actor global state) that the
current actor is using the debugger REPL and can be expected to re-draw
the prompt on SIGINT. Further we mask out log messages from any actor
who also has the `shield_sigint_handler()` enabled to avoid logging
noise when debugging.
2023-01-26 12:44:13 -05:00
Tyler Goodlet fca2e7c10e Simplify closed abruptly log msg 2023-01-26 12:44:13 -05:00
Tyler Goodlet 5ed62c5c54 Add note about intermediary-actor in debug issue 2023-01-26 12:44:13 -05:00
goodboy 588b7ca7bf
Merge pull request #344 from goodboy/harden_cluster_tests
Harden cluster tests
2022-12-12 15:02:23 -05:00
Tyler Goodlet d8214735b9 Add bugfix nooz 2022-12-12 14:53:59 -05:00
Tyler Goodlet 48f6d514ef Handle earlier name error crash in debug test 2022-12-12 14:05:32 -05:00
Tyler Goodlet 6c8cacc9d1 Adjust all default is `None` annots (per new `mypy`) 2022-12-12 13:18:22 -05:00
Tyler Goodlet 38326e8c15 Avoid error on context double pops 2022-12-11 23:46:33 -05:00
Tyler Goodlet b5192cca8e Always greedily `list`-cast `mngrs` input sequence 2022-12-11 23:20:58 -05:00
Tyler Goodlet c606be8c64 Passthrough runtime kwargs from `open_actor_cluster()` 2022-12-11 19:56:08 -05:00
Tyler Goodlet d8e48e29ba Add `mngrs=(<gen_comprehension>)` test 2022-12-11 19:56:01 -05:00
goodboy a0f6668ce8
Merge pull request #333 from goodboy/exceptiongroups
`ExceptionGroup`s and `trio>=0.22`
2022-10-14 20:11:26 -04:00
Tyler Goodlet 274c66cf9d Add nooz 2022-10-14 19:42:23 -04:00
Tyler Goodlet f2641c8964 Avoid "task never called `.started()`" runtime errors when cancelling 2022-10-14 19:42:23 -04:00
Tyler Goodlet c47575997a Expand nested case to include error prop and breakpointing 2022-10-14 19:42:23 -04:00
Tyler Goodlet f39414ce12 Drop error-repacking for `.run_in_actor()`s block
If we pack the nursery parent task's error into the `errors` table
directly in the handler, we don't need to specially handle packing that
same error into any exception group raised while handling sub-actor
cancellation; drops some ugly indentation ;)
2022-10-14 19:42:23 -04:00
Tyler Goodlet 0a1bf8e57d Tolerate eg in runtime test teardown 2022-10-14 19:42:23 -04:00
Tyler Goodlet e298b70edf Drop added `.pdb()` level msgs used during dev 2022-10-14 19:42:23 -04:00
Tyler Goodlet c0dd5d7ffc Adjust multi-daemon test to be more deterministic 2022-10-14 19:42:23 -04:00
Tyler Goodlet 347591c348 Expect egs in tests which retrieve portal results 2022-10-14 19:42:23 -04:00
Tyler Goodlet 38f9d35dee Fix errors table type annot 2022-10-14 19:42:23 -04:00
Tyler Goodlet 88448f7281 Fix handler type annot 2022-10-14 19:42:23 -04:00
Tyler Goodlet 0956d5f461 Restore the `trio` SIGINT handler, cancel root lock tasks on no-peers
Pretty sure this is the final touch to alleviate all our debug lock
headaches! Instead of trying to revert to the "last" handler (as `pdb`
does internally in the stdlib) we always just revert to the handler
`trio` registers during startup. Further this seems to allow cancelling
the root-side locking task if it's detected as stale IFF we only do this
when the root actor is in a "no more IPC peers" state.

Deatz:
- (always) set `._debug.Lock._trio_handler` as the `trio` version, not
  some last used handler to make sure we're getting the ctrl-c handling
  we want when not in debug mode.
- assign the trio handler in `open_root_actor()`
  `._runtime._async_main()` to be sure it's applied in subactors as well
  as the root.
- only do debug lock blocking and root-side-locking-task cancels when
  a "no peers" condition is detected in the root actor: i.e. no IPC
  channels are detected by the root meaning it's impossible any actor
  has a sane lock-state ongoing for debug mode.
2022-10-14 18:18:01 -04:00
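The stash-then-restore pattern in miniature (illustrative module-level state rather than the actual `Lock` attr layout):

```python
import signal

_trio_handler = None

def stash_trio_sigint_handler() -> None:
    # call this early inside `trio.run()`, before any debug machinery
    # overrides SIGINT, so we always know what to revert to
    global _trio_handler
    _trio_handler = signal.getsignal(signal.SIGINT)

def restore_trio_sigint_handler() -> None:
    # revert to the handler `trio` registered at startup, NOT whatever
    # handler happened to be installed last
    signal.signal(signal.SIGINT, _trio_handler)
```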
Tyler Goodlet c646c79a82 Adjust root-errors debug tests for blocking and egs 2022-10-14 18:18:01 -04:00
Tyler Goodlet 33f2234baf Hide some stack layers the user doesn't really need to see 2022-10-14 18:18:01 -04:00
Tyler Goodlet 7521bded3d Pack error from the parent task into the actor nursery 2022-10-14 18:16:51 -04:00
Tyler Goodlet 0f523b65fb Change cancel test over the exception group 2022-10-14 18:16:51 -04:00
Tyler Goodlet 50fe098e06 First pass, swap `MultiError` for `BaseExceptionGroup` 2022-10-14 18:16:51 -04:00
Tyler Goodlet d87d6af7e1 Add `exceptiongroup` (3.11 backport lib) as dep 2022-10-14 18:16:51 -04:00
Tyler Goodlet df69aedcd5 Pin to latest `trio` version 2022-10-14 18:16:51 -04:00
Tyler Goodlet b15e4ed9ce Adjust "no arbiter" test for new runtime defaults
Turns out this test was being silently ignored due to incorrect usage:
opening our `.open_nursery()` block synchronously (with a `with` not an
`async with`) and thus it was a noop XD

Instead this fixes the test to call a `tractor` discovery built-in
without starting the runtime (which is now done implicitly when a user
opens a nursery) which should result in the prior expected outcome,
a `RuntimeError`.
2022-10-12 12:46:20 -04:00
Tyler Goodlet 98056f6ed7 Move logging context map into `log.py` module 2022-10-12 12:46:20 -04:00
goodboy 247d3448ae
Merge pull request #337 from goodboy/debug_lock_blocking
Debug lock blocking
2022-10-12 12:41:14 -04:00
Tyler Goodlet fc17f6790e Bump `towncrier` alpha version 2022-10-12 12:36:09 -04:00
Tyler Goodlet b81b6be98a Drop extra log msgs, some old commented code 2022-10-12 12:35:35 -04:00
Tyler Goodlet 72fbda4cef Add nooz file 2022-10-12 12:35:11 -04:00
Tyler Goodlet fb721f36ef Support debug-lock blocking, use on no-more IPC
This is a lingering debugger locking race case we needed to handle:

- child crashes acquires TTY lock in root and attaches to `pdb`
- child IPC goes down such that all channels to the root are broken
  / non-functional.
- root is stuck thinking the child is still in debug even though it
  can't be contacted and the child actor machinery hasn't been
  cancelled by its parent.
- root gets stuck in deadlock with child since it won't send a cancel
  request until the child is finished debugging, but the child can't
  unlock the debugger bc IPC is down.

To avoid this scenario add debug lock blocking list via
`._debug.Lock._blocked: set[tuple]` which holds actor uids for any actor
that is detected by the root as having no transport channel connections
with said root (of which at least one should exist if this sub-actor at
some point acquired the debug lock). The root consequently checks this
list for any actor that tries to (re)acquire the lock and blocks with
a `ContextCancelled`. When a debug condition is tested in
`._runtime._invoke`, the context's `._enter_debugger_on_cancel` flag is
checked; it is set to `False` if the actor is on the block list, in
which case the post-mortem entry is skipped.

Further this adds a root-locking-task side cancel scope to
`Lock._root_local_task_cs_in_debug` which can be cancelled by the root
runtime when a stale lock is detected after all IPC channels for the
actor have been torn down. NOTE: right now we're NOT doing this since it
seems to cause test failures, likely because it may cause premature
cancellation; maybe needs a bit more experimenting?
2022-10-11 20:00:05 -04:00
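A toy model of that blocking list (the real check happens root-side over IPC and cancels the request with a `ContextCancelled`; here a plain exception stands in):

```python
class ToyLock:
    # uids of actors known to have no transport connection to the root
    _blocked: set[tuple[str, str]] = set()

    @classmethod
    def maybe_block(cls, uid: tuple[str, str]) -> None:
        if uid in cls._blocked:
            # refuse (re)acquisition from an unreachable actor; the
            # real code responds with a `ContextCancelled` instead
            raise RuntimeError(f'{uid} is blocked from the debug lock')
```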
Tyler Goodlet 734d8dd663 Move `trio` scope outside first inter-task-chan receive 2022-10-11 20:00:05 -04:00
Tyler Goodlet 30ea7a06b0 Avoid inf nursery hang by reversing `async with` ordering 2022-10-11 20:00:05 -04:00
Tyler Goodlet 3398153c52 Add timeout around `trio`-callee-task 2022-10-11 20:00:05 -04:00
Tyler Goodlet 1c480e6c92 Add `Context` cancel message and debug toggle flag
In the case of a callee-side context cancelling itself it can be handy
to let the caller-side task know (even if through logging) that the
cancel was due to some known reason. Make `.cancel()` accept such
a message on the callee side and have it included in the
`._runtime._invoke()` raised `ContextCancelled` emission.

Also add a `Context._trigger_debugger_on_cancel: bool` flag which can be
set to `False` to avoid the debugger post-mortem crash mode from
engaging on cross-context tasks which cancel themselves for a known
reason (as is needed for blocked tasks in the debug TTY-lock machinery).
2022-10-11 20:00:05 -04:00
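The shape of that API, sketched (not `tractor`'s actual class body):

```python
from typing import Optional

class ToyContext:
    _cancel_msg: Optional[str] = None
    _trigger_debugger_on_cancel: bool = True

    async def cancel(self, msg: Optional[str] = None) -> None:
        # the runtime would ship `msg` to the other side and include it
        # in the `ContextCancelled` raised from `._runtime._invoke()`
        self._cancel_msg = msg
```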
goodboy dfdad4d1fa
Merge pull request #336 from goodboy/callable_key_maybe_open_context
Callable key input to maybe open context
2022-10-10 00:32:27 -04:00
Tyler Goodlet b892bc74f6 Add trivial news snippet 2022-10-09 21:27:23 -04:00
Tyler Goodlet 44b59f3338 Go back to a `global` single-ton nursery per actor
Turns out the lifetime mgmt of separate nurseries per delegate manager
is tricky; a new nursery can't be naively allocated on cache-misses since
it may get closed by some early terminating task instead of by the "last
using" consumer task. In theory if we allocate using the same logic as
that used for the last-task-triggers-exit then this should work?

For now just go back to a single global nursery per `_Cache` which still
avoids use of the internal actor service nursery.
2022-10-09 21:27:23 -04:00
Tyler Goodlet 7a719ac2a7 Use one nursery per unique manager (signature)
Instead of sticking all `trionics.maybe_open_context()` tasks inside the
actor's (root) service nursery, open a unique one per manager function
instance (id).

Further, accept a callable for the `key` such that a user can have
more flexible control on the caching logic and move the
`maybe_open_nursery()` helper out of the portal mod and into this
trionics "managers" module.
2022-10-09 21:27:23 -04:00
goodboy 9e6266dda3
Merge pull request #335 from goodboy/spawn_backend_table
Spawn backend table
2022-10-09 21:26:28 -04:00
Tyler Goodlet b1abec543f Add trivial news snippet 2022-10-09 18:51:31 -04:00
Tyler Goodlet 93b9d2dc2d Drop dynamic backend-spawn-method test generation 2022-10-09 18:29:50 -04:00
Tyler Goodlet 4d808757a6 Fix start method name in logging propagation test 2022-10-09 18:22:55 -04:00
Tyler Goodlet 7e5bb0437e Go to latest `mypy` version in CI 2022-10-09 18:13:45 -04:00
Tyler Goodlet b19f08d9f0 Fill out new backend names in ci script 2022-10-09 18:08:07 -04:00
Tyler Goodlet 2c20b2d64f Fix import to load from `conftest.py` 2022-10-09 18:03:17 -04:00
Tyler Goodlet 023b6fc845 Drop `tractor.testing` sub-package 2022-10-09 17:57:02 -04:00
Tyler Goodlet d24fae8381 Rename mp spawn methods to have a `'mp_'` prefix 2022-10-09 17:54:55 -04:00
Tyler Goodlet 5ab98513b7 Move `@tractor_test` into `conftest.py` 2022-10-09 17:14:20 -04:00
Tyler Goodlet 90f4912580 Organize process spawning into lookup table
Instead of the logic branching create a table `._spawn._methods`
which is used to lookup the desired backend framework (in this case
still only one of `multiprocessing` or `trio`) and make the top level
`.new_proc()` do the lookup and any common logic. Use a `typing.Literal`
to define the lookup table's key set.

Repair and ignore a bunch of type-annot related stuff todo with `mypy`
updates and backend-specific process typing.
2022-10-09 16:51:21 -04:00
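A minimal sketch of such a table (backend names follow the `'mp_'` prefix rename above; the proc functions are stubs, not the real backends):

```python
from typing import Callable, Literal

SpawnMethodKey = Literal['trio', 'mp_spawn', 'mp_forkserver']

async def trio_proc(*args, **kwargs): ...
async def mp_proc(*args, **kwargs): ...

_methods: dict[SpawnMethodKey, Callable] = {
    'trio': trio_proc,
    'mp_spawn': mp_proc,
    'mp_forkserver': mp_proc,
}

async def new_proc(method: SpawnMethodKey, *args, **kwargs):
    # common logic lives here; backend specifics get dispatched
    return await _methods[method](*args, **kwargs)
```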
goodboy 6e24e16068
Merge pull request #334 from goodboy/pin_pre_trio_0.22
Pin pre-0.22 bc exception groups break everything
2022-10-09 16:26:56 -04:00
Tyler Goodlet 15047341bd Ignore forkserver override attrs with `mypy` 2022-10-09 16:14:11 -04:00
Tyler Goodlet dc295ab227 Pin pre-0.22 bc exception groups break everything 2022-10-09 16:11:06 -04:00
goodboy 6a0337b69d
Merge pull request #326 from goodboy/lifetime_stack_tests
Expose lifetime stack as class attr, add base test suite
2022-09-16 18:09:24 -04:00
Tyler Goodlet e609183242 Expose lifetime stack as class attr, add base test suite 2022-09-15 23:50:15 -04:00
goodboy 368e9f3f7c
Merge pull request #322 from goodboy/we_bein_all_matchy
3.10 and friends
2022-09-15 23:49:34 -04:00
Tyler Goodlet 10eeda2d2b Use built-ins for all data-structure-type annotations 2022-09-15 23:41:28 -04:00
Tyler Goodlet a113e22bb9 Add trivial nooz snippet 2022-09-15 23:41:28 -04:00
Tyler Goodlet ad19bf2cf1 Remove `tractor.run()` once and for all
It's been deprecated for a while now and all docs and tests have been
changed.

Closes #183
2022-09-15 23:41:28 -04:00
Tyler Goodlet 9aef03772a Expose `Actor` at pkg level, adjust debug type annots 2022-09-15 23:41:28 -04:00
Tyler Goodlet 7548dba8f2 Change to new doc string style 2022-09-15 23:41:28 -04:00
Tyler Goodlet ba4d4e9af3 Change test import 2022-09-15 23:41:28 -04:00
Tyler Goodlet 208d56af2c Make `async_main()` a module func 2022-09-15 23:41:28 -04:00
Tyler Goodlet a3a5bc267e Make `process_messages()` a mod func 2022-09-15 23:41:28 -04:00
Tyler Goodlet d4084b2032 Rename our core module to `_runtime` 2022-09-15 23:41:28 -04:00
Tyler Goodlet 1e6b4d5dd4 Drop `msgspec` min pin 2022-09-15 23:41:28 -04:00
Tyler Goodlet c613acfe5c Start alpha 6 dev, ensure py3.10+ 2022-09-15 23:41:28 -04:00
goodboy fea9dc7065
Merge pull request #324 from goodboy/debug_event_guard
Add debug complete event `None`-guard for when already reset
2022-09-15 23:20:38 -04:00
goodboy e558c427de
Merge pull request #327 from goodboy/disable_win_ci
Disable win tests in CI
2022-09-15 23:20:26 -04:00
Tyler Goodlet f07c3aa4a1 Add nooz 2022-09-15 19:39:34 -04:00
Tyler Goodlet bafd10a260 Make `maybe_open_context()` re-entrant safe, use per factory locks 2022-09-15 19:02:02 -04:00
Tyler Goodlet 5ad540c417 Add debug complete event `None`-guard for when already reset 2022-09-15 19:02:02 -04:00
Tyler Goodlet 83b44cf469 Flip over PR number in readme 2022-09-15 18:54:51 -04:00
Tyler Goodlet 1f2001020e Mention disabled windows CI in readme 2022-09-15 18:46:34 -04:00
Tyler Goodlet 71f9881a60 Drop windows from CI until we get a collab that actually uses it XD 2022-09-15 18:36:45 -04:00
Tyler Goodlet e24645eec8 Drop `pytest` 3.10 issue comment, add todo for `pyreadline3` 2022-09-15 18:36:37 -04:00
Tyler Goodlet c3cdeeb3ba Drop `pytest` full trace flag, use `pip list` 2022-09-15 18:36:27 -04:00
Tyler Goodlet 9bd534df83 Drop 3.9 from CI jobs 2022-09-15 18:36:15 -04:00
goodboy c1d700f257
Merge pull request #321 from goodboy/alpha5
`alpha5` release!
2022-08-03 14:36:52 -04:00
Tyler Goodlet 14c6e34658 Add summary section 2022-08-03 11:42:53 -04:00
Tyler Goodlet 3393bc23e4 Generate release news 2022-08-03 11:41:23 -04:00
Tyler Goodlet 171f1bc243 Move to using `pyproject.toml` for `towncrier`
Add explicit fragment types based on `pytest`'s config
and don't manually spec the version.
2022-08-03 11:36:23 -04:00
Tyler Goodlet ee02cd2496 Move misplaced fragment for #305 2022-08-03 10:54:22 -04:00
Tyler Goodlet 4c5d435aac Fix towncrier bug entry suffix 2022-08-03 10:21:37 -04:00
Tyler Goodlet a9b4a61620 Flip to non-dev version tag 2022-08-03 10:21:07 -04:00
goodboy 641ed7a32a
Merge pull request #165 from goodboy/signint_saviour
Ignore SIGINT when in a debugger REPL
2022-08-03 09:26:54 -04:00
Tyler Goodlet cc5f60bba0 List deps in CI 2022-08-02 18:19:03 -04:00
Tyler Goodlet 8f1fe2376a Simplify all hooks to a common `Lock.release()` 2022-08-02 18:14:05 -04:00
Tyler Goodlet 65540f3e2a Add nooz 2022-08-02 15:29:33 -04:00
Tyler Goodlet 650313dfef Drop legacy handler blocks factored into `_acquire_debug_lock()` 2022-08-02 12:50:27 -04:00
Tyler Goodlet e4006da6f4 Drop `pdbpp` bug notes, add follow up issue #320 note 2022-08-02 12:48:40 -04:00
Tyler Goodlet 7f6169a050 Drop legacy commented/todo remote debug helper block 2022-08-02 12:43:14 -04:00
Tyler Goodlet 2d387f2610 Add in issue link for nested cases 2022-08-02 12:17:34 -04:00
Tyler Goodlet 8115759984 Mark final nested-actor debugger test 2022-08-02 12:17:34 -04:00
Tyler Goodlet 02c3b9a672 Put `pygments` back to default 2022-08-02 12:17:34 -04:00
Tyler Goodlet fa4388835c Add an expect wrapper, use in hanging CI test 2022-08-02 12:17:34 -04:00
Tyler Goodlet 54de72d8df Loosen timeout on nested child re-locking 2022-08-02 12:17:34 -04:00
Tyler Goodlet c5c7a9027c Line len lint and drop rpc log msg level again 2022-08-02 12:17:34 -04:00
Tyler Goodlet e4771eec16 Go back to skipping since xfail is wack 2022-08-02 12:17:28 -04:00
Tyler Goodlet a9aaee9dbd Use xfails for nested cases, revert prompt expect 2022-08-02 12:17:28 -04:00
Tyler Goodlet acfbae4b95 Drop verbose level, report xfails 2022-08-02 12:17:28 -04:00
Tyler Goodlet aca9a6b99a Try just skipping nested actor tests in CI 2022-08-02 12:17:28 -04:00
Tyler Goodlet 8896ba2bf8 Use `assert_before` more extensively 2022-08-02 12:17:28 -04:00
Tyler Goodlet 87b2ccb86a Try less times for EOF 2022-08-02 12:17:28 -04:00
Tyler Goodlet 937ed99e39 Factor sigint overriding into lock methods 2022-08-02 12:17:28 -04:00
Tyler Goodlet 91f034a136 Move all module vars into a `Lock` type 2022-08-02 12:17:28 -04:00
Tyler Goodlet 08cf03cd9e Handle missing prompt render case? 2022-08-02 12:17:28 -04:00
Tyler Goodlet 5e23b3ca0d Drop pytest full-tracing in CI again 2022-08-02 12:17:28 -04:00
Tyler Goodlet 6f01c78122 Disable `pygments` highlighting on ctlc tests 2022-08-02 12:17:28 -04:00
Tyler Goodlet 457499bc2e Avoid infinite wait for EOF 2022-08-02 12:17:28 -04:00
Tyler Goodlet a4bac135d9 Use `pytest-timeout` plug to try and prevent CI hang 2022-08-02 12:17:28 -04:00
Tyler Goodlet 20c660faa7 Add timeout on spawn error msg check 2022-08-02 12:17:28 -04:00
Tyler Goodlet 1d4d55f5cd Increase verbosity in ci tests for now 2022-08-02 12:17:28 -04:00
Tyler Goodlet c0cd99e374 Timeout on arbiter ping, avoid TCP SYN hangs in CI? 2022-08-02 12:17:28 -04:00
Tyler Goodlet a4538a3d84 Drop ctlc tests on Py3.9...
After many tries I just don't think it's worth it to make the tests work
since the repl UX in `pdbpp` is so unreliable in the latest release and
honestly we're trying to go 3.10+ ASAP.

Further,
- entirely drop the pattern matching inside the `do_ctlc()` for now.
- add a `subactor_error` parametrization that catches a case that
  previously caused a hang (when you use 'next' immediately after the
  first crash/debug lock); the fix was pushed just before this commit.
2022-08-02 12:17:28 -04:00
Tyler Goodlet b01daa5319 Factor lock-state release logic into helper
Factor out the common logic to both remove our custom SIGINT handler
and signal the actor-global event that pdb is complete. Call this
whenever we exit a post mortem call and thus any time some rpc task
gets debugged inside `._actor._invoke()`.

Further, we have to manually print the REPL prompt on 3.9 for some wack
reason, so stick a version guard in the sigint handler for that..
2022-08-02 12:17:28 -04:00
Tyler Goodlet bd362a05f0 Run release hook around `next` repl commands as well 2022-08-02 12:17:28 -04:00
Tyler Goodlet cb0c47c42a Try disabling prompt expect in ctrlc cases 2022-08-02 12:17:28 -04:00
Tyler Goodlet 808d7ae2c6 Add timeout guard around caller side context open 2022-08-02 12:17:28 -04:00
Tyler Goodlet b21f2e16ad Always consider the debugger when exiting contexts
When in an uncertain teardown state and in debug mode a context can be
popped from actor runtime before a child finished debugging (the case
when the parent is tearing down but the child hasn't closed/completed
its tty lock IPC exit phase) and the child sends the "stop" message to
unlock the debugger but it's ignored bc the parent has already dropped
the ctx. Instead we call `._debug.maybe_wait_for_debugger()` before these
context removals to avoid the root getting stuck thinking the lock was
never released.

Further, add special `Actor._cancel_task()` handling code inside
`_invoke()` which continues to execute the method despite the IPC
channel to the caller being broken and thus avoiding potential hangs due
to a target (child) actor task remaining alive.
2022-08-02 12:17:28 -04:00
Tyler Goodlet 4779badd96 Add before assert helper and print console bytes on fail 2022-08-02 12:17:28 -04:00
Tyler Goodlet 6bdcbdb96f Do child decode on `do_ctlc` exit? 2022-08-02 12:17:28 -04:00
Tyler Goodlet adbebd3f06 Add ctl-c to remaining tests, only expect prompt in non-CI 2022-08-02 12:17:28 -04:00
Tyler Goodlet a2e90194bc Add ctl-c case to `subactor_breakpoint` example test 2022-08-02 12:17:28 -04:00
Tyler Goodlet ba7b355d9c Add note about default behaviour of `fancycompleter` 2022-08-02 12:17:28 -04:00
Tyler Goodlet 617d57dc35 Disable ctl-c prompt checks again 2022-08-02 12:17:28 -04:00
Tyler Goodlet dadd5e6148 Add back prompt expect via flag 2022-08-02 12:17:28 -04:00
Tyler Goodlet a72350118c Test: drop expect prompt 2022-08-02 12:17:28 -04:00
Tyler Goodlet ef8dc0204c Just drop all longlisting for now and leave comments 2022-08-02 12:17:28 -04:00
Tyler Goodlet a101971027 Go back to original longlist code 2022-08-02 12:17:28 -04:00
Tyler Goodlet 835836123b Just don't call longlist on 3.10+ for now 2022-08-02 12:17:28 -04:00
Tyler Goodlet 70ad0f6b8e Add longer delays around ctl-c loop, don't expect longlist 2022-08-02 12:17:28 -04:00
Tyler Goodlet 56b30a9a53 Add sleep around ctl-c iteration loop 2022-08-02 12:17:27 -04:00
Tyler Goodlet 925d5c1ceb Pin to specific `pdbpp` master commit 2022-08-02 12:17:27 -04:00
Tyler Goodlet b9eb601265 General typing fixes for `mypy` 2022-08-02 12:17:27 -04:00
Tyler Goodlet 4dcc21234e Only call `.poll()` if a method on the spawn backend 2022-08-02 12:17:27 -04:00
Tyler Goodlet 64909e676e Fix loglevel in subactor test; actually pass the level XD 2022-08-02 12:17:27 -04:00
Tyler Goodlet 19fb77f698 Pin to `trio >= 0.20` 2022-08-02 12:17:27 -04:00
Tyler Goodlet 8b9f342eef Port to new `.lowlevel.open_process()` API 2022-08-02 12:17:27 -04:00
Tyler Goodlet bd7d507153 Guard against `asyncio` cancelled logged to console 2022-08-02 12:17:16 -04:00
Tyler Goodlet 9bc38cbf04 Add slight delay 2nd ctlc round.. 2022-08-02 12:17:06 -04:00
Tyler Goodlet a90ca4b384 Call longlist normally when on py < 3.10 2022-08-02 12:17:06 -04:00
Tyler Goodlet d0dcd55f47 Only report disconnected actors if proc is still alive? 2022-08-02 12:17:06 -04:00
Tyler Goodlet 4e08605b0d Only do `pdbpp` from `git` install on 3.10+ 2022-08-02 12:17:06 -04:00
Tyler Goodlet 519f4c300b I dunno, seems like `breakpoint()` needs this? 2022-08-02 12:17:06 -04:00
Tyler Goodlet 56c19093bb Add basic module-not-found when opening a ctx eg. 2022-08-02 12:17:06 -04:00
Tyler Goodlet ff3f5959e9 Always enable debug level logging if mode enabled 2022-08-02 12:16:58 -04:00
Tyler Goodlet abb00531d3 Add help msg for non `__main__` modules as well 2022-08-02 12:16:58 -04:00
Tyler Goodlet 439d320a25 Add basic ctl-c testing cases to suite 2022-08-02 12:16:58 -04:00
Tyler Goodlet 18c525d2f1 Hack around double long list print issue..
See https://github.com/pdbpp/pdbpp/issues/496
2022-08-02 12:16:58 -04:00
Tyler Goodlet 201c026284 Show full KBI trace for help with CI hangs 2022-08-02 12:16:58 -04:00
Tyler Goodlet 2a61aa099b Move pydantic-click hang example to new dir, skip in test suite 2022-08-02 12:16:58 -04:00
Tyler Goodlet e2453fd3da Add spaces before values in log msg 2022-08-02 12:16:58 -04:00
Tyler Goodlet b29def8b5d Add runtime level msg around channel draining 2022-08-02 12:16:58 -04:00
Tyler Goodlet f07e9dbb2f Always undo SIGINT overrides, cancel detached children
Ensure that even when `pdb` resumption methods are called during a crash
where `trio`'s runtime has already terminated (eg. `Event.set()` will
raise) we always revert our sigint handler to the original. Further
inside the handler if we hit a case where a child is in debug and
(thinks it) has the global pdb lock, if it has no IPC connection to
a parent, simply presume tty sync-coordination is now lost and cancel
the child immediately.
2022-08-02 12:16:49 -04:00
Tyler Goodlet 2f5a6049a4 Readme formatting tweaks 2022-07-27 11:40:02 -04:00
Tyler Goodlet 418e74eee7 Pin to `pdbpp` upstream master, 3.10 problem?
See issues:
- https://github.com/pdbpp/pdbpp/issues/480
- https://github.com/pdbpp/pdbpp/pull/482
2022-07-27 11:40:02 -04:00
Tyler Goodlet c7035be2fc Tolerate double `.remove()`s of stream on portal teardowns 2022-07-27 11:40:02 -04:00
Tyler Goodlet deaca7d6cc Always propagate SIGINT when no locking peer found
A hopefully significant fix here is to always avoid suppressing a SIGINT
when the root actor can not detect an active IPC connections (via
a connected channel) to the supposed debug lock holding actor. In that
case it is most likely that the actor has either terminated or has lost
its connection for debugger control and there is no way the root can
verify the lock is in use; thus we choose to allow KBI cancellation.

Drop the (by comment) `try`-`finally` block in
`_hijack_stdin_for_child()` around the `_acquire_debug_lock()` call
since all that logic should now be handled internal to that locking
manager. Try to catch a weird error around the `.do_longlist()` method
call that seems to sometimes break on py3.10 and latest `pdbpp`.
2022-07-27 11:40:02 -04:00
Tyler Goodlet d47d0e7c37 Always call pdb hook even if tty locking fails 2022-07-27 11:40:02 -04:00
Tyler Goodlet 0062c96a3c Log cancels with appropriate level 2022-07-27 11:40:02 -04:00
Tyler Goodlet 4be13b7387 Just warn on IPC breaks 2022-07-27 11:40:02 -04:00
Tyler Goodlet 7bb5addd4c Only warn on `trio.BrokenResourceError`s from `_invoke()` 2022-07-27 11:40:02 -04:00
Tyler Goodlet 4fd924cfd2 Make example a subpkg for `python -m <mod>` testing 2022-07-27 11:40:02 -04:00
Tyler Goodlet fe0fd1a1c1 Add example that triggers bug #302 2022-07-27 11:40:02 -04:00
Tyler Goodlet dd23e78de1 Add back in async gen loop 2022-07-27 11:40:02 -04:00
Tyler Goodlet 89b44f8163 Pre-declare disconnected flag 2022-07-27 11:40:02 -04:00
Tyler Goodlet 2819b6a5b2 Avoid attr error XD 2022-07-27 11:40:02 -04:00
Tyler Goodlet f2671ed026 Type annot updates 2022-07-27 11:40:02 -04:00
Tyler Goodlet 41924c86a6 Drop unneeded backframe traceback hide annotation 2022-07-27 11:40:02 -04:00
Tyler Goodlet 206c7c0720 Make `Actor._process_messages()` report disconnects
The method now returns a `bool` which flags whether the transport died
to the caller and allows for reporting a disconnect in the
channel-transport handler task. This is something a user will normally
want to know about on the caller side especially after seeing
a traceback from the peer (if in tree) on console.
2022-07-27 11:40:02 -04:00
Tyler Goodlet bf0ac3116c Only cancel/get-result from a ctx if transport is up
There's no point in sending a cancel message to the remote linked task
and especially no reason to block waiting on a result from that task if
the transport layer is detected to be disconnected. We expect that the
transport shouldn't go down at the layer of the message loop
(reconnection logic should be handled in the transport layer itself) so
if we detect the channel is not connected we don't bother requesting
cancels nor waiting on a final result message.

Why?

- if the connection goes down in error the caller side won't have a way
  to know "how long" it should block to wait for a cancel ack or result
  and causes a potential hang that may require an additional ctrl-c from
  the user especially if using the debugger or if the traceback is not
  seen on console.
- obviously there's no point in waiting for messages when there's no
  transport to deliver them XD

Further, add some more detailed cancel logging detailing the task and
actor ids.
2022-07-27 11:40:02 -04:00
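The guard, boiled down (assuming some transport-liveness predicate like `chan.connected()`; illustrative only, not the message-loop's actual code):

```python
async def maybe_cancel_and_collect(ctx, chan):
    if not chan.connected():
        # transport is down: a cancel msg can't be delivered and any
        # result wait would hang until a user ctrl-c, so skip both
        return None
    await ctx.cancel()
    return await ctx.result()
```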
Tyler Goodlet bb732cefd0 Drop high log level in ctx example 2022-07-27 11:40:02 -04:00
Tyler Goodlet 74b819a857 Typing fixes, simplify `_set_trace()` 2022-07-27 11:40:02 -04:00
Tyler Goodlet 8892204c84 Add notes around py3.10 stdlib bug from `pdb++`
There's a bug that's triggered in the stdlib without latest `pdb++`
installed; add a note for that.

Further inside `wait_for_parent_stdin_hijack()` don't `.started()` until
the inter-actor stream has been opened to avoid races when debugging this
`._debug.py` module (at the least) since we usually don't want the
spawning (parent) task to resume until we know for sure the tty lock has
been acquired. Also, drop the random checkpoint we had inside
`_breakpoint()`, not sure it was actually adding anything useful since
we're (mostly) carefully shielded throughout this func.
2022-07-27 11:40:02 -04:00
Tyler Goodlet 8f4bbf1cbf Add and use a pdb instance factory 2022-07-27 11:40:02 -04:00
Tyler Goodlet 21dccb2e79 A `.open_context()` example that causes a hang!
Finally! I think this may be the root issue we've been seeing in
production in a client project.

No idea yet why this is happening but the fault-causing sequence seems
to be:
- `.open_context()` in a child actor
- enter the debugger via `tractor.breakpoint()`
- continue from that entry via `c` command in REPL
- raise an error just after inside the context task's body

Looking at logging it appears as though the child thinks it has the tty
but no input is accepted on the REPL and a further `ctrl-c` results in
some teardown but also a further hang where both parent and child become
unresponsive..
2022-07-27 11:40:02 -04:00
Tyler Goodlet aea8f63bae Drop all the `@cm.__exit__()` override attempts..
None of it worked (you still will see `.__exit__()` frames on debugger
entry - you'd think this would have been solved by now but, shrug) so
instead wrap the debugger entry-point in a `try:` and put the SIGINT
handler restoration inside `MultiActorPdb` teardown hooks.

This seems to restore the UX as it was prior but with also giving the
desired SIGINT override handler behaviour.
2022-07-27 11:40:02 -04:00
Tyler Goodlet 7964a9f6f8 Try overriding `_GeneratorContextManager.__exit__()`; didn't work..
Using either of `@pdb.hideframe` or `__tracebackhide__` on stdlib
methods doesn't seem to work either.. This all seems to have something
to do with async generator usage I think?
2022-07-27 11:40:02 -04:00
Tyler Goodlet 99c4319940 Fix example name typo 2022-07-27 11:40:02 -04:00
Tyler Goodlet e5195264a1 Handle a context cancel? Might be a noop 2022-07-27 11:40:02 -04:00
Tyler Goodlet 42f9d10252 Add a pre-started breakpoint example 2022-07-27 11:40:02 -04:00
Tyler Goodlet 345573e602 Make `mypy` happy 2022-07-27 11:40:02 -04:00
Tyler Goodlet 4e60c17375 Refine the handler for child vs. root cases
This gets very close to avoiding any possible hangs to do with tty
locking and SIGINT handling minus a special case that will be detailed
below.

Summary of implementation changes:

- convert `_mk_pdb()` -> `with _open_pdb() as pdb:` which implicitly
  handles the `bdb.BdbQuit` case such that debugger teardown hooks are
  always called.
- rename the handler to `shield_sigint()` and handle a variety of new
  cases:
  * the root is in debug but hasn't been cancelled -> call
    `Actor.cancel_soon()`
  * the root is in debug but *has* been cancelled (`Actor.cancel_soon()`
    already called) -> raise KBI
  * a child is in debug *and* has a task locking the debugger -> ignore
    SIGINT in child *and* the root actor.
- if the debugger instance is provided to the handler at acquire time,
  on SIGINT handling completion re-print the last pdb++ REPL output so
  that the user realizes they are still actively in debug.
- ignore the unlock case where a race condition of "no task" holding the
  lock causes the `RuntimeError` normally associated with the "wrong
  task" doing so (not sure if this is a `trio` bug?).
- change debug logs to runtime level.

Unhandled case(s):

- a child is maybe in debug mode but does not itself have any task using
  the debugger.
    * ToDo: we need a way to decide what to do with
      "intermediate" child actors who themselves either are not in
      `debug_mode=True` but have children who *are* such that a SIGINT
      won't cause cancellation of that child-as-parent-of-another-child
      **iff** any of their children are in debug mode.
2022-07-27 11:40:02 -04:00
Tyler Goodlet 6b7b58346f (facepalm) Reraise `BdbQuit` and discard ownerless lock releases 2022-07-27 11:40:02 -04:00
Tyler Goodlet 3cac323421 Add WIP while-debugger-active SIGINT ignore handler 2022-07-27 11:40:02 -04:00
goodboy 4902e184e9
Merge pull request #318 from goodboy/aio_error_propagation
Add context test that opens an inter-task-channel that errors
2022-07-15 12:42:19 -04:00
Tyler Goodlet 05790a20c1 Slight lint fixes 2022-07-15 11:18:48 -04:00
Tyler Goodlet 565c603300 Add nooz 2022-07-15 11:17:57 -04:00
Tyler Goodlet f0d78e1a6e Use local task ref, fixes `mypy` 2022-07-15 10:39:49 -04:00
Tyler Goodlet ce01f6b21c Increase timeout for CI/windows 2022-07-14 20:44:10 -04:00
Tyler Goodlet 0906559ed9 Drop manual stack construction, fix attr typo 2022-07-14 20:43:17 -04:00
Tyler Goodlet 38d03858d7 Fix `asyncio`-task-sync and error propagation
This fixes a previously undetected bug where if an
`.open_channel_from()` spawned task errored the error would not be
propagated to the `trio` side and instead would fail silently with
a console log error. What was most odd is that it only seems easy to
trigger when you put a slight task sleep before the error is raised
(:eyeroll:). This patch adds a few things to address this and just in
general improve inter-task lifetime syncing:

- add `LinkedTaskChannel._trio_exited: bool` a flag set from the `trio`
  side when the channel block exits.
- add a `wait_on_aio_task: bool` flag to `translate_aio_errors` which
  toggles whether to wait the `asyncio` task termination event on exit.
- cancel the `asyncio` task if the trio side has ended, when
  `._trio_exited == True`.
- always close the `trio` mem channel when the task exits such that
  the `asyncio` side can error on any next `.send()` call.
2022-07-14 16:35:41 -04:00
Tyler Goodlet 98de2fab31 Add context test that opens an inter-task-channel that errors 2022-07-14 16:13:12 -04:00
goodboy 80121ed211
Merge pull request #317 from goodboy/drop_msgpack
Drop `msgpack`
2022-07-12 13:31:45 -04:00
Tyler Goodlet 41983edc43 Use `str` | `bytes` union for typing msg dump 2022-07-12 11:59:11 -04:00
Tyler Goodlet 5168700fbf Tolerate non-decode-able bytes 2022-07-12 11:55:55 -04:00
Tyler Goodlet 673c4a8c66 Decode bytes prior to log msg 2022-07-12 11:55:55 -04:00
Tyler Goodlet 932b841176 Allow up to 4 `msgspec` decode failures 2022-07-12 11:55:55 -04:00
Tyler Goodlet f594f1bdda Handle a connection reset on `msgspec` transport 2022-07-12 11:55:55 -04:00
Tyler Goodlet 53e3648eca Readme bump 2022-07-12 11:52:42 -04:00
Tyler Goodlet fc36503f4f Add nooz file 2022-07-12 11:43:10 -04:00
Tyler Goodlet 4e7ab54452 Appease `mypy` 2022-07-12 11:22:30 -04:00
goodboy 86d020d309
Merge pull request #316 from goodboy/310_windows
Try windows CI on py 3.10
2022-07-12 10:53:06 -04:00
Tyler Goodlet bb3f35cdd0 Drop `msgspec` specific CI jobs 2022-07-12 10:37:13 -04:00
Tyler Goodlet f94b7cd991 Drop `msgpack` lib and use `msgspec` for transport 2022-07-12 10:37:13 -04:00
Tyler Goodlet f6af5c7bf8 Drop `msgpack` dep, ensure `msgspec` as hard dep 2022-07-12 10:37:09 -04:00
Tyler Goodlet 9740a585d3 Add nooz for win on py3.10 2022-07-12 10:24:44 -04:00
Tyler Goodlet b700dc34a8 Use `pyreadline3` on windows for py3.10 2022-07-12 10:12:03 -04:00
Tyler Goodlet 9bc1c6f385 Try windows CI on py 3.10 2022-07-11 20:15:35 -04:00
goodboy f4973e90e9
Merge pull request #314 from goodboy/ci_sdist_install
Add an sdist install job
2022-07-11 20:13:24 -04:00
Tyler Goodlet 780e3dd13d Include ./docs/README.rst in src dist 2022-07-11 14:25:26 -04:00
Tyler Goodlet e0419b24ec Add an sdist install job
This should hopefully catch issues like,
https://github.com/goodboy/tractor/issues/293
2022-07-11 14:22:22 -04:00
goodboy 71f19f217d
Merge pull request #305 from goodboy/name_query
Add `tractor.query_actor()` an addr looker-upper
2022-04-13 09:19:26 -04:00
Tyler Goodlet 8901272854 Fix typing 2022-04-13 08:20:53 -04:00
Tyler Goodlet 7c151bed48 Add nooz 2022-04-13 08:18:11 -04:00
Tyler Goodlet 80897a8f2b Add `tractor.query_actor()` an addr looker-upper
Sometimes it's handy to just have a non-`Portal` yielding way
to figure out if a "service" actor is up, so add this discovery
helper for that. We'll prolly just leave it undocumented for
now until we figure out a longer-term/better discovery system.
2022-04-13 07:50:42 -04:00
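A hedged usage sketch (the exact signature and return shape of the helper are assumed from the description above, not confirmed by this log):

```python
import trio
import tractor

async def main():
    # discovery built-ins need a running runtime (see the "no arbiter"
    # test fix elsewhere in this log)
    async with tractor.open_root_actor():
        async with tractor.query_actor('some_service') as sockaddr:
            if sockaddr:
                print(f'"some_service" is up at {sockaddr}')
            else:
                print('"some_service" not found in the registry')

trio.run(main)
```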
goodboy 62983684d1
Merge pull request #308 from goodboy/sort_subs_results_infected_aio
Sort `.subscribe()` results before comparison in test
2022-04-12 20:06:55 -04:00
Tyler Goodlet 1c63bb6130 Sort fan out results before comparison in test 2022-04-12 19:49:36 -04:00
goodboy bfe99f29b8
Merge pull request #304 from goodboy/aio_explicit_task_cancels
`LinkedTaskChannel.subscribe()`, explicit `asyncio` task cancel logging, `test_trioisms.py`
2022-04-12 17:27:29 -04:00
Tyler Goodlet 9c27858aaf WIP prints to debug frickin windows 2022-04-12 16:48:50 -04:00
Tyler Goodlet 597ae4b690 Add nooz file 2022-04-12 15:59:33 -04:00
Tyler Goodlet fa354ffe2b Handle not all values pulled case 2022-04-12 15:51:06 -04:00
Tyler Goodlet 333fad8819 Facepalm: join nursery first to avoid channel-closed-too-early 2022-04-12 15:06:35 -04:00
Tyler Goodlet 90593611bb Add test for `LinkedTaskChannel.subscribe()` fanout feature 2022-04-12 15:06:35 -04:00
Tyler Goodlet 9c43bb28f1 Add a new "trioisms" test mod for tracking `trio` wishlist behaviour 2022-04-12 13:05:56 -04:00
Tyler Goodlet e45251db56 Simplify to form submitted to njs 2022-04-12 13:05:26 -04:00
Tyler Goodlet faf751acac WIP reproduce deadlock issue during error from piker 2022-04-12 13:04:46 -04:00
Tyler Goodlet 20d281f619 Run `mypy` on 3.10 2022-04-12 12:53:12 -04:00
Tyler Goodlet f3606d5bd8 Type fixes 2022-04-12 11:48:32 -04:00
Tyler Goodlet 032e14e326 Update new license info in setup script 2022-04-12 11:42:44 -04:00
Tyler Goodlet c322a193f2 Make `LinkedTaskChannel` trio-task-broadcastable with `.subscribe()` 2022-04-12 11:42:44 -04:00
Tyler Goodlet 46963c2e63 Don't handle `GeneratorExit` on `asyncio` tasks 2022-04-12 11:42:44 -04:00
Tyler Goodlet 9b77b8c9ee Add more explicit `asyncio` task error logging
When an `asyncio` side task errors or is cancelled we now explicitly
report the traceback and task name if possible as well as the source
reason for the error (some come from the `trio` side).

Further, properly set any `trio` side exception (after unwrapping it
from the `outcome.Error`) on the future that runs the `trio` guest run.
2022-04-12 11:42:44 -04:00
Tyler Goodlet 13c8300226 Add a sub-actor managed service nursery test scenario 2022-04-12 11:42:44 -04:00
goodboy 1109d96263
Merge pull request #303 from goodboy/fence_mp
Fence mp
2022-04-12 10:13:57 -04:00
Tyler Goodlet 65b4bc8888 Add misc nooz file 2022-04-12 08:35:13 -04:00
Tyler Goodlet bef9946f91 Allow re-running jobs from web UI manually? 2022-04-11 17:37:06 -04:00
Tyler Goodlet c30cece37a Fix one missing import/ref 2022-02-17 13:03:37 -05:00
Tyler Goodlet 509082c935 Port to new `msgspec` error type 2022-02-17 11:55:26 -05:00
Tyler Goodlet 75bb1added Avoid importing mp for as long as possible 2022-02-17 11:55:26 -05:00
goodboy 6e5590dad6
Merge pull request #300 from goodboy/msgpack_lists_by_default
Use lists by default like `msgspec`, update to latest `msgspec`  and `msgpack` releases
2022-02-15 09:08:20 -05:00
Tyler Goodlet 76a0492028 Fix type annot 2022-02-15 08:52:04 -05:00
Tyler Goodlet 4eab4a0213 Type fix 2022-02-15 08:51:25 -05:00
Tyler Goodlet 0edc6a26bc Go back to strict map keys 2022-02-15 08:48:43 -05:00
Tyler Goodlet c5acc3b969 Pack tuple keys as . delim strs in registry tests 2022-02-15 08:48:07 -05:00
Tyler Goodlet 17e195aacf They renamed to `msgpack` and the version is 1.0.3 2022-02-14 16:03:54 -05:00
Tyler Goodlet c65756ed80 Add nooz 2022-02-14 16:03:10 -05:00
Tyler Goodlet 927decc88d Pin to latest `msgspec` version 2022-02-14 14:14:05 -05:00
Tyler Goodlet 17bfa120cc Port to msgspec `0.4.0` imports 2022-02-14 14:05:55 -05:00
Tyler Goodlet 77ddc073e8 Use lists by default like `msgspec` 2022-02-09 10:07:33 -05:00
goodboy 26bebc42b7
Merge pull request #295 from goodboy/nspaths
`NamespacePath`: a message compatible "object reference" type
2022-01-30 12:40:05 -05:00
Tyler Goodlet 87de28fd88 Slight doc string update 2022-01-30 12:21:41 -05:00
Tyler Goodlet 56b29c27de Add msg serialization coding todo resources list 2022-01-30 12:19:21 -05:00
Tyler Goodlet adf9a1d0aa Add nooz 2022-01-30 12:17:32 -05:00
Tyler Goodlet 25a27e780d Add todo resources for eventual capability-based module filtering 2022-01-30 11:28:10 -05:00
Tyler Goodlet c265f3f94e Move namespace path type into `msg` mod 2022-01-30 11:27:34 -05:00
Tyler Goodlet 2900ceb003 Not all objects have a `.__name__` 2022-01-30 11:26:34 -05:00
Tyler Goodlet b6ae77b5ac Use `pkgutil.resolve_name()` and a `str` subtype
Python 3.9's new object resolver + a `str` is much simpler than mucking
with tuples (and easier to serialize). Include a `.to_tuple()` formatter
since we still are passing the module namespace and function name
separately inside the runtime's message format but in theory we might be
able to simplify this depending on how we would change the support for
`enable_modules:list[str]` in the spawn API.

Thanks to @Fuyukai for pointing out `resolve_name()` which I didn't know
about before!
2022-01-30 11:26:34 -05:00
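A minimal sketch of that `str` subtype (the real `NamespacePath` lives in the `msg` module per the commit above and may differ in detail):

```python
from pkgutil import resolve_name  # new in Python 3.9

class NamespacePath(str):
    '''A message-compatible "object reference", e.g. 'math:sqrt'.'''

    def load_ref(self):
        return resolve_name(self)

    def to_tuple(self) -> tuple[str, str]:
        modpath, _, name = self.partition(':')
        return modpath, name

nsp = NamespacePath('math:sqrt')
print(nsp.load_ref()(16))  # -> 4.0
print(nsp.to_tuple())      # -> ('math', 'sqrt')
```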
Tyler Goodlet 949cb2c9fe First draft "namespace path" named tuple; probably will discard 2022-01-30 11:26:34 -05:00
goodboy 094206ee9d
Merge pull request #298 from goodboy/experimental_subpkg
Add `tractor.experimental` subpkg
2022-01-29 19:19:50 -05:00
Tyler Goodlet debbf64d58 Add nooz 2022-01-29 17:58:58 -05:00
Tyler Goodlet 070e6ba459 Add `.experimental` subpkg to setup.py 2022-01-29 14:30:39 -05:00
Tyler Goodlet 7e004c0688 Add back blank `msg.py` 2022-01-29 14:22:15 -05:00
Tyler Goodlet ffe88de53b Better idea: start a `tractor.experimental` subpkg 2022-01-29 14:03:55 -05:00
Tyler Goodlet d29a915d48 Update mod doc string 2022-01-29 14:02:04 -05:00
Tyler Goodlet be87caa99b Move legacy pubsub stuff from `msg.py` to trionics mod 2022-01-29 14:02:04 -05:00
goodboy 0b51ebfe11
Merge pull request #284 from goodboy/maybe_cancel_the_cancel_
Cancel the `.cancel_actor()` request on proc death
2022-01-21 14:22:48 -05:00
Tyler Goodlet 4bf7992200 Bump to alpha 5 dev 2022-01-21 13:05:26 -05:00
Tyler Goodlet 41296448e8 Add nooz 2022-01-21 12:49:26 -05:00
Tyler Goodlet 9650055519 Use `.exitcode` which is poll + error handling 2022-01-21 12:49:26 -05:00
Tyler Goodlet 532974fb90 Drop leftover print 2022-01-21 12:49:26 -05:00
Tyler Goodlet b1d72b77c9 Patch mp procs with a `.poll()`
Not sure why they don't already expose this from the `Popen` backends
but, k.
2022-01-21 12:49:26 -05:00
Tyler Goodlet a2171c7e71 Cancel the `.cancel_actor()` request on proc death
Adjust the `soft_wait()` strategy to avoid sending needless cancel
requests if it is known that a child process is already terminated or
does so before the cancel request times out. This should be no slower
and should avoid needless waits on either closure-in-progress or already
closed channels.

Basic strategy is,
- request child actor to cancel
- if process termination is detected, cancel the cancel
- if the process is still alive after a cancel request timeout warn the
  user and yield back to the hard reap handling
2022-01-21 12:49:26 -05:00
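That strategy as a toy `trio` sketch (names are illustrative; the real logic lives in `soft_wait()`):

```python
import trio

async def soft_wait_sketch(
    proc_wait,            # async fn: waits for child proc termination
    send_cancel_request,  # async fn: asks the child actor to cancel
    timeout: float = 3.0,
):
    async with trio.open_nursery() as n:

        async def cancel_with_timeout():
            with trio.move_on_after(timeout) as cs:
                await send_cancel_request()
            if cs.cancelled_caught:
                # warn the user; hard reap handling takes over
                print('cancel request timed out; falling back to hard reap')

        n.start_soon(cancel_with_timeout)
        await proc_wait()
        # proc terminated: cancel the (possibly still pending) cancel
        n.cancel_scope.cancel()
```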
goodboy 30986d6b64
Merge pull request #292 from goodboy/moar_timeoutz
Moar timeoutz
2022-01-21 12:49:01 -05:00
Tyler Goodlet 9ab04b1f6b One more increase for py3.10 2022-01-21 12:20:06 -05:00
Tyler Goodlet b3ff4b7804 Increase some timeouts for windows 2022-01-21 12:20:06 -05:00
goodboy d27bdbd40e
Merge pull request #291 from goodboy/drop_old_nooz_files
Drop old fragments that `towncrier` somehow missed
2022-01-21 12:17:46 -05:00
Tyler Goodlet a95b0dc05e Drop old fragments that `towncrier` somehow missed 2022-01-21 12:09:16 -05:00
goodboy 909c996346
Merge pull request #289 from houtenjack/remove_asyncio_todo
Remove asyncio from TODOs
2022-01-19 16:26:10 -05:00
Giacomo Camporini adc8e5c009 Revert "Added asyncio with trio guest mode feature in feature list"
This reverts commit 5eed85d5dd.
2022-01-19 09:24:08 +01:00
Giacomo Camporini 5eed85d5dd Added asyncio with trio guest mode feature in feature list 2022-01-18 14:25:08 +01:00
Giacomo Camporini 137fed790f Remove asyncio from TODOs
Fixes #286
2022-01-05 11:40:06 +01:00
goodboy 96123f21d2
Merge pull request #285 from overclockworked64/fix-min-version
fix: bump min version
2021-12-26 20:56:50 -05:00
overclockworked64 338aa2b74a
fix: bump min version 2021-12-26 06:37:18 +01:00
goodboy 89551ef371
Merge pull request #282 from goodboy/win_ci_timeout
Lengthen win CI run to 12m
2021-12-20 09:39:44 -05:00
Tyler Goodlet 884bdf2d57 Lengthen win CI run to 12m 2021-12-20 09:25:38 -05:00
goodboy 343b2803b5
Merge pull request #280 from goodboy/alpha4
Alpha4: infect all the `asyncio`s
2021-12-18 20:15:28 -05:00
Tyler Goodlet cdef579d22 Update release tips 2021-12-18 16:22:13 -05:00
Tyler Goodlet f9400b4beb Bump date, fix bullet lists, other typos. 2021-12-18 15:50:01 -05:00
Tyler Goodlet 45cdf25f14 Add re-license bullet, fix pluggy links XD 2021-12-17 12:48:58 -05:00
Tyler Goodlet 0a5818fe05 Gen an alpha4 changelog using modified `pluggy` towncrier template 2021-12-17 12:21:11 -05:00
Tyler Goodlet fd6e18eba4 Bump version to alpha4, change towncrier news dir name 2021-12-17 11:35:37 -05:00
goodboy bbcdbaaba4
Merge pull request #121 from goodboy/infect_asyncio
Infect `asyncio`
2021-12-17 11:02:01 -05:00
Tyler Goodlet 9b4cdb00e6 Add agpl header 2021-12-17 09:39:30 -05:00
Tyler Goodlet 9b14d82086 Add nooz 2021-12-17 09:39:30 -05:00
Tyler Goodlet 4d1a48a47b Link to inter-loop channel issue in readme 2021-12-17 09:39:30 -05:00
Tyler Goodlet 4c0cfa68ac Link to SC on wikipedia 2021-12-17 09:39:28 -05:00
Tyler Goodlet 73d252e09e Emphasize `asyncio` only with sleeps 2021-12-17 09:38:54 -05:00
Tyler Goodlet 1fdcaf36f3 Not enough time for new asyncio tests? 2021-12-17 09:38:54 -05:00
Tyler Goodlet 6952c7defa Add features bullet, slip in a guille-ism 2021-12-17 09:38:52 -05:00
Tyler Goodlet 7237d696ce Add asyncio echo server ex to readme; fix cluster section 2021-12-17 09:38:05 -05:00
Tyler Goodlet b463841019 Add infected `asyncio` echo server example 2021-12-17 09:38:04 -05:00
Tyler Goodlet d65912e1ae Increase kbi delay in remote cancel test 2021-12-17 09:38:04 -05:00
Tyler Goodlet 24078f2d6e More doc string style tweaks 2021-12-17 09:38:04 -05:00
Tyler Goodlet 56cc98375e Return channel type from `_run_asyncio_task()`
Better encapsulate all the mem-chan, Queue, sync-primitives inside our
linked task channel in order to avoid `mypy`'s complaints about monkey
patching. This also sets footing for adding an `asyncio`-side channel
API that can be used more like the `trio`-side API.
2021-12-17 09:38:04 -05:00
Tyler Goodlet 9a2de90de6 Add mid stream echoserver "bail" cases 2021-12-17 09:38:04 -05:00
Tyler Goodlet 2b9b29eb71 Add an asyncio echo server test 2021-12-17 09:38:04 -05:00
Tyler Goodlet b69412a903 Drop cancel scope from linked task channel 2021-12-17 09:38:04 -05:00
Tyler Goodlet c4b3bb354e Port tests to handle our new `asyncio` cancelled type 2021-12-17 09:38:04 -05:00
Tyler Goodlet 6803891bd7 Collect `asyncio` task exceptions to avoid warning msg 2021-12-17 09:38:04 -05:00
Tyler Goodlet 5f4094691d Re-wrap and raise `asyncio.CancelledError`
For whatever reason `trio` seems to be swallowing this exception when
raised in the `trio` task so instead wrap it in our own non-base
exception type: `AsyncioCancelled` and raise that when the `asyncio`
task cancels itself internally using `raise <err> from <src_err>` style.

Further don't bother cancelling the `trio` task (via cancel scope)
since we can just use the recv mem chan closure error as a signal
and explicitly lookup any set asyncio error.
2021-12-17 09:38:04 -05:00
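The re-wrap described above, roughly (the `AsyncioCancelled` name is from this commit; the helper is illustrative):

```python
import asyncio

class AsyncioCancelled(Exception):
    '''The `asyncio` task was cancelled; a non-`BaseException`
    wrapper so the error isn't swallowed on the `trio` side.
    '''

def translate_aio_error(aio_err: BaseException) -> None:
    # `raise <err> from <src_err>` style chaining preserves the source
    if isinstance(aio_err, asyncio.CancelledError):
        raise AsyncioCancelled('asyncio task self-cancelled') from aio_err
    raise aio_err
```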
Tyler Goodlet c48c68c0bc Flip doc strings to my preferred format 2021-12-17 09:38:04 -05:00
Tyler Goodlet ad2567dd73 Add first set of interloop streaming tests 2021-12-17 09:38:04 -05:00
Tyler Goodlet 44d0e9fc32 Add a `LinkedTaskChannel` for synced inter-loop-streaming
Wraps the pairs of underlying `trio` mem chans and the `asyncio.Queue`
with this new composite which will be delivered from `open_channel_from()`.
This allows for both sending and receiving values from the `asyncio`
task (2 way msg passing) as well as controls for cancelling or waiting on
the task.

Factor `asyncio` translation and re-raising logic into a new closure
which is run on both `trio` side error handling as well as on normal
termination to avoid missing `asyncio` errors even when `trio` task
cancellation is handled first.

Only close the `trio` mem chans on `trio` task termination *iff*
the task was spawned using `open_channel_from()`:
- on `open_channel_from()` exit, mem chan closure is the desired semantic
- on `run_task()` we normally only return a single value or error and
  if the channel is closed before the error is raised we may propagate
  a `trio.EndOfChannel` instead of the desired underlying `asyncio`
  task's error
2021-12-17 09:38:04 -05:00
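A rough shape for that composite type (field names are illustrative guesses, not the actual implementation):

```python
from dataclasses import dataclass
import asyncio
import trio

@dataclass
class LinkedTaskChannel:
    # the wrapped per-loop primitives: a queue feeding the asyncio
    # side and a mem chan pair shuttling values back to trio
    _to_aio: asyncio.Queue
    _from_aio: trio.MemoryReceiveChannel
    _trio_cs: trio.CancelScope
    _aio_task_complete: trio.Event

    async def send(self, item) -> None:
        # 2-way msg passing: push a value to the asyncio-side task
        self._to_aio.put_nowait(item)

    async def receive(self):
        # pull the next value shuttled over from the asyncio side
        return await self._from_aio.receive()

    def cancel(self) -> None:
        # controls for cancelling the linked task pair
        self._trio_cs.cancel()

    async def wait_aio_complete(self) -> None:
        await self._aio_task_complete.wait()
```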
Tyler Goodlet d27ddb7bbb Add a basic `open_channel_from()` streaming test 2021-12-17 09:38:04 -05:00
Tyler Goodlet 9bc94b5ccc Factor error translation into a ctx mngr
Pull the common `asyncio` -> `trio` error translation logic into
a common context manager and don't expect a final result to be captured
when using `open_channel_from()` since it's a manager interface and it
would be clunky to try and deliver some "final result" after exit.
2021-12-17 09:38:04 -05:00
Tyler Goodlet e6687bcdc4 Serious-ify doc string 2021-12-17 09:38:04 -05:00
Tyler Goodlet e815f766f6 Add a cancelled-from-remote-trio-task case 2021-12-17 09:38:04 -05:00
Tyler Goodlet c19123b588 Add trio-cancels-anursery-cancels-aio test 2021-12-17 09:38:04 -05:00
Tyler Goodlet 8704664719 Reverse the order for asyncio cancelleds? I dunno why 2021-12-17 09:38:04 -05:00
Tyler Goodlet 04c0eda69d Add an `asyncio`-internal cancel test
Verify that if the `asyncio` side task cancels (itself) that we raise
that `asyncio.CancelledError` on the `trio` side.  In the case where
`trio` initiated the cancel whether or not the `asyncio` side ended up
raising `CancelledError` doesn't really matter to us as long as the far
task did indeed terminate.
2021-12-17 09:38:04 -05:00
Tyler Goodlet 1114b6980e Adjust linked-loop-task tear down sequence
Close the mem chan before cancelling the `trio` task in order to ensure
we retrieve whatever error is shuttled from `asyncio` before the channel
read is potentially cancelled (previously a race?).

Handle `asyncio.CancelledError` specially such that we raise it directly
(instead of `raise aio_cancelled from other_err`) since it *is* the
source error in the case where the cancellation is `asyncio` internal.
2021-12-17 09:38:04 -05:00
Tyler Goodlet 56357242e9 Add a `Portal.cancel_actor()` test 2021-12-17 09:38:04 -05:00
Tyler Goodlet 0ab5e5cadd Fill out nursery docstring 2021-12-17 09:38:04 -05:00
Tyler Goodlet 06fa650ed0 Drop runtime logging for asyncio mode 2021-12-17 09:38:04 -05:00
Tyler Goodlet 446feff172 Clean type imports 2021-12-17 09:38:04 -05:00
Tyler Goodlet 299e4192b0 Plan asyncio test set 2021-12-17 09:38:04 -05:00
Tyler Goodlet 41eddffc2c Drop old (and deluded) "streaming" cruft 2021-12-17 09:38:04 -05:00
Tyler Goodlet 7a65165279 Facepalm, re-raise captured `asyncio` task error 2021-12-17 09:38:04 -05:00
Tyler Goodlet b376b7cd32 First draft: `.to_asyncio.open_channel_from()` 2021-12-17 09:38:04 -05:00
Tyler Goodlet c262b1a3e8 Always cancel the asyncio task? 2021-12-17 09:38:04 -05:00
Tyler Goodlet d9dac3f36c Drop old implementation cruft 2021-12-17 09:38:04 -05:00
Tyler Goodlet 325c0cdb1b Fix error propagation on asyncio streaming tasks 2021-12-17 09:38:04 -05:00
Tyler Goodlet 55e210fec6 Drop bad .close() call 2021-12-17 09:38:04 -05:00
Tyler Goodlet aa24bbc11c Proxy asyncio cancelleds as well 2021-12-17 09:38:04 -05:00
Tyler Goodlet 793bcfb7d4 Pass `infect_asyncio` flag to mp actors as well 2021-12-17 09:38:04 -05:00
Tyler Goodlet d80f8d7a39 WIP redo asyncio async gen streaming 2021-12-17 09:38:04 -05:00
Tyler Goodlet 340effae11 Add initial infected asyncio error propagation test 2021-12-17 09:38:01 -05:00
Tyler Goodlet 509ae132ec Raise any asyncio errors if in trio task on cancel 2021-12-17 09:38:01 -05:00
Tyler Goodlet 80f47dece2 Raise from asyncio error; fixes mypy 2021-12-17 09:38:01 -05:00
Tyler Goodlet 2cf87146a3 Log any asyncio error 2021-12-17 09:38:01 -05:00
Tyler Goodlet 8070b16bd0 Support asyncio actors with the trio spawner backend 2021-12-17 09:38:01 -05:00
Tyler Goodlet 1406ddc5ee Add `infect_asyncio: bool` flag to nursery methods 2021-12-17 09:37:41 -05:00
Tyler Goodlet 055788cf16 Attempt to make mypy happy.. 2021-12-17 09:19:23 -05:00
Tyler Goodlet 1825b21d2c Wow, fix all the broken async func invoking code..
Clearly this wasn't developed against a task that spawned just an async
func in `asyncio`.. Fix all that and remove a bunch of unnecessary func
layers. Add provisional support for the target receiving the `to_trio`
and `from_trio` channels and for the @tractor.stream marker.
2021-12-17 09:19:23 -05:00
Tyler Goodlet acd63d0c89 First draft "infected `asyncio` mode"
This should mostly maintain top level SC principles for any task spawned
using `tractor.to_asyncio.run()`. When the `asyncio` task completes make
sure to cancel the pertaining `trio` cancel scope and raise any error
that may have resulted. This interface uses `trio`'s "guest-mode" to run
the `asyncio` loop using a special entrypoint which is handed to Python
during process spawn.
2021-12-17 09:17:59 -05:00
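The guest-mode hookup can be sketched like so, using `trio.lowlevel.start_guest_run()` (a simplified shape, not the actual entrypoint code):

```python
import asyncio
import trio

def run_as_asyncio_guest(trio_main) -> None:
    # host an asyncio loop and run the trio runtime *inside* it
    async def aio_main() -> None:
        loop = asyncio.get_running_loop()
        done = asyncio.Event()
        outcomes: list = []

        def on_done(run_outcome) -> None:
            outcomes.append(run_outcome)
            done.set()

        trio.lowlevel.start_guest_run(
            trio_main,
            run_sync_soon_threadsafe=loop.call_soon_threadsafe,
            done_callback=on_done,
        )
        await done.wait()
        outcomes[0].unwrap()  # re-raise any error from the trio side

    asyncio.run(aio_main())
```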
goodboy cdf1f8c2f7
Merge pull request #276 from goodboy/expected_ctx_cancelled
Expected ctx cancelled should not override a source error
2021-12-17 08:08:18 -05:00
Tyler Goodlet 8eff788d2d Pin to previous `trio_typing` release 2021-12-16 19:59:10 -05:00
Tyler Goodlet 916e27eedc Adjust cancelled test to expect raised overrun error 2021-12-16 19:59:10 -05:00
Tyler Goodlet 98a830ccba Drop cancel traceback capture; don't seem to need it? 2021-12-16 19:59:10 -05:00
Tyler Goodlet 8c004c1f36 Add an explicit messaging error for reporting an illegal context transaction 2021-12-16 19:59:10 -05:00
Tyler Goodlet e2139c2bf0 Don't set `Context._error` to expected `ContextCancelled`
If one side of an inter-actor context cancels the other then that
side should always expect back a `ContextCancelled` message. However we
should not set this error in this case (where the cancel request was
sent and a `ContextCancelled` msg was received back) since it may
override some other error that caused the cancellation request to be
sent out in the first place. As an example, when a context opens another
context to a peer and some error happens which causes that second peer
context to be cancelled, we still want to propagate the original error.

Fixes the issue found in https://github.com/pikers/piker/issues/244
2021-12-16 19:59:10 -05:00
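The guard amounts to something like this sketch (attribute names are hypothetical):

```python
class ContextCancelled(Exception):
    '''Stand-in for the remote cancel-ack error type.'''

def maybe_set_remote_error(ctx, err: Exception) -> None:
    if isinstance(err, ContextCancelled) and ctx._cancel_called:
        # we sent the cancel request; this is just the expected ack,
        # so don't clobber whatever source error triggered it
        return
    ctx._error = err
```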
Tyler Goodlet 9650b010de Add a test for the real issue: error overriding
The underlying issue is actually that a nested `Context` which was
cancelled was overriding the local error that triggered that secondary
context's cancellation in the first place XD. This test catches that
case.

Relates to https://github.com/pikers/piker/issues/244
2021-12-16 19:59:10 -05:00
Tyler Goodlet 5d424e3703 Hide the key error tb on remote starting errors 2021-12-16 19:59:10 -05:00
Tyler Goodlet c38d0f826e Add an unserializable value causes error before started test 2021-12-16 19:59:10 -05:00
goodboy 4001d2c3fc
Merge pull request #257 from goodboy/context_caching
Add `maybe_open_context()` an actor wide task-resource cache
2021-12-16 19:55:14 -05:00
Tyler Goodlet 953d15b67d Add nooz 2021-12-16 18:02:03 -05:00
Tyler Goodlet da5e36bf0c Revert back to avoiding key errors on cancellation 2021-12-16 18:02:03 -05:00
Tyler Goodlet 21a9c47496 Parameterize over cache keying methods: kwargs and "key" 2021-12-16 18:02:03 -05:00
Tyler Goodlet 67dc0d014c Add basic `maybe_open_context()` caching test 2021-12-16 18:02:03 -05:00
Tyler Goodlet 9b1d8bf7b0 Of course, increase the timeout for windows.. 2021-12-16 18:02:03 -05:00
Tyler Goodlet 26394dd8df Type annot fixes 2021-12-16 18:02:03 -05:00
Tyler Goodlet 11e64426f6 Wake all sleeping consumers on bcaster closure 2021-12-16 18:02:03 -05:00
Tyler Goodlet 213447008b Add draft code for waiting on all nurseries in root 2021-12-16 18:02:03 -05:00
Tyler Goodlet f617da6ff1 Add timeout around test and prints for guidance 2021-12-16 18:02:03 -05:00
Tyler Goodlet 52627a6326 Rework interface: pass func and kwargs
After more extensive testing I realized that keying on the context
manager *instance id* isn't going to work since each entering task is
going to create a unique key XD

Instead pass the manager function as `acm_func` and optionally allow
keying the resource on the passed `kwargs` (if hashable) or the
`key:str`. Further, pass the key to the enterer task and avoid
a separate keying scheme for the manager versus the value it delivers.
Don't bother with checking and releasing the lock in the `finally:` block,
it should be an error if it's still locked.
2021-12-16 18:02:03 -05:00
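A much-simplified sketch of the reworked interface; the real version presumably handles lock/teardown edge cases this glosses over:

```python
from contextlib import asynccontextmanager
import trio

class _Cache:
    # actor-wide singleton state
    lock = trio.Lock()
    resources: dict = {}  # ctx_key -> [user_count, mngr, value]

@asynccontextmanager
async def maybe_open_context(acm_func, kwargs=None, key=None):
    '''Enter `acm_func(**kwargs)` on first use, deliver the cached
    value to all subsequent entrants, tear down on last exit.
    '''
    kwargs = kwargs or {}
    # key on the manager func plus hashable kwargs or an explicit key
    ctx_key = (acm_func, key if key is not None
               else tuple(sorted(kwargs.items())))

    async with _Cache.lock:
        entry = _Cache.resources.get(ctx_key)
        if entry is None:
            mngr = acm_func(**kwargs)
            entry = [0, mngr, await mngr.__aenter__()]
            _Cache.resources[ctx_key] = entry
        entry[0] += 1

    try:
        yield ctx_key, entry[2]
    finally:
        async with _Cache.lock:
            entry[0] -= 1
            if entry[0] == 0:
                del _Cache.resources[ctx_key]
                await entry[1].__aexit__(None, None, None)
```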
Tyler Goodlet 3826bc9972 Don't catch key errors from the yielded to scope 2021-12-16 18:02:03 -05:00
Tyler Goodlet b210278e2f Naming change `cache` -> `_Cache` 2021-12-16 18:02:03 -05:00
Tyler Goodlet 4a0252baf2 Add task-cached stream test 2021-12-16 18:02:03 -05:00
Tyler Goodlet ac22b4a875 Fix type annots in resource cacher internals 2021-12-16 18:02:03 -05:00
Tyler Goodlet 5f41dbf34f Add `maybe_open_context()` an actor wide task-resource cache 2021-12-16 18:02:03 -05:00
goodboy 2d6fbd5437
Merge pull request #278 from goodboy/end_of_channel_fixes
End of channel fixes for streams and broadcasting
2021-12-16 18:01:04 -05:00
Tyler Goodlet 325e550ff3 Add nooz 2021-12-16 17:30:18 -05:00
Tyler Goodlet b5d62909ff Pin to `mypy` 0.910
Avoids the issue noted in
https://github.com/python-trio/trio-typing/issues/50
to keep CI green.
2021-12-16 16:20:07 -05:00
Tyler Goodlet 57f2aca18c Set eoc on closure (again) 2021-12-16 16:19:15 -05:00
Tyler Goodlet 1652716574 Add timeout to streaming test 2021-12-16 16:19:09 -05:00
Tyler Goodlet f2ba961e81 Mark stream with EOC when stop message is received 2021-12-16 16:18:58 -05:00
Tyler Goodlet 79d63585b0 Add a multi-task fan out streaming test
This actually catches a lot of bugs to do with stream termination and
``MsgStream.subscribe()`` usage where the underlying stream closes from
the producer side. When this passes, the broadcaster logic will have to
ensure non-lossy fan out semantics and closure tracking.
2021-12-16 16:16:23 -05:00
Tyler Goodlet 3deb1b91e6 Wake all broadcast consumers on EOC
Without this wakeup you can have tasks which re-enter `.receive()`
and get stuck waiting on the wakeup event indefinitely. Whenever
a ``trio.EndOfChannel`` arrives we want to make sure all consumers
at least know about it and don't block. The previous behaviour was
basically a bug.

Add some state flags for tracking if the broadcaster was either
cancelled or terminated via EOC mostly for testing and debugging
purposes though this info might be useful if we decide to offer
a `.statistics()` like API in the future.
2021-12-16 16:16:14 -05:00
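The fix boils down to something like this (simplified state, illustrative names):

```python
import trio

class BroadcastState:
    def __init__(self) -> None:
        self.recv_ready: trio.Event | None = None  # set by the "leader" recv
        self.eoc: bool = False        # terminated via end-of-channel
        self.cancelled: bool = False  # terminated via cancellation

def signal_eoc(state: BroadcastState) -> None:
    # mark closure *and* wake every parked consumer so nobody sits
    # on a wakeup event that will never be set again
    state.eoc = True
    if state.recv_ready is not None:
        state.recv_ready.set()
```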
Tyler Goodlet 61e134dc5d Wake up consumers on end of channel as well 2021-12-16 16:15:54 -05:00
goodboy cfdc95fe7f
Merge pull request #275 from goodboy/agpl_commit_msg_fix
Re-license as AGPLv3
2021-12-14 23:51:30 -05:00
Tyler Goodlet 6f94ffc304 Re-license code base for distribution under AGPL
This commit obviously denotes a re-license of all applicable parts of
the code base. Acknowledgement of this change was completed in #274 by
the majority of the current set of contributors. From here henceforth
all changes will be AGPL licensed and distributed. This is purely an
effort to maintain the same copy-left policy whilst closing the
(perceived) SaaS loophole the GPL allows for. It is merely for this
loophole: to avoid code hiding by any potential "network providers" who
are attempting to use the project to make a profit without either
compensating the authors or re-distributing their changes.

I thought quite a bit about this change and can't see a reason not to
close the SaaS loophole in our current license. We still are (hard)
copy-left and I plan to keep the code base this way for a couple
reasons:

- The code base produces income/profit through parent projects and is
  demonstrably of high value.
- I believe firms should not get free lunch for the sake of
  "contributions from their employees" or "usage as a service" which
  I have found to be a dubious argument at best.
- If a firm who intends to profit from the code base wants to use it
  they can propose a secondary commercial license to purchase with the
  proceeds going to the project's authors under some form of well
  defined contract.
- Many successful projects like Qt use this model; I see no reason it
  can't work in this case until such a time as the authors feel it
  should be loosened.

There has been detailed discussion in #103 on licensing alternatives.
The main point of this AGPL change is to protect the code base for the
time being from exploitation while it grows and as we move into the next
phase of development which will include extension into the multi-host
distributed software space.
2021-12-14 23:33:27 -05:00
goodboy 56297cf25c
Merge pull request #271 from goodboy/debug_flag_per_actor
Debug flag per actor
2021-12-11 20:10:21 -05:00
Tyler Goodlet 94f098e5f7 Add nooz 2021-12-10 13:08:20 -05:00
Tyler Goodlet 949aa9c405 Lol. should probably push the example code... 2021-12-10 12:48:05 -05:00
Tyler Goodlet a38a983225 Increase debugger poll delay back to prior value
If we make it too fast a nursery with debug mode children can cancel
too fast and cause some test failures. It's likely not a huge deal
anyway since the purpose of this poll/check is for human interaction
and the current delay isn't really that noticeable.

Decrease log levels in the debug module to avoid console noise when in
use. Toss in some more detailed comments around the new debugger lock
points.
2021-12-10 11:54:27 -05:00
Tyler Goodlet 4f411d6926 Add a per actor debug mode test 2021-12-09 17:53:31 -05:00
Tyler Goodlet 9bee513136 Use manual debugger-in-use flag in nursery and spawn task 2021-12-09 17:53:29 -05:00
Tyler Goodlet 5d9e3d1163 Add a manual debug mode kwarg to debugger waiter 2021-12-09 17:52:35 -05:00
Tyler Goodlet 95c52436e5 Adjust multi-actor debugger test
It turns out recent improvements have made the debugger too good
so we need to just terminate the continue loop in this test when
we finally see the "spawn error" crash out because the breakpoint
forever case will literally continue forever XD
2021-12-07 16:46:03 -05:00
Tyler Goodlet e51c0e17a2 Properly set console logging in test suite 2021-12-07 13:17:10 -05:00
Tyler Goodlet 92c6ec1882 `get_loglevel()` always returns a str 2021-12-07 13:17:00 -05:00
Tyler Goodlet 72eef2a4a1 Config debug mode log level *after* initial setup 2021-12-07 13:16:07 -05:00
Tyler Goodlet 205e254072 Make test suite use default log level 2021-12-07 13:13:40 -05:00
Tyler Goodlet 9bd5226e76 Only adjust logging in debug mode if not noisy enough already 2021-12-07 13:13:04 -05:00
Tyler Goodlet e899cc42bf Add per actor debug mode toggle 2021-12-07 13:11:06 -05:00
goodboy f7c9056419
Merge pull request #261 from goodboy/stricter_context_starting
`Context` oriented error relay and `MsgStream` overruns
2021-12-07 11:22:48 -05:00
Tyler Goodlet faaecbf810 Add nooz 2021-12-07 11:11:50 -05:00
Tyler Goodlet 703dee8a59 Add stream open before started, detailed semantics comment 2021-12-07 09:48:35 -05:00
Tyler Goodlet df59071747 Bleh cast to list for `msgpack` 2021-12-06 18:07:14 -05:00
Tyler Goodlet 4856285dee Add back broken send chan ignore block 2021-12-06 17:04:17 -05:00
Tyler Goodlet efba5229fc Move context-streaming operational tests into one mod 2021-12-06 16:45:44 -05:00
Tyler Goodlet fd6f4574ce Rename test mod 2021-12-06 16:38:27 -05:00
Tyler Goodlet 52a2b7a5ed Bump windows timeout again 2021-12-06 16:32:23 -05:00
Tyler Goodlet 63ecae70c4 Add a basic no-errors-when-backpressure stream test 2021-12-06 16:32:23 -05:00
Tyler Goodlet 4b40599c48 Fix ignore warning log message 2021-12-06 16:32:23 -05:00
Tyler Goodlet a79cdc7b44 Make cancel case expect multi-error 2021-12-06 16:32:23 -05:00
Tyler Goodlet c9132de7dc Move maybe-raise-error-msg logic into context
A context method handling all this logic makes the most sense since it
contains all the state related to whether the error should be raised in
a nursery scope or is expected to be raised by a consumer task which
reads and processes the msg directly (via a `Portal` API call). This
also makes it easy to always process remote errors even when there is no
(stream) overrun condition.
2021-12-06 16:32:23 -05:00
Tyler Goodlet 1f8e1cccbb Only pop contexts on decorated entrypoints 2021-12-06 13:48:19 -05:00
Tyler Goodlet 58805a0430 Slight delay to avoid flaky bcast race 2021-12-06 12:17:37 -05:00
Tyler Goodlet 142083d81b Don't cancel the context on overrun cases 2021-12-06 11:54:21 -05:00
Tyler Goodlet 318027ebd1 Raise stream overruns on one side never opened
A context stream overrun should normally never take place since if
a stream is opened (via ``Context.open_stream()``) backpressure is
applied on the message buffer (unless explicitly disabled by the
``backpressure=False`` flag) such that an overrun on the receiving task
should result in blocking the (remote) sender task (eventually depending
on the underlying ``MsgStream`` transport).

Here we add a special error message that reports if one side never
opened a stream and lets the user know in the overrun error message
that they may be trying to push messages to a task that isn't ready to
receive them.

Further fixes / details:
- pop any `Context` at the end of any `_invoke()` task that creates
  one and registers with the runtime.
- ignore but warn about messages received for a context that either
  no longer exists or is unknown (guarding against crashes by malicious
  packets in the latter case)
2021-12-06 11:54:21 -05:00
Tyler Goodlet b826ec8103 Better idea, enable backpressure on opened streams
Keeping it disabled on context open will help with detecting any stream
connection which was never opened on one side of the task pair.  In that
case we can report that there was an overrun **and** a stream wasn't
opened, versus throwing the standard overflow when the stream is
explicitly configured not to use backpressure.

Use `trio.Nursery._closed` to detect "closure" XD since it seems to be
the most reliable way to determine if a spawn call will trigger
a runtime error.
2021-12-06 11:54:21 -05:00
Tyler Goodlet 4ea5c9b5db Pop context on `.open_context()` exit 2021-12-06 11:54:21 -05:00
Tyler Goodlet f3432bd8fb Enable bp on clustering test 2021-12-05 20:02:55 -05:00
Tyler Goodlet 41a3e6a9ca Type check fixes 2021-12-05 20:00:58 -05:00
Tyler Goodlet 7b9d410c4d Adjust remaining examples and tests for non-backpressure default 2021-12-05 19:52:09 -05:00
Tyler Goodlet 2b05ffcc23 Add context stream overrun tests 2021-12-05 19:50:39 -05:00
Tyler Goodlet 185dbc7e3f Disable msg stream backpressure by default
Half of portal API usage requires a 1 message response (`.run()`,
`.run_in_actor()`) and the streaming APIs should probably be explicitly
enabled for backpressure if desired by the user. This makes more sense
in (pseudo) realtime systems where it's better to notify on a block than
freeze without notice. Make this the default behaviour with a new error to
be raised: `tractor._exceptions.StreamOverrun` when a sender overruns
a stream by the default size (2**6 for now). The old behavior can be
enabled with `Context.open_stream(backpressure=True)` but now with
warning log messages when there are overruns.

Add task-linked-context error propagation using a "nursery raising"
technique such that if either end of context linked pair of tasks
errors, that error can be relayed to other side and raised as a form of
interrupt at the receiving task's next `trio` checkpoint. This enables
reliable error relay without expecting the (error) receiving task to
call an API which would raise the remote exception (which it might never
currently if using `tractor.MsgStream` APIs).

Further internal implementation details:
- define the default msg buffer size as `Actor.msg_buffer_size`
- expose a `msg_buffer_size: int` kwarg from `Actor.get_context()`
- maybe raise aforementioned context errors using
  `Context._maybe_error_from_remote_msg()` inside `Actor._push_result()`
- support optional backpressure on a stream when pushing messages
  in `Actor._push_result()`
- in `_invoke()` handle multierrors raised from a `@tractor.context`
  entrypoint as being potentially caused by a relayed error from the
  remote caller task, if `Context._error` has been set then raise that
  error inside the `RemoteActorError` that will be relayed back to that
  caller more or less proxying through the source side error back to its
  origin.
2021-12-05 19:31:41 -05:00
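The push-side logic reduces to roughly this (hypothetical internal names):

```python
import trio

class StreamOverrun(Exception):
    '''Sender overran the receiver's msg buffer.'''

async def push_msg(ctx, msg: dict) -> None:
    try:
        # default: non-blocking push into the feeder mem chan
        ctx._send_chan.send_nowait(msg)
    except trio.WouldBlock:
        if ctx._backpressure:
            # opt-in old behaviour: block the sender until there's room
            await ctx._send_chan.send(msg)
        else:
            raise StreamOverrun(
                f'sender overran stream buffer (size {ctx._msg_buffer_size})'
            )
```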
Tyler Goodlet 2680a9473d Always set `Context._portal` on the caller task side 2021-12-05 19:28:00 -05:00
Tyler Goodlet 92b540d518 Add internal msg stream backpressure controls
In preparation for supporting both backpressure detection (through an
optional error) as well as control over the msg channel buffer size, add
internal configuration flags for both to contexts. Also adjust
`Context._err_on_from_remote_msg()` -> `._maybe..` such that it can be
called and will only raise if a scope nursery has been set. Add
a `Context._error` for stashing the remote task's error that may be
delivered in an `'error'` message.
2021-12-05 19:19:53 -05:00
Tyler Goodlet 6751349987 Add a stream overrun exception 2021-12-05 18:28:02 -05:00
Tyler Goodlet d307eab118 Rework `Actor.send_cmd()` to `.start_remote_task()`
This more formally declares the runtime's remote task starting API
and uses it throughout all the dependent `Portal` API methods.
Allows dropping `Portal._submit()` and simplifying `.run_in_actor()`
style result waiting to be delegated to the context APIs at remote
task `return` response time. We now also track the remote entrypoint
"type` as `Context._remote_func_type`.
2021-12-04 18:20:43 -05:00
Tyler Goodlet 872b24aedd Prove we've fixed #265 2021-12-03 14:49:55 -05:00
Tyler Goodlet c5c3f7e789 Use `tractor.Context` throughout the runtime core
Instead of tracking feeder mem chans per RPC dialog, store `Context`
instances which (now) hold refs to the underlying RPC-task feeder chans
and track them inside an `Actor._contexts` map. This begins a transition
to making the "context" idea the primitive abstraction for representing
messaging dialogs between tasks in different memory domains (i.e.
usually separate processes).

A slew of changes made this possible:
- change `Actor.get_memchans()` -> `.get_context()`.
- Add new `Context._send_chan` and `._recv_chan` vars.
- implicitly create a new context on every `Actor.send_cmd()` call.
- use the context created by `.send_cmd()` in `Portal.open_context()`
  instead of manually creating one.
- call `Actor.get_context()` inside tasks run from `._invoke()`
  such that feeder chans are implicitly created for callee tasks
  thus fixing the bug #265.

NB: We might change some of the internal semantics to do with *when* the
feeder chans are actually created to denote whether or not a far end
task is actually *ready to receive* messages. For example, in the cases
where it **never** will be ready to receive messages (one-way streaming,
a context that never opens a stream, etc.) we will likely want some kind
of error or at least warning to the caller that messages can't be sent
(yet).
2021-12-03 14:49:55 -05:00
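A sketch of the keyed registry with implicit feeder chan creation (simplified `Context` shape, not the real field set):

```python
from dataclasses import dataclass
import trio

@dataclass
class Context:
    chan: object   # the IPC channel to the peer actor
    cid: str       # this dialog's "context id"
    _send_chan: trio.MemorySendChannel
    _recv_chan: trio.MemoryReceiveChannel

class Actor:
    def __init__(self) -> None:
        self._contexts: dict[tuple, Context] = {}

    def get_context(self, chan, cid: str, buf_size: int = 2**6) -> Context:
        # implicitly create the feeder mem chans on first lookup so
        # callee-side tasks always have a ctx to receive into (#265)
        key = (chan.uid, cid)
        ctx = self._contexts.get(key)
        if ctx is None:
            send, recv = trio.open_memory_channel(buf_size)
            ctx = Context(chan, cid, send, recv)
            self._contexts[key] = ctx
        return ctx
```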
Tyler Goodlet 3f6099f161 Add a double started error checking test 2021-12-03 10:08:55 -05:00
Tyler Goodlet 568902a5a9 Add test for #265: "msg sent before stream opened"
This always triggered the mentioned race condition.
We need to figure out the best approach to avoid this case.
2021-12-03 10:08:55 -05:00
Tyler Goodlet f4793af2b9 Error on mal-use of `Context.started()`
Previously we were ignoring a race where the callee side of an opened
task context could enter `Context.open_stream()` before calling
`.started()`. Disallow this as well as calling `.started()` more than
once.
2021-12-03 10:08:55 -05:00
goodboy ae6d751d71
Merge pull request #267 from goodboy/acked_remote_cancels
Acked remote cancels
2021-12-03 09:51:41 -05:00
Tyler Goodlet 94a3cc532c Add nooz 2021-12-02 18:09:07 -05:00
Tyler Goodlet 08e9593306 Suppress broken resources errors in `Portal.cancel_actor()` 2021-12-02 15:29:04 -05:00
Tyler Goodlet 14f84571fb Don't cancel receive streams inside `.cancel_actor()`
We don't need to any more, presuming you get ideal remote cancellation
conditions where the remote actor should tear down and kill the streams
from its end.
2021-12-02 15:29:04 -05:00
Tyler Goodlet e561a4908f Appease mypy 2021-12-02 15:29:04 -05:00
Tyler Goodlet a29924f330 Don't assume exception order from nursery 2021-12-02 08:45:58 -05:00
Tyler Goodlet 46070f99de Factor soft-wait logic into a helper, use with mp 2021-12-02 08:18:04 -05:00
Tyler Goodlet d81eb1a51e Finally, deterministic remote cancellation support
On msg loop termination we now check and see if a channel is associated
with a child-actor registered in some local task's nursery. If so, we
attempt to wait on channel closure initiated from the child side (by
draining the underlying msg stream) so as to avoid closing it too early
resulting in the child not relaying its termination status response. This
means we now support the ideal case of the 2-generals problem where we get back the
ack to the closure request instead of just ignoring it and timing out XD

The main implementation detail is that when `Portal.cancel_actor()`
remotely calls `Actor.cancel()` we actually wait for the RPC response
from that request before allowing the channel shutdown sequence to
engage. The new msg stream draining support enables this.

Also, factor child-to-parent error propagation logic into a helper func
and improve some docs (yeah yeah y'all don't like the ''', i don't
care - it makes my eyes not hurt).
2021-12-02 08:18:04 -05:00
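The ack-then-drain idea in sketch form (hypothetical names; the real sequencing lives in the msg loop):

```python
import trio

async def wait_for_cancel_ack(portal, timeout: float = 0.5) -> bool:
    # wait on the RPC response to `Actor.cancel()` before letting
    # the channel shutdown sequence engage
    with trio.move_on_after(timeout) as cs:
        await portal.result()
        return True
    return not cs.cancelled_caught  # False: 2-generals timeout case

async def drain(stream) -> list[dict]:
    # pull any last msgs (e.g. the final cancel ACK) after closure
    msgs: list[dict] = []
    while True:
        try:
            msgs.append(await stream.receive())
        except (trio.EndOfChannel, trio.ClosedResourceError):
            break
    return msgs
```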
Tyler Goodlet d817f1a658 Add a nursery "exited" signal
Use a `trio.Event` to enable nursery closure detection such that core
runtime tasks can be notified when a local nursery exits and allow
shutdown protocols to operate without close-before-terminate issues
(such as IPC channel closure during remote peer cancellation).
2021-12-02 08:18:04 -05:00
Tyler Goodlet a23afb0bb8 Set channel cancel called flag on cancel requests 2021-12-02 08:18:04 -05:00
Tyler Goodlet 1976e61d1a Add `.drain()` support to msg streams
Enables "draining" the last set of messages after a channel/stream has
been terminated mostly for the purposes of receiving a final ACK to
a remote cancel command. Also, add an internal `Channel._cancel_called`
flag which can be set by `Portal.cancel_actor()`.
2021-12-02 08:18:04 -05:00
Tyler Goodlet 0ac3397dbb Only soft-acquire debug lock if a proc was spawned 2021-12-02 08:17:03 -05:00
Tyler Goodlet 62b2867e07 Tweak doc strings 2021-12-02 08:16:49 -05:00
Tyler Goodlet bf6958cdbe Handle cancelled-before-proc-created spawn case
It's definitely possible to have a nursery spawn task be cancelled
before a `trio.Process` handle is ever returned; we now handle this
case as a cancelled-during-spawn scenario. Zombie collection logic
is also bypassed in this case.
2021-12-02 08:16:05 -05:00
goodboy d05885d650
Merge pull request #266 from goodboy/faster_daemon_cancels
Faster graceful daemon cancels
2021-11-30 09:29:13 -05:00
Tyler Goodlet 77fc705b1f Add nooz 2021-11-29 22:52:19 -05:00
Tyler Goodlet 16a3321a38 Increase timeout for windows.. 2021-11-29 21:52:30 -05:00
Tyler Goodlet 7eb465a699 Graceful cancel actors before hard reaping 2021-11-29 16:03:23 -05:00
Tyler Goodlet 121f7fd844 Draft test that shows a slow daemon cancellation
Currently if the spawn task is waiting on a daemon actor it is likely in
`await proc.wait()`; however, if the actor nursery is subsequently
cancelled this checkpoint will be abandoned and the hard proc reaping
sequence will execute which results in an up-to-3-second wait before
a "hard" system signal is sent to the child.  Ideally such
a cancelled-during-daemon-actor-wait condition is instead handled by
first trying to cancel the remote actor using `Portal.cancel_actor()` (a
"graceful" remote cancel request) which should (presuming normal runtime
operation) result in an immediate collection of the process after normal
actor (remotely triggered) runtime cancellation.
2021-11-29 16:03:14 -05:00
goodboy ac821bdd94
Merge pull request #264 from goodboy/runinactor_none_result
Fix `Portal.run_in_actor()` returns `None` result
2021-11-29 09:21:24 -05:00
Tyler Goodlet f6de7e0afd Factor out msg unwrapping into a func 2021-11-29 08:46:35 -05:00
Tyler Goodlet 0e7234aa68 Cache the return message instead of the value
Thanks to @richardsheridan for pointing out the limitations of using
*any* kind of value as the result-cached-flag and how it might cause
problems for anyone returning pickled blob-data. This changes the
`Portal` internal result value tracking to stash the full message from
which the value can be retrieved by any `Portal.result()` caller.
The internal change is that `Portal._return_once()` now returns a tuple
of the message *and* its value.
2021-11-29 07:44:44 -05:00
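In sketch form (attribute and method names as described above, simplified):

```python
class Portal:
    def __init__(self, recv_chan) -> None:
        self._recv_chan = recv_chan      # feeder mem chan from the runtime
        self._result_msg: dict | None = None

    async def _return_once(self) -> dict:
        # wait for the remote task's single `return` msg
        return await self._recv_chan.receive()

    async def result(self):
        # key "already collected" off the stashed *msg*, not its value,
        # so a remote task legitimately returning `None` (or pickled
        # blob-data) can't trigger a second erroneous wait
        if self._result_msg is None:
            self._result_msg = await self._return_once()
        return self._result_msg['return']
```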
Tyler Goodlet 83da92d4cb Add nooz 2021-11-28 19:16:47 -05:00
Tyler Goodlet 57e98b25e7 Increase timeout, windows... 2021-11-20 13:08:19 -05:00
Tyler Goodlet 095c94b1d2 Fix `Portal.run_in_actor()` returns `None` bug
Fixes the issue where if the main remote task returns `None`,
`Portal.result()` would erroneously wait again on the underlying feeder
mem chan since `None` was being used as the cache flag. Instead set the
flag as the channel uid and consider the result collected when set to
anything else (since it would be odd to return that value from a remote
task when you already can read it as part of portal/channel apis).
2021-11-20 13:02:08 -05:00
Tyler Goodlet f32ccd76aa Add `Portal.result()` is None test case
This demonstrates a bug where if the remote `.run_in_actor()` task
returns `None` then multiple calls to `Portal.result()` will hang
forever...
2021-11-20 13:02:08 -05:00
goodboy b527fdbe1a
Merge pull request #263 from goodboy/early_deth_fixes
Early deth fixes
2021-11-08 21:23:09 -05:00
Tyler Goodlet 6b0366fe04 Guard against TCP server never started on cancel 2021-11-07 23:49:32 -05:00
Tyler Goodlet dbe5d96d66 Fix missing yield in lock acquirer 2021-11-07 23:48:05 -05:00
goodboy 08fa55a8c3
Merge pull request #260 from goodboy/clusters_and_hot_tips
Clusters and hot tips
2021-11-04 12:02:14 -04:00
Tyler Goodlet 546e1b2fa3 Drop unnecessary partial 2021-11-04 10:41:25 -04:00
Tyler Goodlet 94a6fefede Add `open_actor_cluster()` eg. to readme 2021-11-02 15:42:19 -04:00
Tyler Goodlet 74f460eba7 Make auto generated child names <parent_name>.<name> 2021-11-02 15:40:15 -04:00
Tyler Goodlet 4cbb8641de Add an `open_actor_cluster()` usage example 2021-11-02 15:37:36 -04:00
Tyler Goodlet 7efb7da300 Start a hot tips for devs doc 2021-11-02 15:08:20 -04:00
goodboy 2c12d39617
Merge pull request #259 from goodboy/alpha3
Alpha3
2021-11-02 14:47:28 -04:00
Tyler Goodlet 6a063b3814 Bump release date by a day 2021-11-02 12:59:26 -04:00
Tyler Goodlet 9da1abeecd Super naive attempt to skip 3.10 on windows 2021-11-02 12:19:58 -04:00
Tyler Goodlet 3452e18e6d Toss 3.10 into CI 2021-11-01 14:12:42 -04:00
Tyler Goodlet 8fdc548676 Alpha3 version bump and release notes 2021-11-01 14:02:45 -04:00
goodboy 5dbe8e4b14
Merge pull request #241 from goodboy/trionics
Trionics
2021-10-27 13:09:11 -04:00
goodboy 9c13827a14
Merge pull request #256 from overclockworked64/241-news-fragment
Add a news fragment
2021-10-27 12:38:15 -04:00
overclockworked64 6da76949fd
Fix the syntax and point to the new package 2021-10-27 17:03:25 +02:00
overclockworked64 49dd230b4f
Add a newline 2021-10-25 20:01:21 +02:00
overclockworked64 c7f59bd483
Add a news fragment 2021-10-25 19:17:42 +02:00
Tyler Goodlet 083b73ad4a Test: don't grab debug lock if not in mode 2021-10-25 10:22:41 -04:00
goodboy 925af28092
Merge pull request #254 from goodboy/graceful_gather
Change to `gather_contexts()`, use event for graceful exit
2021-10-25 10:14:01 -04:00
Tyler Goodlet d0f5c7a5e2 Change to `gather_contexts()`, use event for graceful exit
The api we've made here is actually closer to `asyncio.gather()` but
with opening async context managers instead of funcs. Use another event
to allow for graceful teardown of children on non-cancellation exits
and add a doc string.
2021-10-24 14:00:01 -04:00
goodboy ebf080b8a2
Merge pull request #253 from overclockworked64/fix-type-annotation
Fix type annotations
2021-10-23 19:09:11 -04:00
overclockworked64 50400359b8
Fix type annotations 2021-10-24 00:47:26 +02:00
goodboy 71b8f9f1ea
Merge pull request #252 from goodboy/246_facepalm_backup
Trionics improvements from @overclockworked64
2021-10-23 18:10:17 -04:00
overclockworked64 b91adcf38d Get rid of external teardown trigger 2021-10-23 16:17:30 -04:00
overclockworked64 87e3d32992 Get rid of external teardown trigger because #245 resolves the problem 2021-10-23 16:17:30 -04:00
overclockworked64 04895b9d5e Get rid of dumb random uid and use current actor's uid 2021-10-23 16:17:30 -04:00
overclockworked64 b7a4641674 Allow specifying start_method and hard_kill 2021-10-23 16:17:30 -04:00
overclockworked64 c1089dbd95 Add a clustering test 2021-10-23 16:17:30 -04:00
overclockworked64 3130a04c61 Rename a variable and fix type annotations 2021-10-23 16:17:29 -04:00
overclockworked64 6f9229cd09 Cancel nursery 2021-10-23 16:17:29 -04:00
overclockworked64 6e6baf250b Make sure the ID is a str 2021-10-23 16:17:29 -04:00
overclockworked64 73cbb2388a Avoid RuntimeError by not using current_actor's uid 2021-10-23 16:17:29 -04:00
overclockworked64 2815f1c343 Make 'async_enter_all' take a teardown trigger which '_enter_and_wait' will wait on 2021-10-23 16:17:29 -04:00
overclockworked64 21afc69ac7 Postpone evaluation of annotations 2021-10-23 16:17:29 -04:00
overclockworked64 7d502cef74 Add 'open_actor_cluster' to __all__ 2021-10-23 16:17:29 -04:00
overclockworked64 76767a3d7e Add 'trio.trionics' to setup.py 2021-10-23 16:17:29 -04:00
Tyler Goodlet c372367cc2 Fix *args-like type annot 2021-10-23 15:54:40 -04:00
Tyler Goodlet 9ddd75733c Lul, fix everything for cluster helper 2021-10-23 15:54:40 -04:00
Tyler Goodlet 8ba10315c1 Fix type path to new `_supervise` mod 2021-10-23 15:54:40 -04:00
Tyler Goodlet 97006c904c Expose `Lagged` for broadcasting 2021-10-23 15:54:40 -04:00
Tyler Goodlet 79fb1d0ebc Fix top level nursery import 2021-10-23 15:54:40 -04:00
Tyler Goodlet 1e917fdb1d Add an async actor cluster spawner prototype 2021-10-23 15:54:40 -04:00
Tyler Goodlet 4114eb1d25 Move broadcast channel parts into trionics 2021-10-23 15:54:40 -04:00
Tyler Goodlet 680a841282 Start `trionics` sub-pkg with `async_enter_all()`
Since it seems we're building out more and more higher level primitives
in order to support certain parallel style actor trees and messaging
patterns (eg. task broadcast channels), we might as well start a new
sub-package for purely `trio` constructions. We hereby dub this
the realm of `trionics` (like electronics but for trios instead of
electrons).

To kick things off, add an `async_enter_all()` concurrent
exit-stack-like context manager API which will concurrently spawn
a sequence of provided async context managers and deliver their ordered
results but with proper support for `trio` cancellation semantics.
The stdlib's `AsyncExitStack` is not compatible with nurseries nor
`trio` tasks (which can be cancelled) since a task will be suspended on
the stack after push and does not ever hit a checkpoint until the stack
is closed.
2021-10-23 15:54:40 -04:00
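The core mechanism looks roughly like this (essentially the `gather_contexts()` shape it later evolved into; a sketch, not the exact code):

```python
from contextlib import asynccontextmanager
import trio

@asynccontextmanager
async def gather_contexts(mngrs):
    '''Concurrently enter a sequence of async context managers,
    yielding their ordered results.
    '''
    mngrs = tuple(mngrs)
    unwrapped: dict[int, object] = {}
    all_entered = trio.Event()
    parent_exit = trio.Event()

    async def enter_and_wait(i: int, mngr) -> None:
        async with mngr as value:
            unwrapped[i] = value
            if len(unwrapped) == len(mngrs):
                all_entered.set()
            # keep the cm open (parked at a checkpoint!) until the
            # parent block exits; this is exactly what the stdlib's
            # `AsyncExitStack` can't do under `trio` cancellation
            await parent_exit.wait()

    async with trio.open_nursery() as nursery:
        for i, mngr in enumerate(mngrs):
            nursery.start_soon(enter_and_wait, i, mngr)
        await all_entered.wait()
        try:
            yield tuple(unwrapped[i] for i in range(len(mngrs)))
        finally:
            parent_exit.set()
```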
Tyler Goodlet 340ddba4ae Rename the nursery module to `_supervise` 2021-10-23 15:54:40 -04:00
goodboy be5582aae3
Merge pull request #248 from overclockworked64/patch-ci
Drop 3.8 support
2021-10-23 15:53:55 -04:00
overclockworked64 43cb117bf7
Add a news fragment 2021-10-23 21:52:16 +02:00
goodboy 2cf56a5f8b
Merge pull request #250 from overclockworked64/patch-dev-deps
Add towncrier to dev deps
2021-10-23 15:35:41 -04:00
overclockworked64 39c8447dfb
Add towncrier to dev deps 2021-10-23 20:56:18 +02:00
overclockworked64 63ddf119fd
Drop 3.8 support 2021-10-23 18:18:36 +02:00
goodboy 828754dbb5
Merge pull request #245 from goodboy/immediate_remote_cancels
Immediate remote cancels
2021-10-17 08:16:50 -04:00
Tyler Goodlet b3c4851ffb Grab lock if cancelled during spawn before hard kill 2021-10-15 18:26:46 -04:00
Tyler Goodlet 5cfac58873 Don't pop a child entry that was never inserted 2021-10-15 18:16:58 -04:00
Tyler Goodlet 5d827f78e2 Fix pluggy readme link and typo 2021-10-15 11:42:57 -04:00
Tyler Goodlet 4f222a5f9c Use type match of expected error 2021-10-15 10:25:50 -04:00
Tyler Goodlet e4ed0fd2b3 Right, only worry about pdb lock when in debug mode 2021-10-15 09:29:25 -04:00
Tyler Goodlet a42ec1f571 Add nooz 2021-10-15 09:28:45 -04:00
Tyler Goodlet 533457c64d Handle nested multierror case on windows 2021-10-15 09:16:51 -04:00
Tyler Goodlet 51259c4809 Pass uid not actor object 2021-10-14 13:46:27 -04:00
Tyler Goodlet 7ee121aeaf Try to handle variable windows errors 2021-10-14 13:39:46 -04:00
Tyler Goodlet 9d83ef82b2 Remove union type for root getter 2021-10-14 13:39:46 -04:00
Tyler Goodlet fa317d1600 Change lock helper to take an actor uid tuple 2021-10-14 13:39:46 -04:00
Tyler Goodlet 6f5c35dd1b Fix missing task status type 2021-10-14 13:39:46 -04:00
Tyler Goodlet b14699d40b Adjust debugger tests to expect depth > 1 crashes
With the new fixes to the trio spawner we can expect that both root
*and* depth > 1 nursery owning actors will now not clobber any children
that are in debug (either via breakpoint or through crashing). The tests
changed now include more checks which ensure the 2nd level parent-ish
actors also bubble up through into `pdb` and don't kill any of their
(crashed) children before they themselves are done debugging.
2021-10-14 13:39:46 -04:00
Tyler Goodlet daa28ea0e9 Handle depth > 1 nursery owners which use debug mode 2021-10-14 13:39:46 -04:00
Tyler Goodlet 4b2710b8a5 Add tty lock acquire ctx mngr 2021-10-14 13:39:46 -04:00
Tyler Goodlet d30ce96740 Breakout `wait_for_parent_stdin_hijack()`, increase root pdb checker poll time 2021-10-14 13:39:46 -04:00
Tyler Goodlet f3a6ab62af Use debugger helper in nursery and spawn tasks 2021-10-14 13:39:46 -04:00
Tyler Goodlet 62035078ce Reduce some loglevels, stick in comment about blocking till next tick 2021-10-14 13:39:46 -04:00
Tyler Goodlet 893bad72d5 Add a maybe-open-debugger helper 2021-10-14 13:39:46 -04:00
Tyler Goodlet 77ec29008d Simplify to soft and hard reap sequences
This is actually surprisingly easy to grok having gone through a lot of
pain understanding edge cases in the zombie lord dev branch. Basically
we just need to make sure actors are managed in a 2 step reap sequence.
In the "soft" reap phase we wait for the process to terminate on its own
concurrently with (maybe) waiting for its portal's final result (if it's
a `.run_in_actor()`). If this path is cancelled or errors, then we do
a "hard" reap where we timeout and send a signal to the proc to
terminate immediately. The only last remaining trick is to tie in the
root-is-debugger-aware logic to yet again avoid tty clobbers.
2021-10-14 13:39:46 -04:00
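The "hard" phase fallback can be sketched as a timeout plus signal escalation (illustrative; the real version also ties in the root-is-debugger-aware logic):

```python
import trio

async def hard_reap(proc: trio.Process, timeout: float = 3.0) -> None:
    # give the child one last window to exit on its own
    with trio.move_on_after(timeout):
        await proc.wait()
        return
    proc.terminate()          # polite: SIGTERM
    with trio.move_on_after(1):
        await proc.wait()
        return
    proc.kill()               # not polite: SIGKILL
    await proc.wait()
```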
Tyler Goodlet 2df16c1557 Lol, fix sub-actor case 2021-10-14 13:39:46 -04:00
Tyler Goodlet 46ff558556 Unwind process opening and shield hard reap 2021-10-14 13:39:46 -04:00
Tyler Goodlet bb9d9c74b1 Do immediate remote task cancels
As for `Actor.cancel()` requests, do the same for
`Actor._cancel_task()` but use `_invoke()` to ensure
correct msg transactions with the caller. Don't cancel task
cancels on a cancel-all-tasks operation in an attempt at
more determinism.
2021-10-14 13:39:46 -04:00
Tyler Goodlet 41f0992445 Don't whine about ; it ain't rpc 2021-10-14 13:39:46 -04:00
Tyler Goodlet 7643bbf183 Make actor runtime cancellation immediate 2021-10-14 13:39:46 -04:00
goodboy dfeebd6382
Merge pull request #243 from goodboy/less_logging
Less logging, add a `CANCEL` log level
2021-10-14 13:37:28 -04:00
Tyler Goodlet 6cda17436a Add nooz 2021-10-14 11:47:06 -04:00
Tyler Goodlet 1f0cc15675 Just set flag for use-after-closed service nursery calls 2021-10-06 17:02:13 -04:00
Tyler Goodlet 10f66e5141 De-noise warnings, add a 'cancel' log level
Now that we're on our way to a (somewhat) serious beta release I think
it's about time to start de-noising the logging emissions. Since we're
trying out this approach of "stack layer oriented" log levels, I figured
this is a good time to move most of the "warnings" to what they should
be: cancellation monitoring status messages. The level is set to 16
which is just above our "runtime" level but just below the traditional
"info" level. I think this will be a decent approach since usually if
you're confused about why your `tractor` app is behaving unlike you
expect, it's 90% of the time going to be to do with cancellation or
error propagation. With this setup a user can specify the 'cancel' level
and see all the msgs pertaining to both actor and task-in-actor
cancellation mechanics.
2021-10-06 17:02:13 -04:00
Tyler Goodlet 4d5a5c147a Move core actor runtime logging to, well, "runtime" 2021-10-06 17:02:13 -04:00
Tyler Goodlet d2f0843041 Make custom log levels report the right stack frame
The stdlib's `logging.LoggingAdapter` doesn't currently pass through
`stacklevel: int` down to its wrapped logger instance. Hack it here
and get our msgs looking like they would if using a built-in level.
2021-10-06 17:02:13 -04:00
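Both pieces together in sketch form: register the custom level just above "runtime" and below stdlib `INFO` (20), then bump `stacklevel` past the adapter's own frames:

```python
import logging

CANCEL = 16  # above our 'runtime' level, below stdlib INFO (20)
logging.addLevelName(CANCEL, 'CANCEL')

class StackLevelAdapter(logging.LoggerAdapter):

    def cancel(self, msg: str) -> None:
        return self.log(CANCEL, msg)

    def log(self, level: int, msg, *args, **kwargs) -> None:
        if self.isEnabledFor(level):
            # bump past this wrapper's frames so the emitted record
            # reports the *caller's* module/lineno like built-ins do
            kwargs.setdefault('stacklevel', 3)
            self.logger.log(level, msg, *args, **kwargs)

# usage: log = StackLevelAdapter(logging.getLogger('tractor'), {})
#        log.cancel('actor was remotely cancelled')
```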
Tyler Goodlet 3f6d4d6af4 Don't log.error if it was intentional 2021-10-06 17:02:13 -04:00
goodboy 21e60554cc
Merge pull request #214 from goodboy/optional_msgspec_support
Optional msgspec support
2021-10-06 17:01:44 -04:00
Tyler Goodlet b496e790fe Use `.from_stream()` in TCP handler 2021-10-06 15:54:27 -04:00
Tyler Goodlet c6dc96b08c Add "message transport" structured sub-typing
In an effort to have some kind of more formal interface around the
transport layer, add a `MsgTransport` protocol type and use it with
the channel composition of message streams. Start a little "key map"
of `(<codec>, <protocol>)` to `MsgTransport` types which can be
dynamically loaded. Add a `Channel.from_stream()` constructor thus
cleaning up the mangled logic that was in the constructor based on
inputs. Drop all the "auto reconnect" channel logic for now since
nothing is using it (internally) and it's likely it will need rework
once we bring in a protocol besides TCP.
2021-10-06 15:54:27 -04:00
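In sketch form the protocol type plus key map might look like this (names are illustrative):

```python
from typing import AsyncIterator, Protocol, runtime_checkable
import trio

@runtime_checkable
class MsgTransport(Protocol):
    # structural type for a framed, codec-aware msg transport
    stream: trio.SocketStream

    async def send(self, msg: dict) -> None: ...
    async def recv(self) -> dict: ...
    def __aiter__(self) -> AsyncIterator[dict]: ...

# dynamically loadable "key map": (codec, protocol) -> transport type
_key_to_transport: dict[tuple[str, str], type] = {
    ('msgpack', 'tcp'): ...,  # e.g. a msgpack-over-TCP stream type
}

def get_msg_transport(key: tuple[str, str]) -> type:
    return _key_to_transport[key]
```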
Tyler Goodlet 135459ca25 Tolerate one decode error; may have been a registry ping 2021-10-05 13:37:17 -04:00
Tyler Goodlet ef75883b62 Add fragment 2021-10-05 13:37:17 -04:00
Tyler Goodlet f7fc464ce8 Add `msgspec` mentions to readme 2021-10-05 13:37:17 -04:00
Tyler Goodlet 07e8821cd5 Add a stream type factory 2021-10-05 13:37:17 -04:00
Tyler Goodlet 5b23a3bc35 Don't expect list value from registry 2021-10-05 13:37:17 -04:00
Tyler Goodlet 1382ad653d Ugh, appease mypy yet again 2021-10-05 13:37:17 -04:00
Tyler Goodlet 076f37c589 Attempt to gracefully handle channel breakage? 2021-10-05 13:37:17 -04:00
Tyler Goodlet 19d6885243 Ensure tuple for passed in arbiter addr 2021-10-05 13:37:17 -04:00
Tyler Goodlet dbc4e3dd46 Pin to latest and greatest `msgspec` 2021-10-05 13:37:17 -04:00
Tyler Goodlet 93726f1392 Call registry getter method in test 2021-10-05 13:37:17 -04:00
Tyler Goodlet 486e983964 Cast `defaultdict` to `dict` for registry get 2021-10-05 13:37:17 -04:00
Tyler Goodlet 1ab495a64d Map broken stream errs to transport closed; msgspec seems to be racy 2021-10-05 13:37:17 -04:00
Tyler Goodlet 562419c907 Convert actor UIDs to hashable tuples
`msgspec` sends python lists over the wire
(https://github.com/jcrist/msgspec/issues/30) which is fine and dandy
but we use them as lookup keys so we need to be sure we tuple-cast
first.
2021-10-05 13:37:17 -04:00
Tyler Goodlet 3facfb6d4c Fix log levels 2021-10-05 13:37:17 -04:00
Tyler Goodlet 3eb4c6dce1 Add msgspec installs, drop py3.7 2021-10-05 13:37:17 -04:00
Tyler Goodlet aa080543d0 Mypy fixes to enforce uid tuple 2021-10-05 13:37:17 -04:00
Tyler Goodlet 8375002b40 Fix py version classifier 2021-10-05 13:37:17 -04:00
Tyler Goodlet b64396f708 Pkg `msgspec` as optional dep, load transport type if importable 2021-10-05 13:37:17 -04:00
Tyler Goodlet 96b3f94c72 Accept transport closed error during handshake and msg loop 2021-10-05 13:37:17 -04:00
Tyler Goodlet ecd8c4bc7e Drop happy eyeballs inf delay 2021-10-05 13:37:17 -04:00
Tyler Goodlet 112117c1fc Add our own "transport closed" signal
This changes some super old (and bad) code from the project's very early
days. For some redic reason i must have thought masking `trio`'s
internal stream / transport errors and a TCP EOF as `StopAsyncIteration`
was somehow a good idea. The reality is you probably
want to know the difference between an unexpected transport error
and a simple EOF lol. This begins to resolve that by adding our own
special `TransportClosed` error to signal the "graceful" termination of
a channel's underlying transport. Oh, and this builds on the `msgspec`
integration which helped shed light on the core issues here B)
2021-10-05 13:37:17 -04:00
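The distinction in sketch form (the subclassing choice here is an assumption):

```python
import trio

class TransportClosed(trio.ClosedResourceError):
    '''Graceful termination of a channel's underlying transport,
    as opposed to an unexpected breakage.
    '''

async def recv_some(stream: trio.SocketStream, n: int) -> bytes:
    try:
        data = await stream.receive_some(n)
    except trio.BrokenResourceError as err:
        # unexpected breakage, surfaced under our signal type
        raise TransportClosed('transport unexpectedly closed') from err
    if data == b'':
        # a plain TCP EOF: the peer hung up *gracefully*
        raise TransportClosed('transport closed via EOF')
    return data
```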
Tyler Goodlet 95e35f3d60 Add streaming decode support for `msgspec`
Add a `tractor._ipc.MsgspecStream` type which can be swapped in for
`msgspec` serialization transparently. A small msg-length-prefix framing
is implemented as part of the type and we use
`tricycle.BufferedReceiveStream` to handle buffering logic for the
underlying transport.

Notes:
- had to force cast a few more list -> tuple spots due to no native
  `tuple`-decode-by-default in `msgspec`: https://github.com/jcrist/msgspec/issues/30
- the framing can be understood by this protobuf walkthrough:
  https://eli.thegreenplace.net/2011/08/02/length-prefix-framing-for-protocol-buffers
- `tricycle` becomes a new dependency
2021-10-05 13:37:17 -04:00
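The framing amounts to a small length-prefix header around each encoded msg; a sketch using the current `msgspec.msgpack` namespace (buffering hand-rolled here instead of via `tricycle`):

```python
import struct
import msgspec
import trio

encoder = msgspec.msgpack.Encoder()
decoder = msgspec.msgpack.Decoder()

async def receive_exactly(stream: trio.SocketStream, n: int) -> bytes:
    buf = b''
    while len(buf) < n:
        chunk = await stream.receive_some(n - len(buf))
        if not chunk:
            raise trio.BrokenResourceError('peer closed mid-frame')
        buf += chunk
    return buf

async def send_msg(stream: trio.SocketStream, msg) -> None:
    payload = encoder.encode(msg)
    # 4-byte little-endian length prefix, then the msgpack payload
    await stream.send_all(struct.pack('<I', len(payload)) + payload)

async def recv_msg(stream: trio.SocketStream):
    size, = struct.unpack('<I', await receive_exactly(stream, 4))
    return decoder.decode(await receive_exactly(stream, size))
```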
Tyler Goodlet e39ee3a9cc Always cast arbiter addr to tuple 2021-10-05 13:37:17 -04:00
Tyler Goodlet 3771734311 Add `tricycle` and `msgspec` deps 2021-10-05 13:37:17 -04:00
Tyler Goodlet dda0b22870 Try out `msgspec` in our msgpack stream channel
Can only really use an encoder currently since there is no streaming api
in `msgspec` as of yet. See jcrist/msgspec#27.

Not sure if any encoding speedups are currently noticeable especially
without any validation going on yet XD.

First experiments toward #196
2021-10-05 13:37:17 -04:00
Tyler Goodlet 4079f02acf Cast to tuples for all uids explicitly 2021-10-05 13:37:17 -04:00
goodboy e6763d4daf
Merge pull request #239 from goodboy/fix_kbi_in_ctx_block
Fix kbi in ctx block
2021-10-05 13:35:48 -04:00
Tyler Goodlet 4f831abe25 Hipshot, try to avoid subs teardown race 2021-10-05 12:19:24 -04:00
Tyler Goodlet 8fd515c7b9 Add nooz 2021-10-04 12:28:55 -04:00
Tyler Goodlet b1235442fb Add longer timeout on windows 2021-10-04 12:10:39 -04:00
Tyler Goodlet d734dcede4 Accept a multierror on cancellation (windows?) 2021-10-04 11:43:50 -04:00
Tyler Goodlet 518a0d5e14 Add todo for log msg filename.. 2021-10-04 10:38:44 -04:00
Tyler Goodlet 8b416e6bba Stream and context api tweaks
- drop `shield` input to `MsgStream`
- check for cancel called prior to loading the feeder mem chan
  in `Context.open_stream()`
- warn on a timeout when trying to cancel a remote task from
  `Context.cancel()`
- drop noop endofchannel handler block
2021-10-04 10:38:44 -04:00
Tyler Goodlet bd31f47d5f Handle kbi in ctx blocks via `BaseException`
Fixes prior committed tests by more generally handling `BaseException` in
context blocks. Left in the commented concrete list for reference.
2021-10-04 10:38:44 -04:00
Tyler Goodlet 8d79d83ac2 Ensure kbi will cancel context block
Follow up to previous commit: extend our simple context test set to
include cancellation via kbi in the parent as well as timeout logic and
testing of the parent opening a stream even though the target actor does
not.

Thanks again to https://github.com/adder46/wrath for discovering this
bug.
2021-10-04 10:38:26 -04:00
Tyler Goodlet c1727ce05e Add a test of both stream styles side-by-side
Not sure we even have a test for this yet. The main issue discovered by
a user project (https://github.com/adder46/wrath) was that a kbi raised
inside a block like this (with both recv-only and send-recv streams)
would not cancel on the first ctrl-c sent from console and instead
SIGINT had to be repeatedly sent as many times as there are subactors in
the first level tree. This test catches that as well as just verifies
the basic side-by-side functionality.
2021-10-04 10:38:22 -04:00
goodboy a868196d13
Merge pull request #166 from goodboy/use_trio_on_win
Flip to using the `trio` spawner on windows
2021-09-18 16:04:12 -04:00
Tyler Goodlet 088597ba50 Bump to `.alpha3.dev0` 2021-09-18 15:33:11 -04:00
Tyler Goodlet 91f222983c Add nooz 2021-09-18 15:26:45 -04:00
Tyler Goodlet 4259738864 Flip to using the `trio` spawner on windows
Was able to try it manually on a windows 10 system and the debugger
works great!
2021-09-18 14:10:32 -04:00
goodboy 2d0e35b316
Merge pull request #236 from goodboy/alpha2
`.alpha2` release
2021-09-08 09:46:53 -04:00
Tyler Goodlet 95f2f10b64 Update news file 2021-09-08 09:17:24 -04:00
Tyler Goodlet 2b2c73905c Bump setup version to .alpha2 2021-09-07 21:49:18 -04:00
Tyler Goodlet 02307d2656 Pump broadcasting support in readme 2021-09-05 15:22:16 -04:00
goodboy 3f1bc37143
Merge pull request #229 from goodboy/live_on_air_from_tokio
`tokio`-style broadcast channels
2021-09-03 07:29:29 -04:00
Tyler Goodlet 1137a9e7ac Fix 404ed tokio urls 2021-09-02 21:12:54 -04:00
Tyler Goodlet bcf5b9fd18 Add news fragment 2021-09-02 21:12:54 -04:00
Tyler Goodlet 2745a2b1dc Solve first-recv-cancelled by recursive `.receive()` on wake 2021-09-02 21:12:54 -04:00
Tyler Goodlet 5881a82d2a Add a first receiver is cancelled test 2021-09-02 21:12:54 -04:00
Tyler Goodlet b7b489dd07 Drop shielded stream api usage 2021-09-02 21:12:54 -04:00
Tyler Goodlet d9bb52fe7b Store array `maxlen` in state singleton
The `collections.deque` takes care of array length truncation of values
for us implicitly but in the future we'll likely want this value exposed
to alternate array implementations. This patch is to provide for that as
well as make `mypy` happy since the `deque.maxlen` can also be `None`.
2021-09-02 21:12:54 -04:00
Tyler Goodlet 9258f79510 Don't wake sibling bcast consumers on a cancelled call 2021-09-02 21:12:54 -04:00
Tyler Goodlet 5c6355062c Shorten sequence length for test speedup 2021-09-02 21:12:54 -04:00
Tyler Goodlet 44ef26bb18 Shorten default feeder mem chan size to 64 2021-09-02 21:12:54 -04:00
Tyler Goodlet d9e793d4ba Can't use built-in generics till 3.9... 2021-09-02 21:12:54 -04:00
Tyler Goodlet 7857a9ac6d Add `shield: bool` kwarg to `Portal.open_stream_from()` 2021-09-02 21:12:54 -04:00
Tyler Goodlet 5182ee7782 Add a "faster task is cancelled" test 2021-09-02 21:12:54 -04:00
Tyler Goodlet 39cf9af9fc Rename test module 2021-09-02 21:12:54 -04:00
Tyler Goodlet 63ec740e27 Add some bcaster ref sanity asserts around subscriptions 2021-09-02 21:12:54 -04:00
Tyler Goodlet 0d70e3081a Add laggy parent stream tests
Add a couple more tests to check that a parent and sub-task stream can
be lagged and recovered (depending on who's slower). Factor some of the
test machinery into a new ctx mngr to make it all happen.
2021-09-02 21:12:54 -04:00
Tyler Goodlet 093e7d921c Instance ids are ints 2021-09-02 21:12:54 -04:00
Tyler Goodlet d7ad8982ff Add subscribe after close test 2021-09-02 21:12:54 -04:00
Tyler Goodlet bec3f5999d Drop uuid4 keys, raise closed error on subscription after close 2021-09-02 21:12:54 -04:00
Tyler Goodlet 2bad2bac50 Don't enable debug mode..it borks CI 2021-09-02 21:12:54 -04:00
Tyler Goodlet a4cb0ef21f Fix `.receive()` re-assignment, drop `.clone()` 2021-09-02 21:12:54 -04:00
Tyler Goodlet 236ed0b0dd Initial broadcaster tests including one to test our `MsgStream.subscribe()` api 2021-09-02 21:12:54 -04:00
Tyler Goodlet 346b5d2eda Blade runner it
Get rid of all the (requirements for) clones of the underlying
receivable. We can just use a uuid generated key for each instance
(thinking now this can probably just be `id(self)`). I'm fully convinced
now that channel cloning is only a source of confusion and anti-patterns
when we already have nurseries to define resource lifetimes. There is no
benefit in particular when you allocate subscriptions using a context
manager (not sure why `trio.open_memory_channel()` doesn't enforce
this).

Further refinements:
- add a `._closed` state that will error the receiver on reuse
- drop module script section;  it's been moved to a real test
- call the "receiver" duck-type stub a new name
2021-09-02 21:12:54 -04:00
Tyler Goodlet 6c17c7367a Store handle to underlying channel's `.receive()`
This allows for wrapping an existing stream by re-assigning its receive
method to the allocated broadcaster's `.receive()` so as to avoid
expecting any original consumer(s) of the stream to now know about the
broadcaster; this instead mutates the stream to delegate to the new
receive call behind the scenes any time `.subscribe()` is called.

Add a `typing.Protocol` for so-called "cloneable channels" until we
decide/figure out a better keying system for each subscription and
mask all undesired typing failures.
2021-09-02 21:12:54 -04:00
Tyler Goodlet 2d1c24112b Add subscription support to message streams
Add `ReceiveMsgStream.subscribe()` which allows allocating a broadcast
receiver around the stream for use by multiple actor-local consumer
tasks. Entering this context manager idempotently mutates the stream's
receive machinery which for now can not be undone. Move `.clone()` to
the receive stream type.

Resolves #204
2021-09-02 21:12:54 -04:00
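Usage-wise the resolved API reads like this (a sketch; `portal` and `target` are assumed to exist):

```python
import trio

async def fan_out(portal, target) -> None:
    # fan one IPC stream out to multiple actor-local consumer tasks
    async with portal.open_stream_from(target) as stream:

        async def consume(name: str) -> None:
            # each task gets its own broadcast receiver wrapping the
            # single underlying stream
            async with stream.subscribe() as bstream:
                async for value in bstream:
                    print(name, value)

        async with trio.open_nursery() as nursery:
            for i in range(3):
                nursery.start_soon(consume, f'consumer_{i}')
```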
Tyler Goodlet a12b1fc631 Drop optimization check, binance made its point 2021-09-02 21:12:54 -04:00
Tyler Goodlet ceed96aa3f Add common state delegate type for all consumers
For every set of broadcast receivers which pull from the same producer,
we need a singleton state for all of,
- subscriptions
- the sender ready event
- the queue

Add a `BroadcastState` dataclass for this and pass it to all
subscriptions. This makes the design much more like the built-in memory
channels which do something very similar with `MemoryChannelState`.

Use a `filter()` on the subs list in the sequence update step, plus some
other commented approaches we can try for speed.
2021-09-02 21:12:54 -04:00
Tyler Goodlet 6e78bcf898 Facepalm: use single `_subs` per clone set 2021-09-02 21:12:54 -04:00
Tyler Goodlet 4ad75a3287 Obviously keying on tasks isn't going to work
Using the current task as a subscription key fails horribly as soon as
you hand off a new subscription receiver to another task you've spawned..

Instead use the underlying ``trio.abc.ReceiveChannel.clone()`` as a key
(so I guess we're assuming cloning is supported by the underlying?)
which makes this all work just like default mem chans. As a bonus, now
we can just close the underlying rx (which may be a clone) on
`.aclose()` and everything should just work in terms of the underlying
channel's lifetime (I think?).

Change `.subscribe()` to be async since the receive channel type
interface only expects `.aclose()` and it actually ends up being
nicer for 3.9+ parenthesized `async with` style anyway.
2021-09-02 21:12:54 -04:00
Tyler Goodlet 64358f6525 Rename to broadcast mod, don't expect mem chan specifically 2021-09-02 21:12:54 -04:00
Tyler Goodlet 1af7dbb732 `Task` is hashable, so key on it 2021-09-02 21:12:54 -04:00
Tyler Goodlet 6a2c3da1bb Simplify api around receive channel
Buncha improvements:
- pass in the queue via constructor
- track closure of the underlying memory channel across all clones
- do it like `tokio` and set lagged consumers to the last sequence
  before raising (sketched below)
- copy the subs on first receiver wakeup for iteration instead of
  iterating the table directly (and being forced to skip the current
  task's sequence increment)
- implement `.aclose()` to close the underlying clone for this task
- make `broadcast_receiver()` just take the recv chan since it doesn't
  need anything on the send side.
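The lag handling sketched (hypothetical names; the real logic lives in
the broadcast receiver's `.receive()`):

    from collections import deque

    class Lagged(RuntimeError):
        'This consumer fell behind the broadcast queue.'

    def check_lag(subs: dict, key: int, latest: int, queue: deque) -> None:
        # if a consumer missed more msgs than the queue retains, bump
        # it to the oldest retained sequence *then* raise, tokio-style
        missed = latest - subs[key]
        if missed > len(queue):
            subs[key] = latest - len(queue)
            raise Lagged(f'consumer {key} missed {missed} msgs')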
2021-09-02 21:12:54 -04:00
Tyler Goodlet 3817b4fb5e Ultra naive broadcast channel prototype 2021-09-02 21:12:54 -04:00
goodboy 87ce6c8eb3
Merge pull request #234 from goodboy/root_tty_hangs
Root tty hangs
2021-09-02 16:58:25 -04:00
Tyler Goodlet 76342ed0c5 Add news bit 2021-09-02 16:50:32 -04:00
Tyler Goodlet 3208b67f57 Drop shielding on root lock acquire; seems to prevent hangs 2021-09-02 16:23:38 -04:00
Tyler Goodlet 61d2307e52 Unlock pdb tty on all possible net faults 2021-09-02 16:23:38 -04:00
Tyler Goodlet 79f0d6fda0 Attempt to avoid pdb lockups on channel breakage
Always try to release the root tty lock on broken connection errors.
2021-09-02 16:23:10 -04:00
Tyler Goodlet 4f166500d0 Add return type to debugger factory 2021-09-02 16:22:59 -04:00
Tyler Goodlet d906c81f14 Export portal type at top level 2021-09-02 16:22:59 -04:00
Tyler Goodlet 68d56d5df0 Try not masking SIGINT in child processes 2021-09-02 16:22:59 -04:00
Tyler Goodlet 497fa72c96 Add a SIGINT handler that kills the process tree
We're not actually using this but it's for reference if we do end up
needing it.

The std lib's `pdb` internals override SIGINT handling whenever one
enters the debugger repl. Force a handler that kills the tree if SIGINT
is triggered from the root actor, otherwise ignore it since supervised
children should be managed already. This resolves an issue with guest
mode where `pdb` causes SIGINTs to be swallowed resulting in the host
loop never terminating the process tree.
2021-09-02 16:22:02 -04:00
goodboy e5845b5d36
Merge pull request #230 from goodboy/drop_stream_shielding
Drop stream shielding; it was from a legacy api design
2021-09-02 16:18:42 -04:00
goodboy 7e98afa685
Merge pull request #233 from goodboy/drop_py37
Drop py37
2021-09-02 15:00:12 -04:00
Tyler Goodlet 22a79219a1 Lol, guess windows needs the extra minutes 2021-09-02 08:35:31 -04:00
Tyler Goodlet 3919c9739e Make fragment a `.rst` 2021-09-02 08:35:25 -04:00
Tyler Goodlet 558c44fdbe Add newsfragment 2021-09-02 08:33:29 -04:00
Tyler Goodlet b4d95e9543 Update docs to new close semantics 2021-09-02 08:24:18 -04:00
Tyler Goodlet af85d35685 Drop stream shielding; it was from a legacy design
The whole origin was not having an explicit open/close semantic for
streams. We have that now so this internal mechanic isn't needed and
further our streams become more correct by having `.aclose()` be
independent of cancellation.
2021-09-02 08:24:18 -04:00
Tyler Goodlet b176628206 Drop 3.7 support from install script
Resolves #232
2021-09-02 07:51:33 -04:00
Tyler Goodlet 47a469484d Drop py3.7 from CI; cut run to 5mins 2021-09-02 07:51:33 -04:00
goodboy 07e43f88bf
Merge pull request #231 from goodboy/add_the_crier
Use `towncrier`
2021-09-02 07:50:15 -04:00
Tyler Goodlet a221949e8f Add small howto instructions 2021-09-02 07:40:43 -04:00
Tyler Goodlet fc76e97a45 Initial `towncrier` integration for releases
Add a small config with a manually specified version key for now.
Fix up some changelog contents from last release and bump our `setup.py`
version to an `.alpha2.dev0`.

Resolves #227
2021-09-01 17:04:12 -04:00
goodboy a105e32e34
Merge pull request #226 from goodboy/debugger_test_tweaks
Debugger tests determinism
2021-08-03 08:59:02 -04:00
Tyler Goodlet ace1b1312c Terminate async gen example caller to avoid (benign) errors in console output 2021-08-02 21:49:15 -04:00
Tyler Goodlet 7431e8ea01 Don't log cancelled inceptions seen by the root 2021-08-02 21:15:42 -04:00
Tyler Goodlet 82999801a6 Drop leftover noisy exception logging.. 2021-08-02 16:56:00 -04:00
Tyler Goodlet c5c7e694ec Better early timeout handling, continue on child re-lock 2021-08-01 13:10:51 -04:00
goodboy b01f594025
Merge pull request #225 from goodboy/fix_news_links
Facepalm: fix rst hyperlinks
2021-08-01 11:52:39 -04:00
Tyler Goodlet a84a27c6d3 Facepalm: fix rst hyperlinks 2021-08-01 11:29:41 -04:00
goodboy 9cfec2d3b5
Merge pull request #224 from goodboy/wats_da_nooz
Add .alpha1 news flash
2021-08-01 11:24:12 -04:00
Tyler Goodlet 8a4a11b885 Add .alpha1 news flash 2021-08-01 10:58:41 -04:00
goodboy 14379a0f46
Merge pull request #220 from goodboy/ctx_debugger
Ctx debugger
2021-08-01 10:56:57 -04:00
Tyler Goodlet 674fbbc6b3 Docs and comments tidying 2021-08-01 10:44:13 -04:00
Tyler Goodlet f173012fea Handle repeat child tty-acquires race 2021-07-31 15:01:26 -04:00
Tyler Goodlet 6006adc0de Hide `_invoke()` tb, move actor error to exceptions mod 2021-07-31 13:56:26 -04:00
Tyler Goodlet 0afa7f0f8e Fix lock context manager return type 2021-07-31 12:50:58 -04:00
Tyler Goodlet b3d28a1ee4 Drop debugger path and duplicate func from rebasing 2021-07-31 12:46:40 -04:00
Tyler Goodlet 13b76c9439 Add fast fail test using the context api 2021-07-31 12:46:40 -04:00
Tyler Goodlet 632c666a7d Adjust debug tests to accommodate no more root clobbering
We may get multiple re-entries to the debugger by the `bp_forever`
sub-actor now since the root will incrementally try to cancel it only
when the tty lock is not held.
2021-07-31 12:46:40 -04:00
Tyler Goodlet 09f00a5a00 Go back to only logging tbs on no debugger 2021-07-31 12:46:40 -04:00
Tyler Goodlet 44bfacc0c2 Comment hard-kill-sidestep for now since nursery version covers it? 2021-07-31 12:46:40 -04:00
Tyler Goodlet 551816e80d Solve the root-cancels-child-in-tty-lock race
Finally this makes a cancelled root actor nursery not clobber child
tasks which request and lock the root's tty for the debugger repl.

Using an edge triggered event which is set after all fifo-lock-queued
tasks are complete, we can be sure that no lingering child tasks are
going to get interrupted during pdb use and tty lock acquisition.
Further, even if new tasks do queue up to get the lock, the root will
incrementally send cancel msgs to each sub-actor only once the tty is
not locked by a (set of) child request task(s). Add shielding around
all the critical sections where the child attempts to acquire the lock
from the root such that it won't be disrupted by cancel messages from
the root after the lock acquisition transaction has started.
2021-07-31 12:46:40 -04:00
Tyler Goodlet be1fcb2a5b Distinguish between a local pdb unlock and the tty unlock in root 2021-07-31 12:46:40 -04:00
Tyler Goodlet ef89ed947a Fix hard kill in debug mode; only do it when debug lock is empty 2021-07-31 12:46:40 -04:00
Tyler Goodlet 5b3894827f Move some infos to runtime level 2021-07-31 12:46:40 -04:00
Tyler Goodlet 0fdcfa0ba1 Move debugger wait inside OCA nursery 2021-07-31 12:46:40 -04:00
Tyler Goodlet 37a1897c47 Don't shield debugger status wait; it causes hangs 2021-07-31 12:46:40 -04:00
Tyler Goodlet 0f2a39a311 Catch and delay errors in the root if debugger is active 2021-07-31 12:46:40 -04:00
Tyler Goodlet 23a1622256 Don't kill root's immediate children when in debug
If the root calls `trio.Process.kill()` on immediate child proc teardown
when the child is using pdb, we can get stdstreams clobbering that
results in a pdb++ repl where the user can't see what's been typed. Not
killing such children on cancellation / error seems to resolve this
issue whilst still giving reliable termination. For now, keep that
special path in place until it becomes a problem for ensuring zombie
reaps.
2021-07-31 12:46:40 -04:00
Tyler Goodlet 63bdddd0c9 Add debug example that causes pdb stdin clobbering 2021-07-31 12:46:40 -04:00
Tyler Goodlet 49d439b681 Add some brief todo notes on idea of shielded breakpoint 2021-07-31 12:46:40 -04:00
Tyler Goodlet 6f05f5d5e6 Wait for debugger lock task context termination 2021-07-31 12:46:40 -04:00
Tyler Goodlet b369b91357 Fix up var naming and typing 2021-07-31 12:46:40 -04:00
Tyler Goodlet 969bce3aa4 Use context for remote debugger locking
A context is the natural fit (vs. a receive stream) for locking the
root proc's tty usage via its `.started()` sync point. Simplify the
`_breakpoint()` routine to be a simple async func instead of all this
"returning a coroutine" stuff from before we decided that
`tractor.breakpoint()` must be async. Use the `runtime` log level for
locking logging, making it easier to trace.
2021-07-31 12:46:40 -04:00
goodboy 54d8c93f1b
Merge pull request #219 from goodboy/bi_streaming_no_debugger_stuff
Initial bi-directional streaming support!
2021-07-31 12:27:53 -04:00
Tyler Goodlet 240f591234 Add 2-way streaming example to readme and scripts 2021-07-31 12:10:25 -04:00
Tyler Goodlet 69bbf6a957 Install test deps and py3.9 for type check job 2021-07-08 13:53:28 -04:00
Tyler Goodlet 443ebea165 Use "pdb" level logging in debug mode 2021-07-08 13:02:33 -04:00
Tyler Goodlet 25779d48a8 Define explicit adapter level methods for mypy 2021-07-08 12:51:35 -04:00
Tyler Goodlet fde52d2464 Mypy fixes 2021-07-08 12:48:34 -04:00
Tyler Goodlet 8c927d708d Change trace to transport level 2021-07-07 14:31:15 -04:00
Tyler Goodlet 31590e82a3 Flip "trace" level to "transport" level logging 2021-07-07 14:31:03 -04:00
Tyler Goodlet 929b6dcc83 Skip debugger tests on windows at module level 2021-07-06 13:26:30 -04:00
Tyler Goodlet 2513c652c1 Go back to only logging crashes if no pdb gets engaged 2021-07-06 08:23:30 -04:00
Tyler Goodlet 9ddb654783 Avoid mutate on iterate race 2021-07-06 08:23:30 -04:00
Tyler Goodlet 7f86d63e77 Drop trip kwarg 2021-07-06 08:23:30 -04:00
Tyler Goodlet 12f987514d Don't enter debug on closed resource errors 2021-07-06 08:23:30 -04:00
Tyler Goodlet 98bbf8e0df Move join event trigger to direct exit path 2021-07-06 08:23:30 -04:00
Tyler Goodlet b1cd7fdedf Don't shield on root cancel; it can cause hangs 2021-07-06 08:23:30 -04:00
Tyler Goodlet ef725c5972 Always hard kill sub-procs on teardown
Adds a new hard kill routine for the `trio` spawning backend.
2021-07-06 08:23:30 -04:00
Tyler Goodlet a134bc490f Avoid mutate-during-iterate error 2021-07-06 08:23:30 -04:00
Tyler Goodlet 0623de0b47 Expect context cancelled when we cancel 2021-07-06 08:23:30 -04:00
Tyler Goodlet b21e2a6caa Add pre-stream open error conditions 2021-07-06 08:23:30 -04:00
Tyler Goodlet c6cdaf9c31 De-densify some code 2021-07-06 08:23:30 -04:00
Tyler Goodlet 91640facbc Always shield-cancel the caller on cancel-causing errors, add teardown logging 2021-07-06 08:23:30 -04:00
Tyler Goodlet c2484e88a1 First try: pack cancelled tracebacks and ship to caller 2021-07-06 08:23:30 -04:00
Tyler Goodlet 3423ea4011 Add temp warning msg for context cancel call 2021-07-06 08:23:29 -04:00
Tyler Goodlet af701c16ee Consider relaying context error via raised-in-scope-nursery task 2021-07-06 08:23:29 -04:00
Tyler Goodlet 1703171bea Set stream "end of channel" after shielded check!
Another facepalm that was causing serious issues for code that is using
the `.shielded` feature..

Add a bunch more detailed comments for all this subtlety and hopefully
get it right once and for all. Also aggregated the `trio` errors that
should trigger closure inside `.aclose()`, hopefully that's right too.
2021-07-06 08:23:29 -04:00
Tyler Goodlet 3d633408fc Don't clobber msg loop mem chan on rx stream close
Revert this change since it really is poking at internals and doesn't
make a lot of sense. If the context is going to be cancelled then the
msg loop will tear down the feed memory channel when ready; we don't
need to be clobbering it and confusing the runtime machinery lol.
2021-07-06 08:23:29 -04:00
Tyler Goodlet 8eb889a745 Modernize streaming tests 2021-07-06 08:23:29 -04:00
Tyler Goodlet 349d82d182 Speedup the dynamic pubsub test 2021-07-06 08:23:29 -04:00
Tyler Goodlet 7c5fd8ce9f Add detailed ``@tractor.context`` cancellation/termination tests 2021-07-06 08:23:29 -04:00
Tyler Goodlet 196dea80db Drop trailing comma 2021-07-06 08:23:29 -04:00
Tyler Goodlet 54916be601 Adjustments for non-frozen context dataclass change 2021-07-06 08:23:29 -04:00
Tyler Goodlet 1a69727b75 Fix exception typing 2021-07-06 08:23:29 -04:00
Tyler Goodlet 348148ff1e Explicitly formalize context/streaming teardown
Add clear teardown semantics for `Context` such that the remote side
cancellation propagation happens only on error or if client code
explicitly requests it (either by exit flag to `Portal.open_context()`
or by manually calling `Context.cancel()`).  Add `Context.result()`
to wait on and capture the final result from a remote context function;
any lingering msg sequence will be consumed/discarded.

Changes in order to make this possible:
- pass the runtime msg loop's feeder receive channel in to the context
  on the calling (portal opening) side such that a final 'return' msg
  can be waited upon using `Context.result()` which delivers the final
  return value from the callee side `@tractor.context` async function.
- always await a final result from the target context function in
  `Portal.open_context()`'s `__aexit__()` if the context has not
  been (requested to be) cancelled by client code on block exit.
- add an internal `Context._cancel_called` for context "cancel
  requested" tracking (much like `trio`'s cancel scope).
- allow flagging a stream as terminated using an internal
  `._eoc` flag which will mark the stream as stopped for iteration.
- drop `StopAsyncIteration` catching in `.receive()`; it does
  nothing.
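A rough call-site sketch of these semantics (actor/function names
invented; kwargs assume current-era apis):

    import trio
    import tractor

    @tractor.context
    async def sleeper(ctx) -> str:
        await ctx.started('woken')  # first value, delivered on open
        return 'final'              # retrieved via `Context.result()`

    async def main():
        async with tractor.open_nursery() as an:
            portal = await an.start_actor('demo', enable_modules=[__name__])
            async with portal.open_context(sleeper) as (ctx, first):
                assert first == 'woken'
                # no error and no `ctx.cancel()` requested, so exit
                # waits on (and here we explicitly capture) the final
                # return value from the callee side
                assert await ctx.result() == 'final'
            await portal.cancel_actor()

    trio.run(main)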
2021-07-06 08:23:29 -04:00
Tyler Goodlet 73302d9d16 Specially raise a `ContextCancelled` for a task-context rpc 2021-07-06 08:23:29 -04:00
Tyler Goodlet 409f7f0d5a Expose streaming components at top level 2021-07-06 08:23:29 -04:00
Tyler Goodlet eb3662f981 Add a specially handled `ContextCancelled` error 2021-07-06 08:23:29 -04:00
Tyler Goodlet 3999849b03 Add a multi-task streaming test 2021-07-06 08:23:29 -04:00
Tyler Goodlet 39b9896a62 Only close recv chan if we get a ref 2021-07-06 08:23:29 -04:00
Tyler Goodlet 5b8b7d374a Add error case 2021-07-06 08:23:29 -04:00
Tyler Goodlet 9a4244b9a6 Support no arg to `Context.started()` like trio 2021-07-06 08:23:29 -04:00
Tyler Goodlet a2e2f7e7a8 Only send stop msg if not received from far end 2021-07-06 08:23:29 -04:00
Tyler Goodlet 6559fb72aa Expose msg stream types at top level 2021-07-06 08:23:29 -04:00
Tyler Goodlet e5bc07f355 Add dynamic pubsub test using new bidir stream apis 2021-07-06 08:23:29 -04:00
Tyler Goodlet e311430d25 Be more pedantic with error handling 2021-07-06 08:23:29 -04:00
Tyler Goodlet 08eb6bd019 Fix typing 2021-07-06 08:23:29 -04:00
Tyler Goodlet 98133a984e Parametrize with async for style tests 2021-07-06 08:23:29 -04:00
Tyler Goodlet 1f8966ba64 Support passing `shield` at stream construction 2021-07-06 08:23:29 -04:00
Tyler Goodlet 4240efc7e3 Add basic test set 2021-07-06 08:23:29 -04:00
Tyler Goodlet 4846c6d498 Cancel scope on stream consumer completion 2021-07-06 08:23:29 -04:00
Tyler Goodlet 14114547e8 Expose `@context` decorator at top level 2021-07-06 08:23:29 -04:00
Tyler Goodlet e3955bb62b Add initial bi-directional streaming
This mostly adds the api described in
https://github.com/goodboy/tractor/issues/53#issuecomment-806258798

The first draft summary:
- formalize bidir streaming using the `trio.Channel` style interface
  which we derive as a `MsgStream` type.
- add `Portal.open_context()` which provides a `trio.Nursery.start()`
  remote task invocation style for setting up and tearing down tasks
  contexts in remote actors.
- add a distinct `'started'` message to the ipc protocol to facilitate
  `Context.start()` with a first return value.
- for our `ReceiveMsgStream` type, don't cancel the remote task in
  `.aclose()`; this is now done explicitly by the surrounding `Context`
   usage: `Context.cancel()`.
- streams in either direction still use a `'yield'` message keeping the
  proto mostly symmetric without having to worry about which side is the
  caller / portal opener.
- subtlety: only allow sending a `'stop'` message during a 2-way
  streaming context from `ReceiveStream.aclose()`, detailed comment
  with explanation is included.

Relates to #53
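Sketch of the resulting call-site shape (names invented):

    import trio
    import tractor

    @tractor.context
    async def echo(ctx) -> None:
        await ctx.started()
        async with ctx.open_stream() as stream:
            async for msg in stream:    # 'yield' msgs flowing in
                await stream.send(msg)  # 'yield' msgs flowing out

    async def main():
        async with tractor.open_nursery() as an:
            portal = await an.start_actor('echoer', enable_modules=[__name__])
            async with portal.open_context(echo) as (ctx, _):
                async with ctx.open_stream() as stream:
                    await stream.send('ping')
                    async for msg in stream:
                        assert msg == 'ping'
                        break
                await ctx.cancel()
            await portal.cancel_actor()

    trio.run(main)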
2021-07-06 08:23:29 -04:00
goodboy 4d530deac3
Merge pull request #215 from goodboy/transport_cleaning
Transport cleaning: attempt to define our graceful channel close signal.
2021-07-06 08:20:19 -04:00
Tyler Goodlet 55760b3fe0 Only expect further message in non-name-error first case 2021-07-04 12:55:36 -04:00
Tyler Goodlet 6aab16f877 Drop added logging around root cancel 2021-07-04 11:00:08 -04:00
Tyler Goodlet caa70245e0 Try remapping all broken errs wholesale on windows 2021-07-04 10:47:15 -04:00
Tyler Goodlet 9c9309faf8 Handle race for tty by child actors 2021-07-04 10:25:41 -04:00
Tyler Goodlet 3f75732b02 Remap windows specific connection reset error 2021-07-04 10:25:19 -04:00
Tyler Goodlet 1edf5c2f06 Specially remap TCP 104-connection-reset to `TransportClosed`
Since we currently have no real "discovery protocol" between process
trees, the current naive approach is to check via a connect and drop to
see if a TCP server is bound to a particular address during root actor
startup. This was a historical decision and had no real grounding beyond
taking a simple approach to get something working when the project
was first started.

This is obviously problematic from an error handling perspective since
we need to be able to avoid such quick connect-and-drops from cancelling
an "arbiter"'s (registry actor's) channel-msg loop machinery (which
would propagate and cancel the actor).

For now we map this particular TCP error, which gets remapped by `trio`
as a `trio.BrokenResourceError` to our own internal `TransportClosed`
which is swallowed by channel message loop processing and indicates
a graceful teardown of the far end actor.
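Illustratively, the remap is a catch-and-raise at the transport read (a
sketch; the base class and names are assumptions, not the actual
`tractor` internals):

    import trio

    class TransportClosed(trio.ClosedResourceError):
        'Graceful teardown of the far end transport.'

    async def recv(stream: trio.SocketStream) -> bytes:
        try:
            data = await stream.receive_some()
        except trio.BrokenResourceError as err:
            # ECONNRESET (104) surfaces via trio as a broken resource;
            # treat it as a graceful disconnect, not a crash
            raise TransportClosed('far end dropped') from err
        if not data:
            raise TransportClosed('eof')
        return data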
2021-07-03 18:57:54 -04:00
Tyler Goodlet a2d400583f Fix tuple type 2021-07-02 18:10:06 -04:00
Tyler Goodlet 85246d2df3 Benign deps reorg 2021-07-02 11:56:14 -04:00
Tyler Goodlet b372f4c92b Handle top level multierror that presents now? 2021-07-02 11:55:16 -04:00
Tyler Goodlet 32b4ae0603 Accept transport closed error during handshake and msg loop 2021-07-02 11:38:24 -04:00
Tyler Goodlet 80e100f818 Add our own "transport closed" signal
This changes some super old (and bad) code from the project's very
early days. For some ridiculous reason I must have thought masking
`trio`'s internal stream / transport errors and a TCP EOF as
`StopAsyncIteration` was somehow a good idea. The reality is you probably
want to know the difference between an unexpected transport error
and a simple EOF lol. This begins to resolve that by adding our own
special `TransportClosed` error to signal the "graceful" termination of
a channel's underlying transport. Oh, and this builds on the `msgspec`
integration which helped shed light on the core issues here B)
2021-07-02 11:36:22 -04:00
Tyler Goodlet 44d7988204 New docs theme hotfix 2021-06-14 08:10:59 -04:00
goodboy e98302212a
Merge pull request #211 from goodboy/new_docs_polish
New docs theme, readme actors rant.
2021-06-14 07:33:02 -04:00
Tyler Goodlet 0301d105dd Better rant flow as per suggestions 2021-06-14 06:41:10 -04:00
Tyler Goodlet 4ee7db338d Fill out the rant a bit more 2021-06-14 06:31:16 -04:00
Tyler Goodlet 558d546c8f Bump readme for #210 2021-06-14 06:31:16 -04:00
Tyler Goodlet 5528a4eb45 Attempt configuring sphinx-book-theme 2021-06-14 06:31:16 -04:00
Tyler Goodlet fb23a9d8d4 Try out sphinx-book-theme 2021-06-14 06:31:16 -04:00
goodboy f48548ab94
Merge pull request #197 from goodboy/drop_run
Drop run
2021-05-07 12:02:23 -04:00
Tyler Goodlet 73e123bac7 Fix line length 2021-05-07 11:21:40 -04:00
Tyler Goodlet 2b4cf59ee1 Drop sleep 2021-05-07 11:21:40 -04:00
Tyler Goodlet 0551756e22 Use trio.run() in windows tests 2021-05-07 11:21:40 -04:00
Tyler Goodlet 5ca963148e Disable leftover debug mode 2021-05-07 11:21:40 -04:00
Tyler Goodlet 4798d3b5db Drop lingering rpc_module_paths refs 2021-05-07 11:21:40 -04:00
Tyler Goodlet 247483ee93 Drop run and rpc_module_paths from streaming tests 2021-05-07 11:21:40 -04:00
Tyler Goodlet 9e64161538 Drop run and rpc_module_paths from rpc tests 2021-05-07 11:21:40 -04:00
Tyler Goodlet 3bddf9a94b Drop run and rpc_module_paths from spawning tests 2021-05-07 11:21:40 -04:00
Tyler Goodlet 1eedd463cb Drop run and rpc_module_paths from pubsub tests 2021-05-07 11:21:40 -04:00
Tyler Goodlet b46e60ab9d Drop run from multi prog tests 2021-05-07 11:21:40 -04:00
Tyler Goodlet cb527c2562 Mostly drop run from local tests 2021-05-07 11:21:40 -04:00
Tyler Goodlet 1584c547cd Drop run and rpc_module_paths from discovery tests 2021-05-07 11:21:40 -04:00
Tyler Goodlet 2efd8ed167 Drop run and rpc_module_paths from cancel tests 2021-05-07 11:21:40 -04:00
Tyler Goodlet a54260a67e Drop tractor.run() from docs 2021-05-07 11:21:40 -04:00
Tyler Goodlet 98a0594c26 Drop `tractor.run()` from all examples 2021-05-07 11:21:40 -04:00
goodboy ffd10e193e
Merge pull request #208 from goodboy/mp_teardown_hardening
Mp teardown hardening
2021-05-06 19:59:50 -04:00
Tyler Goodlet 87971de1d9 Re-raise any sidestepped `trio.Cancelled` 2021-05-06 12:05:17 -04:00
Tyler Goodlet 9f38406e85 Appease mypy 2021-05-06 12:05:17 -04:00
Tyler Goodlet c4b42000eb Shield around root actor cancel 2021-05-06 12:05:17 -04:00
Tyler Goodlet 607c48f1ac Distinctly separate and harden mp spawning
It's clear now that special attention is needed to handle the case where
a spawned `multiprocessing` proc is started but then the parent is
cancelled before the child can connect back; in this case we need to be
sure to kill the near-zombie child asap. This may end up being the
solution to other resiliency issues seen around mp with nested process
trees too. More testing is needed to be sure.

Relates to #84 #89 #134 #146
2021-05-06 12:05:17 -04:00
goodboy af93b8532a
Merge pull request #206 from goodboy/stream_contexts
Explicit stream contexts
2021-05-04 10:41:03 -04:00
Tyler Goodlet fc36e73628 Comment out `MsgStream` for now 2021-04-28 16:40:38 -04:00
Tyler Goodlet b1f657e246 De-compact async with tuple on 3.8-
Turns out we can't use the nicer syntax before Python 3.9 (even though
it doesn't seem documented anywhere?).

Relates to #207
2021-04-28 16:35:15 -04:00
Tyler Goodlet 2498a4963b Update all tests to new streaming API 2021-04-28 12:23:14 -04:00
Tyler Goodlet 5a5e6baad1 Update all examples to new streaming API 2021-04-28 12:23:08 -04:00
Tyler Goodlet f59346d854 Add func type checking to `.run_in_actor()` 2021-04-28 12:23:08 -04:00
Tyler Goodlet 86fc418050 Error on bad registry pops 2021-04-28 12:23:08 -04:00
Tyler Goodlet 83af295b45 Fix func type checking 2021-04-28 12:23:08 -04:00
Tyler Goodlet ad9256bcdb Drop stream exhaustion; no longer needed 2021-04-28 12:23:08 -04:00
Tyler Goodlet 3e19fd311b Move debugger locking to new stream api 2021-04-28 12:23:08 -04:00
Tyler Goodlet 80c96cab01 Add a warning for soon to be deprecated `ctx` use in `@stream` func 2021-04-28 12:23:08 -04:00
Tyler Goodlet 36251357b3 Add a new one-way stream API
NB: this is a breaking change removing support for `Portal.run()` being
able to invoke remote streaming functions and instead replacing the
method call with an async context manager api `Portal.open_stream_from()`.
This style explicitly defines stream teardown at the call site instead
of expecting the user to handle tricky things correctly themselves: eg.
`async_generator.aclosing()`. Going forward `Portal.run()` can be used
only for invoking async functions.
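The new call-site style, sketched (the streaming func is made up;
kwargs assume current-era apis):

    import trio
    import tractor

    async def stream_squares(n: int):
        for i in range(n):
            yield i ** 2

    async def main():
        async with tractor.open_nursery() as an:
            portal = await an.start_actor('sq', enable_modules=[__name__])
            # stream teardown is now explicit and scoped to this block
            async with portal.open_stream_from(stream_squares, n=5) as squares:
                async for sq in squares:
                    print(sq)
            await portal.cancel_actor()

    trio.run(main)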
2021-04-28 12:23:08 -04:00
Tyler Goodlet 81f3558494 Formatting 2021-04-28 12:23:08 -04:00
Tyler Goodlet 897ab79946 Add a no runtime error 2021-04-28 12:23:08 -04:00
Tyler Goodlet 7f38b7225d Aggregate and organize streaming components
Move receive stream into streaming modules and rebrand as a "message
stream".  Factor out cancellation mechanics in `.aclose()` into the
`Context` type which will soon provide the api for cancelling portal
invocations.  Comment-stage a few methods on both types in anticipation
of a new bi-directional streaming api.  Add a `MsgStream` bidirectional
channel type which will be the eventual type yielded from
`Context.open_stream()`.  Adjust the response/dialog types to be the set
`{'asyncfun', 'asyncgen', 'context'}`. OH, and add async func checking
in `Portal.run()` to catch and error on sync funcs early.
2021-04-28 12:23:08 -04:00
goodboy a5a88e2f64
Merge pull request #205 from goodboy/drop_sync_funcs
Drop sync func invocation support.
2021-04-28 12:14:41 -04:00
Tyler Goodlet d0eacc3fd6 Appease mypy 2021-04-27 12:08:30 -04:00
Tyler Goodlet 89ce1a63e4 Only accept asyncfunc response type 2021-04-27 12:08:30 -04:00
Tyler Goodlet 1f1619c730 Convert all test suite sync funcs 2021-04-27 12:08:30 -04:00
Tyler Goodlet 5798ef6796 Enforce async funcs on callee side, convert arbiter methods 2021-04-27 12:08:30 -04:00
Tyler Goodlet c2a1612bf5 Drop sync function support
You can always wrap a sync function in an async one, and there seems to
be no good reason to support invoking them directly, especially since
cancellation won't work without some thread hackery. If it's requested,
we'll point users to `trio-parallel`.

Resolves #77
2021-04-27 12:08:30 -04:00
Tyler Goodlet be22a2526a Add `Actor.cancel_soon()` for sync self destruct
Add a sync method that can be used to cancel the current actor from
a synchronous context. This is useful in debugging situations where
sync debugger code may need to kill the process tree.

Also, make the internal "lifetime stack" a global var; easier to manage
from client code that may want to add callbacks prior to the actor
runtime being fully set up.
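One plausible shape for the sync self-cancel (purely a sketch; the
stashed token and nursery attributes are assumptions):

    import trio

    class Actor:
        # hypothetical slice of the actor runtime
        def __init__(self, token: trio.lowlevel.TrioToken,
                     nursery: trio.Nursery) -> None:
            self._token = token
            self._service_n = nursery

        async def cancel(self) -> None:
            ...  # the full async cancellation sequence

        def cancel_soon(self) -> None:
            # schedule async self-cancellation from sync (eg. pdb) code
            self._token.run_sync_soon(
                self._service_n.start_soon, self.cancel,
            )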
2021-04-27 11:35:28 -04:00
Tyler Goodlet a13b3fe0a5 Bump alpha version 2021-04-27 11:35:16 -04:00
goodboy 0f4f7f05cb
Merge pull request #202 from goodboy/first_pypi_release
First alpha
2021-02-28 22:02:00 -05:00
Tyler Goodlet 34064cd2c7 Tweak classifiers 2021-02-28 20:44:45 -05:00
goodboy ac30374a15
Merge pull request #201 from goodboy/single_func_example
Single func example
2021-02-27 18:04:15 -05:00
Tyler Goodlet e80ab60e0b Add pypi install line 2021-02-27 16:10:57 -05:00
Tyler Goodlet c07a5f6e11 Try fix worker pool link, again 2021-02-27 16:08:44 -05:00
Tyler Goodlet 3766731de5 Add a single func ex for our first one 2021-02-27 14:25:12 -05:00
Tyler Goodlet 2f8dd0199d Tweak version for release, add 3.9 tag 2021-02-25 23:51:53 -05:00
goodboy a0981be1fa
Merge pull request #163 from goodboy/readme_pump
Readme rework draft
2021-02-25 09:24:57 -05:00
Tyler Goodlet 0679eb5646 Further simplifications 2021-02-25 09:10:18 -05:00
Tyler Goodlet ab192741ce Fix code indent and worker pool link 2021-02-25 09:10:18 -05:00
Tyler Goodlet 8ee9007798 Reorg and rejig flow
Thanks to @richardsheridan for many suggestions!
2021-02-25 09:10:18 -05:00
Tyler Goodlet f342c3a0c4 Attempt to add logo 2021-02-25 09:10:18 -05:00
Tyler Goodlet 0c8f9dbce0 Add comma 2021-02-25 09:10:18 -05:00
Tyler Goodlet 71a35aadef Drop worker pool and add debug example 2021-02-25 09:10:18 -05:00
Tyler Goodlet 4a512bc879 Compress terminal cmd line lens 2021-02-25 09:10:18 -05:00
Tyler Goodlet 0e7db46631 Revert auto-gen readme and merge in auto-gen code blocks by hand for now 2021-02-25 09:10:18 -05:00
Tyler Goodlet 92f4b402ad Draft use sphinx-restbuilder to gen readme 2021-02-25 09:10:18 -05:00
Tyler Goodlet 90c987d0ae Further tweaks, add non-scary snippet 2021-02-25 09:10:18 -05:00
Tyler Goodlet 0a5a4d8487 Readme rework draft 2021-02-25 09:10:18 -05:00
goodboy 49a02e6700
Merge pull request #198 from goodboy/kinda_drop_run
Kinda drop run
2021-02-25 09:09:41 -05:00
Tyler Goodlet 47565cfbf3 Use root as default name from `tractor.run()` 2021-02-25 08:51:28 -05:00
Tyler Goodlet cd636b270e Update debug tests to expect 'root' actor name 2021-02-24 13:38:20 -05:00
Tyler Goodlet b7b2436bc1 Remove tractor run from some debug examples 2021-02-24 13:14:40 -05:00
Tyler Goodlet 8fabd27dbe Lint fixes 2021-02-24 13:13:51 -05:00
Tyler Goodlet 983e66b31b Add second implicit-runtime-boot branch 2021-02-24 13:13:45 -05:00
Tyler Goodlet b285db4c58 Factor OCA supervisor into new func 2021-02-24 13:13:38 -05:00
goodboy 35775c6763
Merge pull request #176 from goodboy/eg_worker_poolz
Add our version of the std lib's "worker pool"
2021-02-22 09:55:23 -05:00
Tyler Goodlet 2b3beac4b4 Test putting readme in docs dir 2021-02-21 17:52:04 -05:00
goodboy 35dc56d2c5
Merge pull request #194 from goodboy/sync_breakpoint
Support sync code breakpointing via built-in
2021-02-21 17:49:43 -05:00
Tyler Goodlet a93321e48e Don't run stdlib example as part of test set 2021-02-21 15:41:21 -05:00
Tyler Goodlet 5ffd2d2ab3 Ignore type checks on stdlib overrides 2021-02-21 14:08:23 -05:00
Tyler Goodlet f7e1c526c5 Add `aclosing()` around asyn gen loop 2021-02-21 14:08:23 -05:00
Tyler Goodlet 07653bc02e Run parallel examples 2021-02-21 14:08:23 -05:00
Tyler Goodlet a90a2b8787 Contain the error 2021-02-21 14:08:23 -05:00
Tyler Goodlet da8c8c1773 Add concise readme example 2021-02-21 14:08:23 -05:00
Tyler Goodlet 57a24cdcf8 More comments 2021-02-21 14:08:23 -05:00
Tyler Goodlet 9b07e9ad7c Yield results on demand using a mem chan 2021-02-21 14:08:23 -05:00
Tyler Goodlet 3c320f467f Remove use of tractor.run() 2021-02-21 14:08:23 -05:00
Tyler Goodlet 2555765882 Make new paralellism example space 2021-02-21 14:08:23 -05:00
Tyler Goodlet 7db5739143 Add our version of the std lib's "worker pool"
This is a draft of the `tractor` way to implement the example from the
"processs pool" in the stdlib's `concurrent.futures` docs:

https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor-example

Our runtime is of course slower to start up but once up we get the
same performance; this confirms that we need to focus some effort now
on warm up and teardown times. The mp forkserver method definitely
improves startup delay; rolling our own will likely be a good hot spot
to play with.

What's really nice is our implementation is done in approx a 10th of
the code ;)

Also, do we want to offer an interface that yields results as they
arrive?

Relates to #175
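For reference, the gist of it condensed (details may differ from the
actual example script):

    import trio
    import tractor

    PRIMES = [112272535095293, 112582705942171, 115280095190773]

    async def is_prime(n: int) -> bool:
        # naive trial division; enough to keep a worker busy
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    async def main():
        async with tractor.open_nursery() as an:
            portals = [
                await an.run_in_actor(is_prime, n=p)
                for p in PRIMES
            ]
            for p, portal in zip(PRIMES, portals):
                print(p, 'is prime:', await portal.result())

    trio.run(main)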
2021-02-21 14:08:23 -05:00
Tyler Goodlet 7888ef6f01 Fix more stdlib typing issues with latest mypy 2021-02-21 12:48:03 -05:00
Tyler Goodlet 109066dda9 Support sync code breakpointing via built-in
Override `breakpoint()` for sync code making it work
properly with `trio` as per:

https://github.com/python-trio/trio/issues/1155#issuecomment-742964018

Relates to #193
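The core of the trick, sketched (the real hook enters tractor's
tty-locking debug machinery rather than plain `pdb`):

    import sys
    import pdb

    def _sync_breakpoint_hook(*args, **kwargs):
        # drop into pdb at the *caller's* frame; `breakpoint()` in any
        # sync code now dispatches through this replacement hook
        pdb.Pdb().set_trace(sys._getframe().f_back)

    sys.breakpointhook = _sync_breakpoint_hook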
2021-02-21 12:36:00 -05:00
goodboy d8b6c0093c
Merge pull request #188 from goodboy/we_aint_got_zombie_shields
We aint got zombie shields
2021-01-18 11:01:14 -05:00
goodboy 8fdab8e0be
Merge pull request #187 from goodboy/deprecate_rpcmodpaths
Begin rpc_module_paths deprecation
2021-01-14 18:28:16 -05:00
Tyler Goodlet 9f4e497b9c Don't shield proc waits 2021-01-14 18:21:26 -05:00
Tyler Goodlet 14d60147fa Add an example which breaks shielded proc waits 2021-01-14 18:21:26 -05:00
Tyler Goodlet e546ead2ff Pub sub internals type fixes 2021-01-14 18:20:59 -05:00
Tyler Goodlet 3df001f3a9 Fix msg pub global lock sharing
Using `None` as the default key for a `@msg.pub` can cause conflicts if
there is more than one "taskless" (no `tasks=` passed) pub offered on
an actor... So use the first trio "task name" (usually just the
function name) instead, as sketched below, thus avoiding this very hard
to debug and understand problem.

Probably should throw in a test but I'm super lazy today.
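I.e. derive the default key along these lines (sketch):

    import trio

    def default_pub_key() -> str:
        # the current trio task's name is usually the target function's
        # name, which is unique enough per "taskless" pub
        return trio.lowlevel.current_task().name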
2021-01-14 18:20:49 -05:00
Tyler Goodlet 5ed5d18ccb Begin rpc_module_paths deprecation 2021-01-08 22:08:45 -05:00
goodboy dfaf1e3631
Merge pull request #185 from goodboy/implicit_runtime
Implicit runtime
2021-01-08 22:07:43 -05:00
Tyler Goodlet 32b10681a1 Drop tractor.run() from @tractor_test 2021-01-08 20:56:03 -05:00
Tyler Goodlet 41a4de5af2 Use actual task name lel 2021-01-08 20:55:42 -05:00
Tyler Goodlet 59421d9f3a Fix some borked tests 2021-01-08 20:55:11 -05:00
Tyler Goodlet 333ddcf93f Can we ever really appease mypy? 2021-01-03 11:18:31 -05:00
Tyler Goodlet 0bb2163b0c Implicitly open root actor on first nursery use. 2021-01-02 21:39:30 -05:00
Tyler Goodlet bd3059f01b Allow for error bypass 2021-01-02 21:39:30 -05:00
Tyler Goodlet 803152ead5 Use explicit named args 2021-01-02 21:39:30 -05:00
Tyler Goodlet e6245671b0 Use runtime level on attach 2021-01-02 21:38:55 -05:00
goodboy bfe500060f
Merge pull request #181 from goodboy/drop_tractor_run
Deprecate `tractor.run()`
2020-12-28 12:53:04 -05:00
goodboy 3a5daa5b7a
Merge pull request #169 from goodboy/py3.9
Py3.9
2020-12-27 14:30:39 -05:00
Tyler Goodlet 723fb17394 Add deprecation warning to run() 2020-12-27 13:29:30 -05:00
Tyler Goodlet f05534e472 Re-org root actor startup into context manager
This begins the move to dropping support for `tractor.run()` which we
don't really need since the runtime is started (as it always has been)
from a new sub-task / nursery. Instead this introduces starting the
actor tree through a `open_root_actor()` async context manager which
we'll likely implicitly call (from the root) on the first use of an
actor nursery.

Drop `_actor._start_actor()` and factor its contents into this new api.
Make `run()` and `run_daemon()` use `open_root_actor()` until we decide
to remove them.

Relates to #168 and #177
2020-12-27 13:29:30 -05:00
Tyler Goodlet b040cdc0c9 Add null byte guard from mainline 2020-12-27 13:28:54 -05:00
Tyler Goodlet 02ac20a43c Include Python 3.9 in CI 2020-12-27 13:28:54 -05:00
goodboy f427c98cf6
Merge pull request #178 from goodboy/denoise_logging
Denoise logging
2020-12-27 13:10:17 -05:00
Tyler Goodlet 5127effd88 Drop warning level logging assert(s) 2020-12-26 15:45:55 -05:00
Tyler Goodlet 6b650c0fe6 Add a "runtime" log level 2020-12-26 15:45:45 -05:00
Tyler Goodlet 0d05a727b6 Use error log level by default 2020-12-25 15:28:32 -05:00
Tyler Goodlet c28ffd8b1c Don't exception log multi-cancels 2020-12-25 15:23:59 -05:00
Tyler Goodlet 5d7a4e2b12 Denoise some common teardown "errors" to warnings. 2020-12-25 15:10:20 -05:00
Tyler Goodlet 8522f90000 Add type annots to exceptions mod
Also add an `is_multi_cancelled()` predicate to test for
`trio.MultiError`s that consist entirely of cancel signals.

Resolves #125
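A predicate of roughly this shape (a sketch; `trio.MultiError` was the
multi-error type of this era):

    import trio

    def is_multi_cancelled(err: BaseException) -> bool:
        # True only if every leaf exception is a cancel signal
        if isinstance(err, trio.MultiError):
            return all(
                isinstance(sub, trio.Cancelled) or is_multi_cancelled(sub)
                for sub in err.exceptions
            )
        return isinstance(err, trio.Cancelled)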
2020-12-25 15:07:36 -05:00
goodboy f4f39c29f3
Merge pull request #174 from goodboy/func_refs_always
Allow passing function refs to `Portal.run()`
2020-12-22 19:45:10 -05:00
Tyler Goodlet 4bf9b27f57 Drop all .statespace refs; it was a silly idea 2020-12-22 19:33:16 -05:00
Tyler Goodlet 0eba5f4708 Port remaining tests to pass func refs 2020-12-22 10:39:47 -05:00
Tyler Goodlet 493f2efb50 Port tests to `Portal.run_from_ns()` 2020-12-22 10:39:47 -05:00
Tyler Goodlet 9fd3c42eb1 Port inter-process method calls to `Portal.run_from_ns()` 2020-12-22 10:39:47 -05:00
Tyler Goodlet 7134f35d6e Add `Portal.run_from_ns()`
It turns out, in order to maintain our sneaky little "call an `Actor`
method in this remote process" trick, we still need the ability to
invoke functions from a namespace. We're currently using a "self"
namespace as a way to do this for internal inter-process method
calling. Either way,
I see no reason not to keep a public method for this invoke style (we
just won't market it) since it is still how the machinery works
underneath.
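Which keeps internal calls looking like this (a fragment; `portal` is
assumed in scope and the `(namespace, funcname, **kwargs)` calling
convention is an assumption):

    # invoke a method on the remote `Actor` instance itself via the
    # special 'self' namespace
    await portal.run_from_ns('self', 'cancel')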
2020-12-22 10:39:47 -05:00
Tyler Goodlet a668f714d5 Allow passing function refs to `Portal.run()`
This resolves and completes #69 allowing all RPC invocation APIs to pass
function references directly instead of explicit `str` names for the
target namespace and function (this is still done implicitly
underneath).  This brings us closer to `trio`'s task running API as well
as acknowledges that any inter-host RPC system (and API) will likely
need to be implemented on top of local RPC primitives anyway. Even if
this ends up **not** being true we can always go to "function stubs" as
part of our IAC protocol or, add a new method to do explicit namespace
calls: `.run_from_module()` or whatever everyone votes on.

Resolves #69

Further, this commit drops `Actor.statespace` from the entire system
since a user can easily get this same functionality using module
level variables. Fix docs to match all these changes (luckily mostly
already done due to example scripts referencing).
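The before/after at call sites (module and function names made up):

    # before: explicit str names for the target module and function
    await portal.run('mymod', 'add', a=1, b=2)

    # after: pass the function reference; namespace/name resolution
    # still happens implicitly underneath
    from mymod import add
    await portal.run(add, a=1, b=2)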
2020-12-21 09:09:55 -05:00
goodboy c3b209fa4f
Merge pull request #173 from goodboy/fix_debug_tests_in_ci_again
Fix debug tests in ci again
2020-12-19 00:06:29 -05:00
Tyler Goodlet dc475b54ab More obnoxious CI timeout handling 2020-12-18 19:26:29 -05:00
Tyler Goodlet 0d67ce4abc Fix collections type import for py3.10 2020-12-18 17:58:07 -05:00
Tyler Goodlet a95488ad2f Handle pexpect's internal timeout 2020-12-18 17:57:44 -05:00
goodboy 1701493087
Merge pull request #171 from goodboy/stream_channel_shield
Add a way to shield a stream's underlying channel
2020-12-18 11:17:08 -05:00
Tyler Goodlet 47f68a0532 Skip debugger tests on non-trio backends 2020-12-17 16:37:05 -05:00
Tyler Goodlet 797bcc1df2 Handle early timeouts on last debugger test 2020-12-17 13:35:45 -05:00
Tyler Goodlet 201771a521 Fix mypy, change internal type name to `ReceiveStream`, settle on `.shield()` 2020-12-17 12:01:49 -05:00
Tyler Goodlet 15ead6b561 Add a way to shield a stream's underlying channel
Add a ``tractor._portal.StreamReceiveChannel.shield_channel()`` context
manager which allows for avoiding the closing of an IPC stream's
underlying channel for the purposes of task re-spawning. Sometimes you
might want to cancel a task consuming a stream but not tear down the IPC
between actors (the default). A common use case might be where the
task's "setup" work needs to be redone but you want to keep the
established portal / channel intact despite the task restart.

Includes a test.
2020-12-16 21:42:28 -05:00
goodboy a510eb0b2b
Merge pull request #170 from goodboy/pdb_madness
End the `pdb` SIGINT handling madness
2020-12-12 14:53:31 -05:00
Tyler Goodlet 0118589875 Add race case handling for mp backend 2020-12-12 13:30:14 -05:00
Tyler Goodlet d497078eb7 Appease 3.8 mypy 2020-12-11 20:04:56 -05:00
Tyler Goodlet e51c2620e5 End the `pdb` SIGINT handling madness
Turns out this is a lower level issue in terms of the stdlib's default
`pdb.Pdb` settings and how they conflict with `trio`s cancellation and
KBI handling. The details are hashed out more thoroughly in
python-trio/trio#1155. Maybe we can get a fix in trio so things are
solved under our feet :)
2020-12-11 00:15:09 -05:00
goodboy e27dc2e244
Merge pull request #164 from goodboy/clean_log_header
Drop duplicate project-package name in msg header
2020-12-09 10:25:31 -05:00
Tyler Goodlet 12f425137c Drop duplicate project-package name in msg header 2020-11-03 12:15:49 -05:00
goodboy 2674c54c0b
Merge pull request #162 from goodboy/debug_refine
Debug refine
2020-10-15 23:31:08 -04:00
Tyler Goodlet a8406c8626 Toss in another tests with daemon subactors 2020-10-15 23:16:56 -04:00
Tyler Goodlet 1580cc6fa0 Add explanation to module load error 2020-10-15 23:16:56 -04:00
Tyler Goodlet 5822d38ae4 Set _is_root runtime var in _main() 2020-10-15 23:16:54 -04:00
goodboy 7ddc4db041
Merge pull request #161 from goodboy/drop_warn
Drop warn
2020-10-14 14:15:50 -04:00
Tyler Goodlet bd34140a5f Revert "Try out sync sleep on windows again"
Nope, still doesn't work...

This reverts commit 676cdafa8f.
2020-10-14 14:06:04 -04:00
Tyler Goodlet 3b8684f655 Always call `Actor.cancel()` at end of root's main task
It's simpler and the only real logical difference is logging messages.
This should also give us an overall consistent tear down sequence.
2020-10-14 13:59:57 -04:00
Tyler Goodlet 676cdafa8f Try out sync sleep on windows again 2020-10-14 13:48:41 -04:00
Tyler Goodlet 02a9cac557 Drop remaining warn()s 2020-10-14 13:48:14 -04:00
Tyler Goodlet f60321a35a Always cancel service nursery last
The channel server should be torn down *before* the rpc
task/service nursery. Do this explicitly even in the root's main task
to avoid a strange hang I found in the pubsub tests. Start dropping
the `warnings.warn()` usage.
2020-10-14 13:46:05 -04:00
goodboy 7115d6c3bd
Merge pull request #129 from goodboy/multiproc_debug
Wen? Multiprocessing-native debugger now!
2020-10-14 09:14:03 -04:00
Tyler Goodlet 61a8df358c Comments tweak 2020-10-14 09:06:40 -04:00
Tyler Goodlet bba47e4c7a Add gh actions badge 2020-10-13 21:45:54 -04:00
Tyler Goodlet 1c25f25ab0 Drop travisCI; it's slower and has worse windows support. 2020-10-13 21:27:17 -04:00
Tyler Goodlet 0177268f13 Report on skipped tests 2020-10-13 16:30:19 -04:00
Tyler Goodlet 1b6ee2ecf6 Skip sync sleep test on windows 2020-10-13 15:26:46 -04:00
Tyler Goodlet 15edcc622d Skip it on windows too 2020-10-13 15:13:46 -04:00
Tyler Goodlet fd59f4ad16 On windows .spawn dne? 2020-10-13 14:56:26 -04:00
Tyler Goodlet a934eb063c Factor `repodir()` helper into conftest.py 2020-10-13 14:49:31 -04:00
Tyler Goodlet a49deb46f1 Revert "Make tests a package (for relative imports)"
This reverts commit 1710b642a5.
2020-10-13 14:42:16 -04:00
Tyler Goodlet 666966097a Revert "Change to relative conftest.py imports"
This reverts commit 2b53c74b1c.
2020-10-13 14:42:02 -04:00
Tyler Goodlet e3c26943ba Support debug mode only on the trio backend 2020-10-13 14:20:44 -04:00
Tyler Goodlet ba52de79e1 Skip quad ex on local mp tests as well 2020-10-13 14:20:19 -04:00
Tyler Goodlet 24ef919334 Skip sync sleep test on mp backend 2020-10-13 14:16:20 -04:00
Tyler Goodlet 08ff989631 Add some comments 2020-10-13 11:59:18 -04:00
Tyler Goodlet 573b8fef73 Add better actor cancellation tracking
Add `Actor._cancel_called` and `._cancel_complete` making it possible to
determine whether the actor has started the cancellation sequence and
whether that sequence has fully completed. This allows for blocking in
internal machinery tasks as necessary. Also, always trigger the end of
ongoing rpc tasks even if the last task errors; there's no guarantee
that trio's cancellation semantics will leave us a nice internal
"state" without this.
2020-10-13 11:48:52 -04:00
Tyler Goodlet 0ce6d2b55c Add `pexpect` dep for debugger tests 2020-10-13 11:04:16 -04:00
Tyler Goodlet c375a2d028 mypy fixes 2020-10-13 11:03:55 -04:00
Tyler Goodlet 1710b642a5 Make tests a package (for relative imports) 2020-10-13 10:50:21 -04:00
Tyler Goodlet c41e5c8313 Fix missing await 2020-10-13 00:45:29 -04:00
Tyler Goodlet a88a6ba7a3 Add pattern matching to test 2020-10-13 00:36:34 -04:00
Tyler Goodlet 79c38b04e7 Report `trio.Cancelled` when exhausting portals..
For reliable remote cancellation we need to "report" `trio.Cancelled`s
(just like any other error) when exhausting a portal such that the
caller can make decisions about cancelling the respective actor if need
be.

Resolves #156
2020-10-12 23:28:36 -04:00
Tyler Goodlet 0e344eead8 Add a "cancel arrives during a sync sleep in child" test
This appears to demonstrate the same bug found in #156. It looks like
cancelling a subactor with a child, while that child is running sync code,
can result in the child never getting cancelled due to some strange
condition where the internal nurseries aren't being torn down as
expected when a `trio.Cancelled` is raised.
2020-10-12 23:25:22 -04:00
Tyler Goodlet acb4cb0b2b Add test showing issue with child in tty lock when cancelled 2020-10-07 06:08:31 -04:00
Tyler Goodlet 07112089d0 Add mention of subactor uid during locking 2020-10-07 05:53:26 -04:00
Tyler Goodlet abf8bb2813 Add a deep nested error propagation test 2020-10-06 09:21:53 -04:00
Tyler Goodlet 2b53c74b1c Change to relative conftest.py imports 2020-10-05 11:58:58 -04:00
Tyler Goodlet 371025947a Add a multi-subactor test where the root errors 2020-10-05 11:58:58 -04:00
Tyler Goodlet d43d367153 Facepalm: tty locking from root doesn't require an extra task 2020-10-05 11:58:58 -04:00
Tyler Goodlet 31c1a32d58 Add re-entrant root breakpoint test; demonstrates a bug.. 2020-10-05 11:58:58 -04:00
Tyler Goodlet 83a45119e9 Add "root mailbox" contact info passing
Every subactor in the tree now receives the socket (or whatever the
mailbox type ends up being) during startup and can call the new
`tractor._discovery.get_root()` function to get a portal to the current
root actor in their tree. The main reason for adding this atm is to
support nested child actors gaining access to the root's tty lock for
debugging.

Also, when a channel disconnects from a message loop, might as well kill
all its rpc tasks.
2020-10-05 11:58:58 -04:00
Tyler Goodlet e387e8b322 Add a multi-subactor test with nesting 2020-10-05 11:58:58 -04:00
Tyler Goodlet a2151cdd4d Allow re-entrant breakpoints during pdb stepping 2020-10-05 11:58:58 -04:00
Tyler Goodlet 73a32f7d9c Add initial subactor debug tests 2020-10-05 11:58:58 -04:00
Tyler Goodlet 9067bb2a41 Shorten arbiter contact timeout 2020-10-05 11:58:58 -04:00
Tyler Goodlet 0a2a94fee0 Add initial root actor debugger tests 2020-10-05 11:58:58 -04:00
Tyler Goodlet 29ed065dc4 Ack our inability to hard kill sub-procs 2020-09-28 13:56:42 -04:00
Tyler Goodlet fc2cb610b9 Make "hard kill" just a `Process.terminate()`
It's not like any of this code is really being used anyway since we
aren't indefinitely blocking for cancelled subactors to terminate (yet).
Drop the `do_hard_kill()` bit for now and just rely on the underlying
process api. Oh, and mark the nursery as cancelled asap.
2020-09-28 13:49:45 -04:00
Tyler Goodlet d7a472c7f2 Update our debugging example to wait on results 2020-09-28 13:13:53 -04:00
Tyler Goodlet 5dd2d35fc5 Huh, maybe we don't need to block SIGINT
Seems like the request task cancel scope is actually solving all the
deadlock issues and masking SIGINT isn't changing much behaviour at all.
I think let's keep it unmasked for now in case it does turn out useful
in cancelling from unrecoverable states while in debug.
2020-09-28 13:11:22 -04:00
Tyler Goodlet 25e93925b0 Add a cancel scope around child debugger requests
This is needed in order to avoid the deadlock condition where
a child actor is waiting on the root actor's tty lock but its parent
(possibly the root) is waiting on it to terminate after sending a cancel
request. The solution is simple: create a cancel scope around the
request in the child and always cancel it when a cancel request from the
parent arrives.
2020-09-28 13:02:33 -04:00
Tyler Goodlet 363498b882 Disable SIGINT handling in child processes
There seems to be no good reason not to since our cancellation
machinery/protocol should do this work when the root receives the
signal. This also (hopefully) helps with some debugging race condition
stuff.
2020-09-28 09:24:36 -04:00
Tyler Goodlet f1b242f913 Block SIGINT handling while in the debugger
This seems to prevent a certain class of bugs to do with the root actor
cancelling local tasks and getting into deadlock while children are
trying to acquire the tty lock. I'm not sure it's the best idea yet
since you're pretty much guaranteed to get "stuck" if a child activates
the debugger after the root has been cancelled (at least "stuck" in
terms of SIGINT being ignored). That kinda race condition seems to still
exist somehow: a child can "beat" the root to activating the tty lock
and the parent is stuck waiting on the child to terminate via its
nursery.
2020-09-28 08:54:21 -04:00
goodboy ce5c52905d
Merge pull request #154 from goodboy/matrix
Add matrix room link
2020-09-24 13:05:35 -04:00
Tyler Goodlet 76e1c83161 Add matrix room link 2020-09-24 11:12:45 -04:00
Tyler Goodlet 9e1d9a8ce1 Add an internal context stack
This aids with tearing down resources **after** the crash handling and
debugger have completed. Leaving this internal for now but should
eventually get a public convenience function like
`tractor.context_stack()`.
2020-09-24 10:12:33 -04:00
Tyler Goodlet 09daba4c9c Explicitly handle `debug_mode` flag correctly 2020-09-24 10:12:33 -04:00
Tyler Goodlet 8b6e9f5530 Port to new debug api, set `_is_root` state flag on startup 2020-09-24 10:12:33 -04:00
Tyler Goodlet 150179bfe4 Support entering post mortem on crashes in root actor 2020-09-24 10:12:33 -04:00
Tyler Goodlet 291ecec070 Maybe not sticky by default 2020-09-24 10:12:33 -04:00
Tyler Goodlet bd157e05ef Port to service nursery 2020-09-24 10:12:33 -04:00
Tyler Goodlet fd5fb9241a Sparsen some lines 2020-09-24 10:12:33 -04:00
Tyler Goodlet ebb21b9ba3 Support re-entrant breakpoints
Keep an actor local (bool) flag which determines if there is already
a running debugger instance for the current process. If another task
tries to enter in this case, simply ignore it since allowing entry may
result in a deadlock where the new task will be sync waiting on the
parent stdio lock (a case that will never arrive due to the current
debugger's active use of it).

In the future we may want to allow FIFO queueing of local tasks where
instead of ignoring re-entrant breakpoints we allow tasks to async wait
for debugger release, though not sure the implications of that since
you'd likely want to support switching the debugger to the new task and
that could cause deadlocks where tasks are inter-dependent. It may be
more sane to just error on multiple breakpoint requests within an actor.
2020-09-24 10:12:33 -04:00
Tyler Goodlet f9ef3fc5de Cleanups and more comments 2020-09-24 10:12:33 -04:00
Tyler Goodlet 68773d51fd Always expose the debug module 2020-09-24 10:12:33 -04:00
Tyler Goodlet abaa2f5da0 Drop unneeded `parent_chan_cs()` cancel call 2020-09-24 10:12:33 -04:00
Tyler Goodlet efd7095cf8 Add pdbpp as dep 2020-09-24 10:12:32 -04:00
Tyler Goodlet f7cd2be039 Play with re-entrant trace 2020-09-24 10:12:10 -04:00
Tyler Goodlet 8eb9a742dd Add multi-process debugging support using `pdbpp`
This is the first step in addressing #113 and the initial support
of #130. Basically this allows (sub)processes to engage the `pdbpp`
debug machinery which reads/writes the root actor's tty but only in
a FIFO semaphored way such that no two processes are using it
simultaneously. That means you can have multiple actors enter a trace or
crash and run the debugger in a sensible way without clobbering each
other's access to stdio. It required adding some "tear down hooks" to
a custom `pdbpp.Pdb` type such that we release a child's lock on the
parent on debugger exit (in this case when either of the "continue" or
"quit" commands are issued to the debugger console).

There's some code left commented in anticipation of full support for
issue #130 where we'll need to actually capture and feed stdin to the
target (remote) actor which won't necessarily be running on the same
host.
2020-09-24 10:12:10 -04:00
Tyler Goodlet e7ee0fec34 Pass a copy of the expected exposed modules 2020-09-24 10:12:10 -04:00
Tyler Goodlet 1d1c881fd7 WIP debugging test script 2020-09-24 10:12:10 -04:00
Tyler Goodlet b06d4b023e Add support for "debug mode"
When enabled a crashed actor will connect to the parent with `pdb`
in post mortem mode.
2020-09-24 10:12:10 -04:00
Tyler Goodlet b11e91375c Initial attempt at multi-actor debugging
Allow entering and attaching to a `pdb` instance in a child process.
The current hackery is to have the child make an rpc to the parent and
ask it to hijack stdin; once complete the child enters a blocking
`pdb` method. The parent then relays all stdin input to the child, thus
controlling the "remote" debugger.

A few things were added to accomplish this:
- tracking the mapping of subactors to their parent nurseries
- in the root actor, cancelling all nurseries under the root `trio` task
  on cancellation (i.e. `Actor.cancel()`)
- pass a "runtime vars" map down the actor tree for propagating global state
2020-09-24 10:12:10 -04:00
Tyler Goodlet 8c97f7bbb3 Create runtime variables 2020-09-24 10:12:10 -04:00
goodboy 196cf14211
Merge pull request #152 from guilledk/gh_actions
Add Github Actions CI support
2020-09-03 12:22:27 -04:00
Guillermo Rodriguez 5e3ce765dd
Drop mac support, will continue the experiment on another branch 2020-09-03 10:41:09 -03:00
Guillermo Rodriguez ad68ff665f
Missing a platform.system() check 2020-09-03 09:57:04 -03:00
Guillermo Rodriguez c993e36e95
Simplified CI detection 2020-09-03 09:44:24 -03:00
Guillermo Rodriguez 03e5852acf
Added some missing CI integration pieces 2020-09-02 13:19:42 -03:00
Guillermo Rodriguez 4d7a16b304
Lower timeout and added spawn_backend to name of jobs 2020-09-02 11:31:10 -03:00
Guillermo Rodriguez 406ded7311
Experimental mac testing 2020-09-02 11:18:12 -03:00
Guillermo Rodriguez 3595317b00
Removed --disable-vnet parameter to pytest that was left after experimenting with this file in the multihost testing branch 2020-09-02 11:16:05 -03:00
Guillermo Rodriguez 865e932107
Initial commit 2020-09-02 11:12:08 -03:00
goodboy 1cbc098721
Merge pull request #150 from guilledk/typelog_sphinx_theme
Install docs requirements in travis tests
2020-08-31 11:39:40 -04:00
Guillermo Rodriguez f05db6841d
Install docs requirements in travis tests! 2020-08-31 12:33:25 -03:00
goodboy 440dae4859
Merge pull request #137 from guilledk/typelog_sphinx_theme
Changed docs theme to typelog
2020-08-31 11:22:19 -04:00
Guillermo Rodriguez a6f7b0df7c
Small grammar fix 2020-08-31 12:17:59 -03:00
Guillermo Rodriguez 1bee78837b
Added logo, fixed github links and grammar issues 2020-08-31 11:49:14 -03:00
Guillermo Rodriguez 13de7991d9
Add link to trio process spawning docs 2020-08-31 10:08:32 -03:00
Guillermo Rodriguez 3536e73df7
Changed docs theme to typelog, also removed all mentions of trio-run-in-process. 2020-08-31 10:08:04 -03:00
goodboy 4da16325f3
Merge pull request #144 from goodboy/dereg_on_channel_aclose
Fix for dereg failure on manual stream close leading to an internal nursery composition rework.
2020-08-13 13:56:47 -04:00
Tyler Goodlet 451170bb63 Pass explicit kwargs to new discovery test funcs 2020-08-13 13:26:08 -04:00
Tyler Goodlet ec5d443ee5 Always log actor errors 2020-08-13 11:55:22 -04:00
Tyler Goodlet 863a4b7933 Update copyright date 2020-08-13 11:55:03 -04:00
Tyler Goodlet 0c8dcd0ec5 Use allocated arbiter port in local reg test 2020-08-13 11:54:37 -04:00
Tyler Goodlet 1ae0efb033 Make rpc_module_paths a list 2020-08-13 11:53:45 -04:00
Tyler Goodlet 8a995beb6a Docs fixes 2020-08-08 22:29:57 -04:00
Tyler Goodlet 292513b353 Module define default accept addr 2020-08-08 20:58:04 -04:00
Tyler Goodlet b3eba00c3a Appease the great mypy 2020-08-08 20:57:43 -04:00
Tyler Goodlet 42be410076 Handle mp accept_addr 2020-08-08 20:27:43 -04:00
Tyler Goodlet acd5b80f4c Add close channel test with remote arbiter 2020-08-08 15:17:04 -04:00
Tyler Goodlet c821690834 Actor cancellation is now more latent; loosen timing 2020-08-08 15:16:10 -04:00
Tyler Goodlet 7f74182a8a Never allow more than info logging in daemon; causes blocking 2020-08-08 15:15:43 -04:00
Tyler Goodlet 8477d21499 Restructure actor runtime nursery scoping
In an effort to acquire more deterministic actor cancellation,
this adds a clearer and more resilient (whilst possibly a bit
slower) internal nursery structure with explicit semantics for
clarifying the task-scope shutdown sequence.

Namely, on cancellation, the explicit steps are now:
- cancel all currently running rpc tasks and wait
  for them to complete
- cancel the channel server and wait for it to complete
- cancel the msg loop for the channel with the immediate parent
- de-register with arbiter if possible
- wait on remaining connections to release
- exit process

To accomplish this add a new nursery called the "service nursery" which
spawns all rpc tasks **instead of using** the "root nursery". The root
is now used solely for async launching the msg loop for the primary
channel with the parent such that it is (nearly) the last thing torn
down on cancellation.

In the future it should also be possible to have `self.cancel()` return
a result to the parent once the runtime is sure that the rest of the
shutdown is atomic; this would allow for a true unbounded shield in
`Portal.cancel_actor()`. This will likely require that the error
handling blocks in `Actor._async_main()` are moved "inside" the root
nursery block such that the msg loop with the parent truly is the last
thing to terminate.
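The resulting scoping, as a toy `trio` sketch of the ordering (not the
actual runtime code):

    import trio

    async def main():
        async with trio.open_nursery() as root_n:
            # root nursery: solely runs the msg loop for the channel
            # with the parent; torn down (nearly) last
            root_n.start_soon(trio.sleep, 0.2)

            async with trio.open_nursery() as service_n:
                # service nursery: all rpc tasks spawn here and are
                # awaited/cancelled *before* the root nursery exits
                service_n.start_soon(trio.sleep, 0.1)

    trio.run(main)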
2020-08-08 14:55:41 -04:00
Tyler Goodlet 90c7fa6963 Allow shielding in `open_portal()` 2020-08-08 14:47:52 -04:00
Tyler Goodlet 532429aec9 Harden `trio` spawner process waiting
Always shield waiting for the process and always run
``trio.Process.__aexit__()`` on teardown. This enforces
that shutdown happens due to cancellation triggered inside
the sub-actor instead of the process being killed externally
by the parent.
2020-08-08 14:43:25 -04:00
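The core of that hardening can be sketched as a shielded wait (a simplified stand-in for the actual spawner code):

    import trio

    async def hard_wait(proc: trio.Process) -> None:
        # never let external cancellation interrupt reaping the
        # subprocess; teardown should come from inside the sub-actor
        with trio.CancelScope(shield=True):
            await proc.wait()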
Tyler Goodlet fe45d99f65 Allow opening a portal through an existing channel 2020-08-07 12:02:06 -04:00
Tyler Goodlet ae8488a578 Always shield de-register step with arbiter 2020-08-07 11:36:26 -04:00
Tyler Goodlet 3a868fec30 Cancel root nursery to trigger failure
The real issue is if the root nursery gets cancelled prior to
de-registration with the arbiter. This doesn't seem easy to
reproduce as a side effect of a KBI, however that is how it was
discovered in practice.
2020-08-07 11:34:17 -04:00
Tyler Goodlet d2d8860dad Add test for dereg failure on manual stream close
There was code from the last de-registration fix PR that I had commented
(to do with shielding arbiter dereg steps in `Actor._async_main()`) because
the block didn't seem to make a difference under infinite streaming
tests. Turns out it **for sure** is needed under certain conditions (likely
if the actor's root nursery is cancelled prior to actor nursery exit).
This was an attempt to simulate the failure mode if you manually close the
stream **before** cancelling the containing **actor**.

More tests to come I guess.
2020-08-07 09:16:01 -04:00
Guillermo Rodriguez 8da45eedf4
Merge pull request #143 from goodboy/ensure_deregister
Ensure actors de-register with arbiter when cancelled during infinite streaming.
2020-08-04 12:19:02 -03:00
Tyler Goodlet 09ae51900d Better clarify uid comment 2020-08-04 09:52:49 -04:00
Tyler Goodlet 4f92cfe74f Don't `.aclose` `trio` processes until the very end
Trio will kill subprocesses via `Process.__aexit__()` using a `finally:`
block (which, yes, will get triggered on cancellation) so we avoid that
until true process "tear down" since subactors do many things during
graceful shutdown (such as de-registering from the name discovery
system). Oddly this only seems to be an issue during cancellation of
infinite stream consumption.

Resolves #141
2020-08-03 18:57:00 -04:00
Tyler Goodlet ae9016c06a Log on KBI cancelled termination 2020-08-03 18:46:18 -04:00
Tyler Goodlet a24c6bfdd2 Correctly catch cancelled nursery case (purely for logging) 2020-08-03 18:44:50 -04:00
Tyler Goodlet 56b81f07e5 Return `Dict[Tuple, Tuple]` from `.get_registry()` 2020-08-03 18:42:23 -04:00
Tyler Goodlet fbd68d2d91 Allow for tuple keys with std `msgpack` 2020-08-03 18:41:21 -04:00
Tyler Goodlet a5279f80a7 Actually reproduce the de-registration problem
This truly reproduces #141. It turns out the problem only occurs when
we're cancelled in the middle of consuming "infinite streams".
Good news is this tests a lot of edge cases :)
2020-08-03 18:28:09 -04:00
Tyler Goodlet 699bfd1857 Run unreg on cancel tests with remote arbiter as well 2020-08-03 15:41:41 -04:00
Tyler Goodlet 639299e6eb Expose a `.get_registry()` method on the arbiter 2020-08-03 15:40:41 -04:00
Tyler Goodlet 2ccaa94c60 Move daemon fixture up to conftest 2020-08-03 15:39:54 -04:00
Tyler Goodlet 0d9483376d Test cancel with SIGINT on non-windows as well 2020-08-03 13:01:56 -04:00
Tyler Goodlet cd2d8c217a Test that subactors deregister on cancel 2020-08-03 12:53:03 -04:00
goodboy a399bd3033
Merge pull request #133 from guilledk/drop_cloudpickle
Drop cloudpickle dependency
2020-07-29 18:24:27 -04:00
Guillermo Rodriguez 3e29fcf1ea
Docstring to the top!, and redundant spaces goodbye! 2020-07-29 15:39:38 -03:00
Guillermo Rodriguez a565d38251
Merge pull request #2 from goodboy/start_up_sequence_trickery
Start up sequence trickery
2020-07-29 15:02:51 -03:00
Tyler Goodlet da56d0f043 Add slight delays to SIGINT tests on mp 2020-07-29 13:27:15 -04:00
Tyler Goodlet 8f17c89cf9 Skip **every** quad test for mp on ci 2020-07-29 10:26:19 -04:00
Tyler Goodlet 9a40291d4a Repair startup sequence around parent state transfer
In order to have reliable subactor startup we need the following
sequence to take place:
- connect to the parent actor, handshake and receive runtime state
- load exposed modules into memory
- start the channel server up fully using the provided bind address
- finally, start processing new messages from the parent

Add a bunch more comments to clarify all this.
2020-07-28 22:25:22 -04:00
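A rough sketch of that ordering (the channel/server helpers here are made-up stand-ins; only the sequencing mirrors the commit):

    import importlib
    import trio

    async def serve_channel(bind_addr, task_status=trio.TASK_STATUS_IGNORED):
        task_status.started()  # stand-in for real server startup
        await trio.sleep_forever()

    async def child_main(chan, bind_addr):
        # 1) handshake with the parent and receive runtime state
        state = await chan.receive()  # `chan`: any trio ReceiveChannel
        # 2) load exposed modules into memory before serving anything
        for modpath in state.get('rpc_module_paths', ()):
            importlib.import_module(modpath)
        async with trio.open_nursery() as nursery:
            # 3) bring the channel server up fully on the bind address
            await nursery.start(serve_channel, bind_addr)
            # 4) only now start processing new messages from the parent
            async for msg in chan:
                ...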
Guillermo Rodriguez 0a5691e0a8
Removed the arbiter_addr local; bind_addr is now passed through the channel during early child actor init. 2020-07-28 11:55:11 -03:00
Guillermo Rodriguez 8b44ec7a5d
Actually dropping the cloudpickle dependency from setup.py 2020-07-27 21:10:04 -03:00
Guillermo Rodriguez ef053eb070
Added named arguments to child init, and now passing less of them. 2020-07-27 21:05:00 -03:00
Guillermo Rodriguez e5dbf14ec3
Only await params in trio mode 2020-07-27 15:20:55 -03:00
Guillermo Rodriguez 2a407be532
Now passing additional initialization parameters through channel early after handshake. 2020-07-27 14:55:37 -03:00
goodboy 2cc4d7ce04
Merge pull request #135 from goodboy/fix_win_ci_again
Fix windows CI, again.
2020-07-27 13:19:01 -04:00
Tyler Goodlet 5715fd4599 Skip streaming tests 2020-07-27 12:20:46 -04:00
Tyler Goodlet e8a38e4d15 Fix cancelled type handling 2020-07-27 11:15:05 -04:00
goodboy ed96672136
Merge pull request #128 from goodboy/flaky_tests
Drop trio-run-in-process,  use pure trio process spawner, test out of channel ctrl-c subactor cancellation
2020-07-26 23:59:58 -04:00
Tyler Goodlet 3c7ec72f8e Fix SIGINT test names 2020-07-26 23:37:44 -04:00
Tyler Goodlet 5a27065a10 Finally tame the super flaky tests
- ease up on first stream test run deadline
- skip streaming tests in CI for mp backend, period
- give up on > 1 depth nested spawning with mp
- completely give up on slow spawning on windows
2020-07-26 22:53:40 -04:00
Tyler Goodlet 891edbab5f Run the trio spawner in nested tests 2020-07-25 18:19:17 -04:00
Tyler Goodlet dddbeb0e71 Run Windows on trio and mp backends
The new pure trio spawning backend uses `subprocess` internally which is
also supported on windows so let's run it in CI.
2020-07-25 13:41:48 -04:00
Tyler Goodlet 7c3928f0bf Oh mypy.. 2020-07-24 17:31:24 -04:00
Tyler Goodlet d3acb8d061 Wait on proc before killing stdio 2020-07-24 17:08:52 -04:00
Tyler Goodlet efde3a5773 Simplify the `_child.py` script
We don't really need stdin for anything but passing the entry point and
detaching it seemed to just cause errors on cancellation teardown.
2020-07-24 17:08:52 -04:00
Tyler Goodlet aa620fe61d Use `trio.Process.__aexit__()` and pass the actor uid
Using the context manager interface does some extra teardown beyond simply
calling `.wait()`. Pass the subactor's "uid" on the exec line for
debugging purposes when monitoring the process tree from the OS.
Hard code the child script module path to avoid a double import warning.
2020-07-24 17:08:52 -04:00
Tyler Goodlet a215df8dfc Add true ctrl-c tests using an out-of-band SIGINT
Verify ctrl-c, as a user would trigger it, properly cancels the actor
tree. This was an issue with `trio-run-in-process` that clearly wasn't
being handled correctly but for sure is now with the plain old
`trio` process spawner.

Resolves #115
2020-07-24 17:08:52 -04:00
Tyler Goodlet 4de75c3a9d Test cancel via api and keyboard interrupt
An initial attempt to discover an issue with trio-run-in-process.
This is a good test to have regardless.
2020-07-24 17:08:52 -04:00
Tyler Goodlet 5adf2f3b0c Add logging to some cancel tests 2020-07-24 17:08:52 -04:00
Tyler Goodlet 4516febe26 Make sure to wait trio processes on teardown 2020-07-24 17:08:52 -04:00
Tyler Goodlet 0b305fd78a Change spawn method name in `Actor.load_modules()` 2020-07-24 17:08:52 -04:00
Tyler Goodlet 0936bdc592 Add back subactor logging 2020-07-24 17:08:52 -04:00
Guillermo Rodriguez 56463a08df First attempt at removing trip & updating hazmat -> lowlevel 2020-07-24 17:08:52 -04:00
Tyler Goodlet 7c73775474 Force keyword only args in actor spawn methods 2020-07-24 17:06:43 -04:00
Tyler Goodlet 8fbdfd6a3a Add an obnoxious error message on internal failures 2020-07-24 17:06:23 -04:00
Tyler Goodlet 1706791313 Drop entrypoints from `Actor` 2020-07-24 17:04:22 -04:00
Tyler Goodlet 8e32199509 Get entry points reorg without asyncio compat
This is an edit to factor out changes needed for the `asyncio` in guest mode
integration (which currently isn't tested well) so that later more pertinent
changes (which are tested well) can be rebased off of this branch and
merged into mainline sooner. The *infect_asyncio* branch will need to be
rebased onto this branch as well before merge to mainline.
2020-07-24 17:02:03 -04:00
Tyler Goodlet 8054bc7c70 Support "infected asyncio" actors
This is an initial solution for #120.

Allow spawning `asyncio` based actors which run `trio` in guest
mode. This enables spawning `tractor` actors on top of the `asyncio`
event loop whilst still leveraging the SC focused internal actor
supervision machinery. Add a `tractor.to_asyncio.run()` api to allow
spawning tasks on the `asyncio` loop from an embedded (remote) `trio`
task and return or stream results all the way back through the `tractor`
IPC system using a very similar api to portals.

One outstanding problem is getting SC around calls to
`asyncio.create_task()`. Currently a task that crashes isn't able to
easily relay the error to the embedded `trio` task without us fully
enforcing the portals based message protocol (which seems superfluous
given the error ref is in process). Further experiments using `anyio`
task groups may alleviate this.
2020-07-24 16:48:06 -04:00
goodboy 2b2cf2e001
Merge pull request #110 from goodboy/init_sphinx_docs
Initial sphinx docs
2020-02-10 18:07:37 -06:00
Tyler Goodlet d62610c44e Search for guard and strip instead of hardcoding 2020-02-10 12:59:44 -05:00
Tyler Goodlet cfc97c4204 Set correct master doc name 2020-02-10 12:26:19 -05:00
Tyler Goodlet 3dcdc9181e Include our `__main__.py` script ex for windows 2020-02-10 12:22:14 -05:00
Tyler Goodlet 20f9ccfa9e Move two more examples out of docs for testing 2020-02-10 12:14:16 -05:00
Tyler Goodlet 63bcd99323 Only error the exs test when "Error" in last line of output 2020-02-10 12:14:16 -05:00
Tyler Goodlet 5a19826bd3 Drop sphinx toctree from readme 2020-02-10 12:14:16 -05:00
Tyler Goodlet 802f47b4ca Drop uneeded import 2020-02-10 12:14:16 -05:00
Tyler Goodlet 03d07cb12a Mirror readme off docs intro 2020-02-10 12:14:16 -05:00
Tyler Goodlet cd06298476 Simplify and re-org the intro section 2020-02-10 12:14:16 -05:00
Tyler Goodlet d6abfa774a Drop toc from sidebar 2020-02-10 12:14:16 -05:00
Tyler Goodlet 66b803780f Replace examples with ..literalinclude directives
This should address both #98 and #108 by using our now-tested example
scripts directly in the documentation (so we know they must work or CI
will fail).

Resolves #98 #108
2020-02-10 12:14:16 -05:00
Tyler Goodlet 5d2fd0eb05 Remove duplicate docs from readme 2020-02-10 12:14:16 -05:00
Tyler Goodlet 6e7d57c01d Add initial sphinx docs draft 2020-02-10 12:14:16 -05:00
goodboy 3b3d563ac9
Merge pull request #102 from goodboy/example_tests
Test docs examples as scripts
2020-02-10 11:13:32 -06:00
Tyler Goodlet f2030a2714 Better document the window's gotcha solution in test code 2020-02-09 14:59:22 -05:00
Tyler Goodlet 7880934505 Add tests for all docs examples
Parametrize our docs example test to include all (now fixed) examples
from the `README.rst`. The examples themselves have been fixed/corrected
to run but they haven't yet been updated in the actual docs. Once #99
lands these example scripts will be directly included in our
documentation so there will be no possibility of presenting incorrect
examples to our users! This technically fixes #108 even though the new
examples aren't going to be included directly in our docs until #99
lands.
2020-02-09 02:01:39 -05:00
Tyler Goodlet 30f8dd8be4 Pass a `Channel` to `LocalPortal` for compat purposes 2020-02-09 01:59:39 -05:00
Tyler Goodlet 9fb05d8849 Drop uneeded import 2020-02-09 01:07:14 -05:00
Tyler Goodlet 596aca8097 Alias __mp_main__ at import time 2020-02-09 01:07:14 -05:00
Tyler Goodlet 70636a98f6 Use the windows "gotchas" fix for example tests
Apply the fix from @chrizzFTD where we invoke the entry point using
module exec mode on a ``__main__.py`` and import the
``test_example.main()`` from within that entry point script.
2020-02-09 01:07:07 -05:00
Tyler Goodlet 00fc734580 Fix missing `_ctx` define when on Windows 2020-02-07 20:01:41 -05:00
Tyler Goodlet c6f3ab5ae2 Initial examples testing attempt
As per #98, we need tests for examples from the docs as they would be run
by a user copy and pasting the code. This adds a small system for loading
examples from an "examples/" directory and executing them in
a subprocess while checking the output. We can use this to also verify
end-to-end expected logging output on std streams (ex. logging on
stderr).

To expand this further we can parameterize the test list using the
contents of the examples directory instead of hardcoding the script
names as I've done here initially.

Also, fix up the current readme examples to have the required/proper `if
__name__ == '__main__'` script guard.
2020-02-07 19:36:52 -05:00
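A minimal sketch of that system, assuming an `examples/` directory (the "Error in last line" heuristic comes from the sibling commit above):

    import glob
    import subprocess
    import sys

    import pytest

    @pytest.mark.parametrize('script', glob.glob('examples/*.py'))
    def test_example(script):
        # run the example exactly as a user would, in a subprocess
        proc = subprocess.run(
            [sys.executable, script],
            capture_output=True,
            timeout=60,
        )
        err = proc.stderr.decode()
        if err:
            # only fail when the last line of output reports an error
            assert 'Error' not in err.splitlines()[-1]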
goodboy 4bd3a14a68
Merge pull request #106 from goodboy/fix_examples_in_docs
Add script "guards" to docs examples
2020-02-07 10:09:38 -06:00
Tyler Goodlet d8daa57a33 Add script "guards" to docs examples
This was originally bundled in #102 but the windows CI there has blocked
that from landing quickly. These examples need to be fixed stat since
I've had at least a couple people notice it now when first trying out
the project.
2020-02-07 00:23:48 -05:00
goodboy 5741bd5209
Merge pull request #95 from goodboy/try_trip
Support trio-run-in-process as process spawning backend
2020-01-31 14:30:54 -06:00
Tyler Goodlet e671cb4f3b Fixup _spawn.py comments to incorporate trip 2020-01-31 12:05:15 -05:00
Tyler Goodlet 8264b7d136 Drop old module loading from abspath cruft 2020-01-31 12:04:46 -05:00
Tyler Goodlet ee4b014092 Fix typo 2020-01-31 12:04:13 -05:00
Tyler Goodlet d64508e1a6 Add more detailed docs around nursery logic
The logic in the `ActorNursery` block is critical to cancellation
semantics and in particular, understanding how supervisor strategies are
invoked. Stick in a bunch of explanatory comments to clear up these
details and also prepare to introduce more supervisor strats besides
the current one-cancels-all approach.
2020-01-31 09:50:25 -05:00
Tyler Goodlet 6348121d23 Do __main__ fixups like ``multiprocessing`` does
Instead of hackery trying to map modules manually from the filesystem
let Python do all the work by simply copying what ``multiprocessing``
does to "fixup the __main__ module" in spawned subprocesses. The new
private module ``_mp_fixup_main.py`` is simply cherry picked code from
``multiprocessing.spawn`` which does just that. We only need these
"fixups" when using a backend other then ``multiprocessing``; for
now just when using ``trio_run_in_process``.
2020-01-29 21:14:48 -05:00
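For reference, the stdlib behaviour being cherry-picked looks roughly like this; note ``prepare()`` is a private ``multiprocessing`` API and the path below is a made-up placeholder:

    from multiprocessing import spawn

    # in the child: re-import the parent's main script as
    # `__mp_main__` and alias it to `__main__`
    spawn.prepare({'init_main_from_path': '/path/to/parent_main.py'})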
Tyler Goodlet 2a4307975d Fix that thing where the first example in your docs is supposed to work
Thanks to @salotz for pointing out that the first example in the docs
was broken. Though it's somewhat embarrassing this might also explain
the problem in #79 and certain issues in #59...

The solution here is to import the target RPC module using its
unique basename and absolute filepath in the sub-actor that requires it.
Special handling for `__main__` and `__mp_main__` is needed since the
spawned subprocess will have no knowledge about these
parent-state-specific module variables. Solution: map the module's name to the
respective module file basename in the child process since the module
variables will of course have different values in children.
2020-01-29 12:16:14 -05:00
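The standard mechanism for such an import looks like this (a generic sketch, not the exact tractor code):

    import importlib.util
    import os

    def import_from_path(filepath: str):
        # '/home/user/proj/script.py' -> module name 'script'
        modname = os.path.splitext(os.path.basename(filepath))[0]
        spec = importlib.util.spec_from_file_location(modname, filepath)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        return module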
Tyler Goodlet 7feef44798 Document available process spawning backends 2020-01-27 16:03:51 -05:00
Tyler Goodlet 43cca122f5 Handle windows in `@tractor_test` as well 2020-01-26 23:44:47 -05:00
Tyler Goodlet a6b249cd52 Forkserver just can't seem to cut it... 2020-01-26 23:17:06 -05:00
Tyler Goodlet 5fd38d4618 Force `mp` backend if option is blank? 2020-01-26 23:16:43 -05:00
Tyler Goodlet b4cb7439a1 Drop useless fork error branch 2020-01-26 22:46:48 -05:00
Tyler Goodlet e57811a602 Fork isn't present on windows... 2020-01-26 22:35:42 -05:00
Tyler Goodlet 7c1bc1fce4 Make windows job names explicit 2020-01-26 22:17:38 -05:00
Tyler Goodlet e18fec9b17 Always force mp backend on Windows 2020-01-26 22:09:06 -05:00
Tyler Goodlet 87948bde3d Add per backend test runs for each Python version 2020-01-26 21:50:03 -05:00
Tyler Goodlet ecced3d09a Allow choosing the spawn backend per test session
Add a `--spawn-backend` option which can be set to one of {'mp',
'trio_run_in_process'} which will either run the test suite using the
`multiprocessing` or `trio-run-in-process` backend respectively.
Currently trying to run both in the same session can result in hangs
seemingly due to a lack of cleanup of forkservers / resource trackers
from `multiprocessing` which cause broken pipe errors on occasion (no
idea on the details).

For `test_cancellation.py::test_nested_multierrors`, use less nesting
when mp is used since it breaks if we push it too hard with the
whole recursive subprocess spawning thing...
2020-01-26 21:36:08 -05:00
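In `conftest.py` terms the option can be sketched as follows (hook bodies simplified):

    def pytest_addoption(parser):
        parser.addoption(
            '--spawn-backend',
            action='store',
            default='trio_run_in_process',
            help="one of 'mp' or 'trio_run_in_process'",
        )

    def pytest_configure(config):
        backend = config.getoption('--spawn-backend')
        assert backend in ('mp', 'trio_run_in_process')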
Tyler Goodlet 27c9760f96 Be explicit about the spawning backend default
Set `trio-run-in-process` as the default on *nix systems and
`multiprocessing`'s spawn method on Windows. Enable overriding the
default choice using `tractor._spawn.try_set_start_method()`. Allows
for easy runs of the test suite using a user chosen backend.
2020-01-26 21:13:29 -05:00
Tyler Goodlet 783fe53b06 Don't mix trip with multiprocessing for now
It seems that mixing the two backends in the test suite results in hangs
due to lingering forkservers and resource managers from
`multiprocessing`? Likely we'll need either 2 separate CI runs to work
or some way to be sure that these lingering servers are killed in between
tests.
2020-01-24 00:55:40 -05:00
Tyler Goodlet bc259b7eab Use trip as default in all tests for now 2020-01-24 00:54:19 -05:00
Tyler Goodlet d9803ca906 Be explicit with the real name for trip 2020-01-24 00:47:01 -05:00
Tyler Goodlet 4837595e36 Fake out mypy again 2020-01-23 01:32:02 -05:00
Tyler Goodlet 4c5a60d06a Don't import trip on Windows 2020-01-23 01:23:26 -05:00
Tyler Goodlet 44996fe328 Add trip to start_method parametrizations 2020-01-23 01:16:10 -05:00
Tyler Goodlet ddbf55768f Try out trip as the default spawn_method on unix for now 2020-01-23 01:15:46 -05:00
Tyler Goodlet f1a96c1680 Add mypy.ini lel 2020-01-21 15:28:12 -05:00
Tyler Goodlet 4b0554b61f Type checker fixes 2020-01-21 10:28:32 -05:00
Tyler Goodlet e1a55a6f4f Importing happens once locally now so expect a local error 2020-01-21 10:28:32 -05:00
Tyler Goodlet 3c86aa2ab5 Add trio-run-in-process` as dep 2020-01-21 10:28:32 -05:00
Tyler Goodlet 6c45416016 Drop ActorNursery.wait(); it's no longer necessary really 2020-01-21 10:27:53 -05:00
Tyler Goodlet c074aea030 Support TRIP for process launching
This took a ton of tinkering and a rework of the actor nursery tear down
logic. The main changes include:

- each subprocess is now spawned from inside a trio task
from one of two containing nurseries created in the body of
`tractor.open_nursery()`: one for `run_in_actor()` processes and one for
`start_actor()` "daemons". This is to address the need for
`trio_run_in_process.open_in_process()` opening a nursery which must
be closed from the same task that opened it. Using this same approach
for `multiprocessing` seems to work well. The nurseries are waited in
order (`run_in_actor()` actors then daemon actors) during tear down which allows
for avoiding the recursive re-entry of `ActorNursery.wait()` handled
prior.

- pull out all the nested functions / closures that were in
`ActorNursery.wait()` and move into the `_spawn` module such that
that process shutdown logic takes place in each containing task's
code path. This allows for vastly simplifying `.wait()` to just contain an
event trigger which initiates process waiting / result collection.
Likely `.wait()` should just be removed since it can no longer be used
to synchronously wait on the actor nursery.

- drop `ActorNursery.__aenter__()` / `.__atexit__()` and move this
"supervisor" tear down logic into the closing block of `open_nursery()`.
This not only makes the code more comprehensible, it also
makes our nursery implementation look more like the one in `trio`.

Resolves #93
2020-01-21 10:27:53 -05:00
Tyler Goodlet 91c3716968 Do module abspath loading in actor init 2020-01-21 10:27:53 -05:00
Tyler Goodlet afa640dcab More trip WIP stuff working.. kinda
Get a few more things working:
- fail reliably when remote module loading goes awry
- do a real hacky job of module loading using `sys.path` stuffsies
- we're still totally borked when trying to spin up and quickly cancel
a bunch of subactors...

It's a small move forward I guess.
2020-01-21 10:27:53 -05:00
Tyler Goodlet 1b7cdfe512 WIP trying out trio_run_in_process 2020-01-21 10:27:53 -05:00
goodboy 7c0efce84b
Merge pull request #94 from goodboy/log_task_context
Well...after enough `# type: ignore`s `mypy` is happy, and after enough clicking of *rerun build* the windows CI passed so I think this is prolly gtg peeps!
2020-01-15 21:58:11 -05:00
Tyler Goodlet 698951c515 More mypy appeasement on 3.7 2020-01-15 21:06:13 -05:00
Tyler Goodlet e2c9477122 Allow overriding the root logger name
Handy if other dependent projects want to use the logging system but
also want to slap their own root "branding" onto the record prefix.
2019-12-20 16:37:17 -05:00
Tyler Goodlet 79c152fe38 Make latest mypy happy 2019-12-10 00:55:03 -05:00
Tyler Goodlet 7947eeebff Use trio_typing stubs 2019-12-09 22:56:13 -05:00
Tyler Goodlet 14bfef0df7 Update types for log adapter 2019-12-09 22:10:15 -05:00
Tyler Goodlet cf73283586 Make info object a mapping type
Make the info object a `Mapping` to play nicer with static type
checking. Simplify the task or actor context method lookup using a dict.
2019-12-09 00:03:22 -05:00
Tyler Goodlet 52efbfc2cd Log task and actor names where possible
Prepend the actor and task names in each log emission. This makes
debugging much more sane since you can see from which process and
running task the log message originates from!

Resolves #13
2019-12-01 23:26:25 -05:00
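One way to get this effect with the stdlib (a sketch, not tractor's actual log machinery; `actor_name` is a made-up field, and this era's trio spelled `lowlevel` as `hazmat`):

    import logging
    import trio

    class ActorContextAdapter(logging.LoggerAdapter):
        def process(self, msg, kwargs):
            # prepend the actor and current trio task names
            task = trio.lowlevel.current_task().name
            actor = self.extra.get('actor_name', '?')
            return f'{actor} {task}: {msg}', kwargs

    log = ActorContextAdapter(
        logging.getLogger('tractor'),
        {'actor_name': 'arbiter'},
    )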
goodboy 8d2a05e788
Merge pull request #92 from goodboy/drop_event_clear
Drop use of `trio.Event.clear()`
2019-11-25 13:52:20 -05:00
Tyler Goodlet 915bf17a9a Add process tree depth control to nested multierror test
Another step toward having a complete test for #89.
Subactor breadth still seems to cause the most havoc and is why I've
kept that value to just 2 for now.
2019-11-25 12:05:15 -05:00
Tyler Goodlet d2a01e8b81 Drop use of `trio.Event.clear()`
Just spin up new events instead; because apparently they're
so cheap (rolls eyes).

Resolves #78
2019-11-23 11:29:23 -05:00
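The replacement pattern is simply to allocate a fresh event wherever `.clear()` used to be called, e.g.:

    import trio

    class Gate:
        # a re-armable one-shot flag built on `trio.Event`
        def __init__(self):
            self._event = trio.Event()

        def set(self):
            self._event.set()

        def reset(self):
            # "clearing" == swapping in a brand new event
            self._event = trio.Event()

        async def wait(self):
            await self._event.wait()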
goodboy 4d43f2564c
Merge pull request #91 from goodboy/more_thorough_super_tests
More thorough basic supervision tests
2019-11-23 11:24:18 -05:00
Tyler Goodlet 2d4b6de4f4 Spawn even fewer sub-actors in Windows CI
Seems like we've probably got some greater limitations
with Windows and "nested" spawned sub-processes...
2019-11-22 21:23:25 -05:00
Tyler Goodlet f977d37cee Add nursery self-destruct logic on cancel failure
If a nursery fails to cancel (some sub-actors presumably) then hard kill
the whole process tree to avoid hangs during a catastrophic failure.
This logic may get factored out (and changed) as we introduce custom
supervisor strategies.
2019-11-22 17:11:48 -05:00
Tyler Goodlet 42978bf9ac Readme description bump after talks with multiple would-be users 2019-11-22 16:43:49 -05:00
Tyler Goodlet f8adbd73df Add windows and py3.8 support to setup script 2019-11-16 09:58:06 -05:00
Tyler Goodlet 5e056bae71 Expose trio exceptions to `RemoteActorError` 2019-10-30 00:32:10 -04:00
Tyler Goodlet 97df927714 Run first example test under both start methods 2019-10-30 00:31:28 -04:00
Tyler Goodlet 6d9ac53bd5 Add nested multierror testing
Add a test to verify that `trio.MultiError`s are properly propagated up
a simple actor nursery tree. We don't have any exception marshalling
between processes (yet) so we can't validate much more than a simple
2-depth tree. This satisfies the final bullet in #43.

Note I've limited the number of subactors per layer to around 5 since
any more than this seems to break the `multiprocessing` forkserver;
zombie subprocesses seem to be blocking teardown somehow...

Also add a single depth fast fail test just to verify that it's the
nested spawning that triggers this forkserver bug.
2019-10-30 00:30:40 -04:00
Tyler Goodlet 95e8f3d306 Propagate `trio.MultiError`s up the actor tree
`trio.MultiError` isn't an `Exception` (derived instead from
`BaseException`) so we have to specially catch it in the task
invocation machinery and ship it upwards (like regular errors)
since nurseries running in sub-actors can raise them.
2019-10-28 00:47:06 -04:00
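The gist of the special-casing, with `trio.MultiError` as it existed in trio of this era (modern trio uses `ExceptionGroup`s instead):

    import trio

    async def invoke(func):
        try:
            await func()
        except (Exception, trio.MultiError) as err:
            # a bare `except Exception` would miss MultiError since
            # it derives from BaseException; ship it upwards instead
            return err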
Tyler Goodlet d406383cd3 Add a preliminary nested subactor `MultiError` test
This exemplifies the undefined behaviour in #88 and begins to test for
the last bullet in #43.
2019-10-26 15:04:13 -04:00
Tyler Goodlet 6dbb3f7ae6 Extend cancellation tests
In an effort towards #43. This completes the first major bullet's worth of tests
described in that issue.
2019-10-26 09:55:07 -04:00
goodboy ab349cdb8d
Merge pull request #86 from goodboy/pip_ci_fix
Make pip a keener
2019-10-20 16:42:42 -04:00
Tyler Goodlet 1127e3b579 Make pip a keener 2019-10-20 16:24:01 -04:00
Tyler Goodlet c5074f5a60 Always upgrade pip before CI run 2019-10-20 14:06:28 -04:00
goodboy 07d54110c0
Merge pull request #82 from goodboy/windows_support
Windows and Python 3.8 support
2019-10-17 09:11:40 -04:00
Tyler Goodlet e0072f925d Add back a py3.7 run on windows 2019-10-16 21:31:09 -04:00
Tyler Goodlet 6ec9752f46 Slight slowdown on windows / py3.8? 2019-10-16 11:02:18 -04:00
Tyler Goodlet 5f11072442 Add more detailed Windows gotchas section
Fill out with the solution from #79 and move the section further down.
This should hopefully suffice what's left to fulfil for #59

Resolves #79
2019-10-16 09:47:58 -04:00
Tyler Goodlet 7e8b7091cb Check for proper SIGINT return code
They finally fixed https://bugs.python.org/issue1054041 in Python 3.8
2019-10-15 23:22:48 -04:00
Tyler Goodlet da4796749f Continue hacking the forkserver in Python 3.8
They got all fancy and added shared memory segment tracking and then
had to "generalize" the tracker name...hooray

Fixes #81
2019-10-15 22:37:47 -04:00
Tyler Goodlet 6ff32347bf Don't hardcode python version in path for Windows
Also add Python 3.8 testing on Linux.
2019-10-15 21:18:48 -04:00
goodboy 22b5c1c207
Merge pull request #75 from goodboy/rename_forkserver_mod
Rename override module
2019-10-15 01:05:45 -04:00
Tyler Goodlet 7da95a806d Rename override module 2019-10-14 12:58:10 -04:00
goodboy ee9a71f4bf
Merge pull request #76 from goodboy/user_update
User name and email bump
2019-04-28 11:28:38 -04:00
Tyler Goodlet 24a4d6df4b User name and email bump 2019-04-28 10:42:39 -04:00
goodboy f2b08b5565
Merge pull request #74 from goodboy/win_ci
Add windows CI using choco
2019-04-07 22:27:16 -04:00
Tyler Goodlet 5760bb1b7c Adjust test timeout/sync handling for windows 2019-03-31 15:34:44 -04:00
Tyler Goodlet 3af58d129d Add windows CI using choco
Resolves #62
2019-03-30 20:47:17 -04:00
goodboy e0f4894071
Merge pull request #73 from goodboy/stream_functions
Stream functions
2019-03-29 19:41:50 -04:00
Tyler Goodlet b965d20cba Add stream func tests 2019-03-29 19:10:56 -04:00
Tyler Goodlet f885b02c73 Validate stream functions at decorate time 2019-03-29 19:10:32 -04:00
Tyler Goodlet 5c0ae47cf5 Fix type annotation 2019-03-26 08:03:12 -04:00
Tyler Goodlet 096d211ed2 Document `@tractor.stream` 2019-03-25 22:11:42 -04:00
Tyler Goodlet e51f84af90 Require explicit marking of non async gen streaming funcs
Add `@tractor.stream` which must be used to denote non async generator
streaming functions which use the `tractor.Context` API to push values.
This enforces a more explicit denotation as well as allows enforcing the
declaration of the `ctx` argument in definitions.
2019-03-25 21:36:13 -04:00
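Usage per this commit looks roughly like the following (the `Context.send_yield()` name reflects the era's API; treat the exact method name as approximate):

    import tractor

    @tractor.stream
    async def stream_squares(ctx, limit: int):
        # a non-async-gen streaming func pushing values via its ctx
        for i in range(limit):
            await ctx.send_yield(i ** 2)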
Tyler Goodlet 2f773fc883 Reorg streaming section 2019-03-24 15:08:34 -04:00
Tyler Goodlet 4ee35038fb Move discovery functions to their own module 2019-03-24 11:37:11 -04:00
Tyler Goodlet 2aa6ffce60 Provide each task's cancel scope to every `Context`
This begins moving toward explicitly decorated "streaming functions"
instead of checking for a `ctx` arg in the signature.

- provide each context with its task's top level `trio.CancelScope`
  such that tasks can cancel themselves explicitly if needed via calling
  `Context.cancel_scope()`
- make `Actor.cancel_task()` a private method (`_cancel_task()`) and
  handle remote rpc calls specially such that the caller does not need
  to provide the `chan` argument; non-primitive types can't be passed on
  the wire and we don't want the client actor to require knowledge of
  the channel instance the request is associated with. This also ties into
  how we're tracking tasks right now (`Actor._rpc_tasks` is keyed by the
  call id, a UUID, *plus* the channel).
- make `_do_handshake` a private actor method
- use UUID version 4
2019-03-23 23:31:26 -04:00
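A rough sketch of the wiring (attribute names here are guesses for illustration only):

    import trio

    async def invoke_rpc_task(ctx, func, **kwargs):
        # hand the task's top level cancel scope to its `Context`
        with trio.CancelScope() as cs:
            ctx._scope = cs   # so the task may cancel itself via ctx
            await func(ctx, **kwargs)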
goodboy ac4a025aa5
Merge pull request #71 from goodboy/propagate_loglevel
Propagate `tractor.run()` logging level to subactors
2019-03-23 23:30:45 -04:00
Tyler Goodlet faa1f373b5 Add subactor loglevel propagation test
Can't seem to get the `capfd` fixture to capture subprocess logging to
stderr even though the console report shows the log message as being
captured? Skipping the test on the forkserver method for now.
2019-03-23 23:27:32 -04:00
Tyler Goodlet 4e078368fc Propagate `tractor.run()` logging level to subactors 2019-03-18 21:32:08 -04:00
Tyler Goodlet 4b825778dd Flip travis badge to new username 2019-03-17 15:18:44 -04:00
Tyler Goodlet de8d69c58b Expose `Context` at top level 2019-03-15 19:40:34 -04:00
goodboy 29ffbfe6ca
Merge pull request #63 from chrizzFTD/update_tests_for_windows
Update tests for windows
2019-03-14 21:06:37 -04:00
goodboy d042a99ecf
Merge pull request #70 from goodboy/ipc_iternals_renaming
Rename `StreamQueue` to `MsgpackStream`
2019-03-13 20:52:49 -04:00
Christian López Barrón 5fc51fd745 multi_program signal for windows missing SIGKILL, SIGINT 2019-03-13 21:32:45 +11:00
Christian López Barrón 2138d55a60 increased trio.sleep time for other actors to spawn 2019-03-13 21:32:45 +11:00
Christian López Barrón b992dc19e3 moved assert statement for name on try_set_start_method after its autoset 2019-03-13 21:32:45 +11:00
Christian López Barrón efffca371a pytest_generate_tests remove `fork` only if it's in list 2019-03-13 21:32:45 +11:00
Tyler Goodlet 63d067792c Rename `StreamQueue` to `MsgpackStream`
Prepares for other possible interchange formats plus it wasn't really
a queue, just a TCP stream wrapper + `msgpack` interchange.
2019-03-12 01:22:46 -04:00
tgoodlet 8c5337c5ca
Merge pull request #67 from tgoodlet/docs_example_fixes
Docs example fixes
2019-03-11 16:10:00 -04:00
tgoodlet ddf467acf5
Merge pull request #68 from tgoodlet/close_mem_chans
Use "clean channel shutdown" in streaming example
2019-03-11 16:09:00 -04:00
Tyler Goodlet 0b520c7bee Update streaming example in docs 2019-03-10 22:13:21 -04:00
Tyler Goodlet 9a780485dc Use "clean channel shutdown" in streaming example
Resolves #65
2019-03-10 22:08:50 -04:00
Tyler Goodlet 322145684b Pass an actor name to `main()` in discovery ex
Resolves #41
2019-03-10 15:59:59 -04:00
Tyler Goodlet e560322b9b Fix actor misnaming in 2nd spawning example
Resolves #64
2019-03-10 15:56:20 -04:00
goodboy c0276c85df
Merge pull request #61 from tgoodlet/spawn_method_support
Spawn method support
2019-03-08 20:11:40 -05:00
Tyler Goodlet b70f4eafcb Flip tests to use `start_method` kwarg 2019-03-08 20:06:16 -05:00
Tyler Goodlet c3daf73112 Document the mp start method more explicitly 2019-03-08 20:01:42 -05:00
Tyler Goodlet 8eb138b8a7 Add Windows *gotchas* section
Resolves #61
2019-03-07 18:28:22 -05:00
Tyler Goodlet 49b711fb5f Be more stingy about "actor model" 2019-03-06 22:57:27 -05:00
Tyler Goodlet dc5cc040e6 Try to support waiting on Windows processes
This pokes around a little in `trio` hazmat but it *should
work* as it piggybacks on the new cross-platform subprocess support.

Relates to #59
2019-03-06 21:24:23 -05:00
Tyler Goodlet d6ca722bcc Sprinkle `spawn_method` fixture throughout tests 2019-03-06 00:37:02 -05:00
Tyler Goodlet 483ae42a46 Add a `spawn_method` dynamic fixture 2019-03-06 00:36:37 -05:00
Tyler Goodlet 7014a07986 Add "spawn" start method support
Add full support for using the "spawn" process starting method as per:
https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods

Add a  `spawn_method` argument to `tractor.run()` for specifying the
desired method explicitly. By default use the "fastest" method available.
On *nix systems this is the original "forkserver" method.

This should be the solution to getting windows support!

Resolves #60
2019-03-06 00:29:07 -05:00
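Explicit selection then looks like the following (note a later commit above flips this kwarg to `start_method`):

    import tractor

    async def main():
        ...  # normal tractor app code

    # explicitly request the mp "spawn" start method
    tractor.run(main, spawn_method='spawn')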
Tyler Goodlet d75739e9c7 Factor process creation into a separate factory
Make a `_spawn` module for encapsulating all the `multiprocessing`
"spawn method" stuff and factor current forkserver steps into it.
2019-03-05 18:52:19 -05:00
goodboy a927966170
Merge pull request #56 from tgoodlet/trio_memchans
Use trio memory channels!
2019-02-20 21:24:47 -05:00
121 changed files with 18986 additions and 3141 deletions

131
.github/workflows/ci.yml vendored 100644

@ -0,0 +1,131 @@
name: CI
on:
# any time someone pushes a new branch to origin
push:
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
jobs:
mypy:
name: 'MyPy'
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Setup python
uses: actions/setup-python@v2
with:
python-version: '3.10'
- name: Install dependencies
run: pip install -U . --upgrade-strategy eager -r requirements-test.txt
- name: Run MyPy check
run: mypy tractor/ --ignore-missing-imports --show-traceback
# test that we can generate a software distribution and install it
# thus avoid missing file issues after packaging.
sdist-linux:
name: 'sdist'
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Setup python
uses: actions/setup-python@v2
with:
python-version: '3.10'
- name: Build sdist
run: python setup.py sdist --formats=zip
- name: Install sdist from .zips
run: python -m pip install dist/*.zip
testing-linux:
name: '${{ matrix.os }} Python ${{ matrix.python }} - ${{ matrix.spawn_backend }}'
timeout-minutes: 10
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest]
python: ['3.10']
spawn_backend: [
'trio',
'mp_spawn',
'mp_forkserver',
]
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Setup python
uses: actions/setup-python@v2
with:
python-version: '${{ matrix.python }}'
- name: Install dependencies
run: pip install -U . -r requirements-test.txt -r requirements-docs.txt --upgrade-strategy eager
- name: List dependencies
run: pip list
- name: Run tests
run: pytest tests/ --spawn-backend=${{ matrix.spawn_backend }} -rsx
# We skip 3.10 on windows for now due to not having any collabs to
# debug the CI failures. Anyone wanting to hack and solve them is very
# welcome, but our primary user base is not using that OS.
# TODO: use job filtering to accomplish instead of repeated
# boilerplate as is above XD:
# - https://docs.github.com/en/actions/learn-github-actions/managing-complex-workflows
# - https://docs.github.com/en/actions/learn-github-actions/managing-complex-workflows#using-a-build-matrix
# - https://docs.github.com/en/actions/learn-github-actions/workflow-syntax-for-github-actions#jobsjob_idif
# testing-windows:
# name: '${{ matrix.os }} Python ${{ matrix.python }} - ${{ matrix.spawn_backend }}'
# timeout-minutes: 12
# runs-on: ${{ matrix.os }}
# strategy:
# fail-fast: false
# matrix:
# os: [windows-latest]
# python: ['3.10']
# spawn_backend: ['trio', 'mp']
# steps:
# - name: Checkout
# uses: actions/checkout@v2
# - name: Setup python
# uses: actions/setup-python@v2
# with:
# python-version: '${{ matrix.python }}'
# - name: Install dependencies
# run: pip install -U . -r requirements-test.txt -r requirements-docs.txt --upgrade-strategy eager
# # TODO: pretty sure this solves debugger deps-issues on windows, but it needs to
# # be verified by someone with a native setup.
# # - name: Force pyreadline3
# # run: pip uninstall pyreadline; pip install -U pyreadline3
# - name: List dependencies
# run: pip list
# - name: Run tests
# run: pytest tests/ --spawn-backend=${{ matrix.spawn_backend }} -rsx

16
.travis.yml (deleted)

@ -1,16 +0,0 @@
language: python
matrix:
include:
# - python: 3.6
- python: 3.7
dist: xenial
sudo: required
install:
- cd $TRAVIS_BUILD_DIR
- pip install -U . -r requirements-test.txt
script:
- mypy tractor/ --ignore-missing-imports
- pytest tests/ --no-print-logs

147
LICENSE

[Side-by-side license diff flattened during extraction: the old and new
texts interleave line-by-line and are not reproduced here. The
recoverable change: the GNU General Public License, Version 3 (29 June
2007) is replaced wholesale by the GNU Affero General Public License,
Version 3 (19 November 2007). The AGPL preamble adds the network-server
cooperation rationale; section 13 becomes "Remote Network Interaction;
Use with the GNU General Public License" and requires network-accessible
modified versions to offer users the Corresponding Source; section 14
and the closing "how to apply" notices now reference the AGPL and
<https://www.gnu.org/licenses/>.]

2
MANIFEST.in 100644

@ -0,0 +1,2 @@
# https://packaging.python.org/en/latest/guides/using-manifest-in/#using-manifest-in
include docs/README.rst

528
NEWS.rst 100644

@ -0,0 +1,528 @@
=========
Changelog
=========
.. towncrier release notes start
tractor 0.1.0a5 (2022-08-03)
============================
This is our final release supporting Python 3.9 since we will be moving
internals to the new `match:` syntax from 3.10 going forward. Further,
we have officially dropped usage of the `msgpack` library and
happily adopted `msgspec`.
Features
--------
- `#165 <https://github.com/goodboy/tractor/issues/165>`_: Add SIGINT
protection to our `pdbpp` based debugger subsystem such that for
(single-depth) actor trees in debug mode we ignore interrupts in any
actor currently holding the TTY lock thus avoiding clobbering IPC
connections and/or task and process state when working in the REPL.
As a big note currently so called "nested" actor trees (trees with
actors having more than one parent/ancestor) are not fully supported
since we don't yet have a mechanism to relay the debug mode knowledge
"up" the actor tree (for eg. when handling a crash in a leaf actor).
As such currently there is a set of tests and known scenarios which will
result in process clobbering by the zombie reaping machinery and these
have been documented in https://github.com/goodboy/tractor/issues/320.
The implementation details include:
- utilizing a custom SIGINT handler which we apply whenever an actor's
runtime enters the debug machinery, which we also make sure the
stdlib's `pdb` configuration doesn't override (which it does by
default without special instance config).
- litter the runtime with `maybe_wait_for_debugger()` mostly in spots
where the root actor should block before doing embedded nursery
teardown ops which both cancel potential-children-in-debug as well
as eventually trigger zombie reaping machinery.
- hardening of the TTY locking semantics/API both in terms of IPC
terminations and cancellation and lock release determinism from
sync debugger instance methods.
- factoring of locking infrastructure into a new `._debug.Lock` global
which encapsulates all details of the ``trio`` sync primitives and
task/actor uid management and tracking.
We also add `ctrl-c` cases throughout the test suite though these are
disabled for py3.9 (`pdbpp` UX differences that don't seem worth
compensating for, especially since this will be our last 3.9 supported
release) and there are a slew of marked cases that aren't expected to
work in CI more generally (as mentioned in the "nested" tree note
above) despite seemingly working when run manually on linux.
- `#304 <https://github.com/goodboy/tractor/issues/304>`_: Add a new
``to_asyncio.LinkedTaskChannel.subscribe()`` which gives task-oriented
broadcast functionality semantically equivalent to
``tractor.MsgStream.subscribe()``; this makes it possible for multiple
``trio``-side tasks to consume ``asyncio``-side task msgs in tandem.
Further improvements to the test suite were added in this patch set
including a new scenario test for a sub-actor managed "service nursery"
(implementing the basics of a "service manager") including use of
*infected asyncio* mode. Further we added a lower level
``test_trioisms.py`` to start to track issues we need to work around in
``trio`` itself which in this case included a bug we were trying to
solve related to https://github.com/python-trio/trio/issues/2258.
Bug Fixes
---------
- `#318 <https://github.com/goodboy/tractor/issues/318>`_: Fix
a previously undetected ``trio``-``asyncio`` task lifetime linking
issue with the ``to_asyncio.open_channel_from()`` api where both sides
where not properly waiting/signalling termination and it was possible
for ``asyncio``-side errors to not propagate due to a race condition.
The implementation fix summary is:
- add state to signal the end of the ``trio`` side task to be
read by the ``asyncio`` side and always cancel any ongoing
task in such cases.
- always wait on the ``asyncio`` task termination from the ``trio``
side on error before maybe raising said error.
- always close the ``trio`` mem chan on exit to ensure the other
side can detect it and follow.
Trivial/Internal Changes
------------------------
- `#248 <https://github.com/goodboy/tractor/issues/248>`_: Adjust the
`tractor._spawn.soft_wait()` strategy to avoid sending an actor cancel
request (via `Portal.cancel_actor()`) if either the child process is
detected as having terminated or the IPC channel is detected to be
closed.
This ensures (even) more deterministic inter-actor cancellation by
avoiding the timeout condition where possible when a child never
successfully spawned, crashed, or became un-contactable over IPC.
- `#295 <https://github.com/goodboy/tractor/issues/295>`_: Add an
experimental ``tractor.msg.NamespacePath`` type for passing Python
objects by "reference" through a ``str``-subtype message and using the
new ``pkgutil.resolve_name()`` for reference loading (see the sketch after this list).
- `#298 <https://github.com/goodboy/tractor/issues/298>`_: Add a new
`tractor.experimental` subpackage for staging new high level APIs and
subsystems that we might eventually make built-ins.
- `#300 <https://github.com/goodboy/tractor/issues/300>`_: Update to and
pin latest ``msgpack`` (1.0.3) and ``msgspec`` (0.4.0) both of which
required adjustments for backwards-incompatible API tweaks.
- `#303 <https://github.com/goodboy/tractor/issues/303>`_: Fence off
``multiprocessing`` imports until absolutely necessary in an effort to
avoid "resource tracker" spawning side effects that seem to have
varying degrees of unreliability per Python release. Port to new
``msgspec.DecodeError``.
- `#305 <https://github.com/goodboy/tractor/issues/305>`_: Add
``tractor.query_actor()`` an addr looker-upper which doesn't deliver
a ``Portal`` instance and instead just a socket address ``tuple``.
Sometimes it's handy to just have a simple way to figure out if
a "service" actor is up, so add this discovery helper for that. We'll
prolly just leave it undocumented for now until we figure out
a longer-term/better discovery system.
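A rough usage sketch, assuming the helper is an async context manager
like its ``find_actor()`` sibling (the service name is made up):

.. code:: python

    import tractor
    import trio


    async def main():
        # NOTE: acm-style api assumed per the entry above; only the
        # socket address (or ``None``) is delivered, never a portal.
        async with tractor.query_actor('some_service') as sockaddr:
            if sockaddr:
                host, port = sockaddr
                print(f'service is up at {host}:{port}')
            else:
                print('service not found')


    if __name__ == '__main__':
        trio.run(main)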
- `#316 <https://github.com/goodboy/tractor/issues/316>`_: Run windows
CI jobs on python 3.10 after some hacks for ``pdbpp`` dependency
issues.
Issue was to do with the now deprecated `pyreadline` project which
should be changed over to `pyreadline3`.
- `#317 <https://github.com/goodboy/tractor/issues/317>`_: Drop use of
the ``msgpack`` package and instead move fully to the ``msgspec``
codec library.
We've now used ``msgspec`` extensively in production and there's no
reason to not use it as default. Further this change preps us for the up
and coming typed messaging semantics (#196), dialog-unprotocol system
(#297), and caps-based messaging-protocols (#299) planned before our
first beta.
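For flavour, the kind of typed codec usage ``msgspec`` unlocks
(standalone sketch written against the modern ``msgspec.msgpack``
namespace, which may differ from the exact API of the version in use
here; ``tractor`` wires the codec up internally):

.. code:: python

    import msgspec


    class Point(msgspec.Struct):
        x: int
        y: int


    # round-trip a typed struct through the msgpack wire format
    wire: bytes = msgspec.msgpack.encode(Point(x=1, y=2))
    assert msgspec.msgpack.decode(wire, type=Point) == Point(x=1, y=2)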
tractor 0.1.0a4 (2021-12-18)
============================
Features
--------
- `#275 <https://github.com/goodboy/tractor/issues/275>`_: Re-license
code base under AGPLv3. Also see `#274
<https://github.com/goodboy/tractor/pull/274>`_ for majority
contributor consensus on this decision.
- `#121 <https://github.com/goodboy/tractor/issues/121>`_: Add
"infected ``asyncio`` mode; a sub-system to spawn and control
``asyncio`` actors using ``trio``'s guest-mode.
This gets us the following very interesting functionality:
- ability to spawn an actor that has a process entry point of
``asyncio.run()`` by passing ``infect_asyncio=True`` to
``Portal.start_actor()`` (and friends).
- the ``asyncio`` actor embeds ``trio`` using guest-mode and starts
a main ``trio`` task which runs the ``tractor.Actor._async_main()``
entry point and engages all the normal ``tractor`` runtime IPC/messaging
machinery; for all intents and purposes the actor is now running normally on
a ``trio.run()``.
- the actor can now make one-to-one task spawning requests to the
underlying ``asyncio`` event loop using either of:
* ``to_asyncio.run_task()`` to spawn and run an ``asyncio`` task to
completion and block until a return value is delivered.
* ``async with to_asyncio.open_channel_from():`` which spawns a task
and hands it a pair of "memory channels" to allow for bi-directional
streaming between the now SC-linked ``trio`` and ``asyncio`` tasks.
The output from any call(s) to ``asyncio`` can be handled as normal in
``trio``/``tractor`` task operation with the caveat of the overhead due
to guest-mode use.
For more details see the `original PR
<https://github.com/goodboy/tractor/pull/121>`_ and `issue
<https://github.com/goodboy/tractor/issues/120>`_.
- `#257 <https://github.com/goodboy/tractor/issues/257>`_: Add
``trionics.maybe_open_context()`` an actor-scoped async multi-task
context manager resource caching API.
Adds an SC-safe caching async context manager API that only enters on
the *first* task entry and only exits on the *last* task exit while in
between delivering the same cached value per input key. Keys can be
either an explicit ``key`` named arg provided by the user or a
hashable ``kwargs`` dict (will be converted to a ``list[tuple]``) which
is passed to the underlying manager function as input.
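A hedged sketch of the described caching semantics; the
``acm_func``/``kwargs`` parameter names and the yielded
``(cache_hit, value)`` pair are assumptions gleaned from the entry text:

.. code:: python

    from contextlib import asynccontextmanager

    import tractor
    import trio


    @asynccontextmanager
    async def open_conn(port: int):
        # stand-in for some expensive-to-acquire resource
        yield f'conn-{port}'


    async def task(name: str):
        async with tractor.trionics.maybe_open_context(
            acm_func=open_conn,
            kwargs={'port': 5000},
        ) as (cache_hit, conn):
            # all 3 tasks should receive the same cached value
            print(f'{name} -> {conn} (cached: {cache_hit})')
            await trio.sleep(0.1)


    async def main():
        async with trio.open_nursery() as n:
            for i in range(3):
                n.start_soon(task, f'task_{i}')


    if __name__ == '__main__':
        trio.run(main)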
- `#261 <https://github.com/goodboy/tractor/issues/261>`_: Add
cross-actor-task ``Context`` oriented error relay, a new stream
overrun error-signal ``StreamOverrun``, and support disabling
``MsgStream`` backpressure as the default before a stream is opened or
by choice of the user.
We added stricter semantics around ``tractor.Context.open_stream():``
particularly to do with streams which are only opened at one end.
Previously, if only one end opened a stream there was no way for that
sender to know if msgs are being received until first, the feeder mem
chan on the receiver side hit a backpressure state and then that
condition delayed its msg loop processing task to eventually create
backpressure on the associated IPC transport. This is non-ideal in the
case where the receiver side never opened a stream by mistake since it
results in silent block of the sender and no adherence to the underlying
mem chan buffer size settings (which is still unsolved btw).
To solve this we add non-backpressure style message pushing inside
``Actor._push_result()`` by default and only use the backpressure
``trio.MemorySendChannel.send()`` call **iff** the local end of the
context has entered ``Context.open_stream():``. This way if the stream
was never opened but the mem chan is overrun, we relay back to the
sender a (new exception) ``StreamOverrun`` error which is raised in the
sender's scope with a special error message about the stream never
having been opened. Further, this behaviour (non-backpressure style
where senders can expect an error on overruns) can now be enabled with
``.open_stream(backpressure=False)`` and the underlying mem chan size
can be specified with a kwarg ``msg_buffer_size: int``.
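In (hedged) sketch form, with an illustrative slow-consumer target and
an assumed top-level ``StreamOverrun`` export:

.. code:: python

    import tractor
    import trio


    @tractor.context
    async def never_opens(ctx: tractor.Context) -> None:
        await ctx.started()
        # never calls ``ctx.open_stream()``, so inbound msgs
        # pile up in the feeder mem chan until it overruns
        await trio.sleep_forever()


    async def main():
        async with tractor.open_nursery() as n:
            portal = await n.start_actor(
                'slowpoke',
                enable_modules=[__name__],
            )
            async with (
                portal.open_context(never_opens) as (ctx, _),
                ctx.open_stream(
                    backpressure=False,  # error on overrun
                    msg_buffer_size=16,  # underlying mem chan size
                ) as stream,
            ):
                try:
                    for i in range(1000):
                        await stream.send(i)
                except tractor.StreamOverrun:
                    print('receiver never opened a stream; overrun relayed')
            await portal.cancel_actor()


    if __name__ == '__main__':
        trio.run(main)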
Further bug fixes and enhancements in this changeset include:
- fix a race we were ignoring where if the callee task opened a context
it could enter ``Context.open_stream()`` before calling
``.started()``.
- Disallow calling ``Context.started()`` more than once.
- Enable ``Context`` linked tasks error relaying via the new
``Context._maybe_raise_from_remote_msg()`` which (for now) uses
a simple ``trio.Nursery.start_soon()`` to raise the error via closure
in the local scope.
- `#267 <https://github.com/goodboy/tractor/issues/267>`_: This
(finally) adds fully acknowledged remote cancellation messaging
support for both explicit ``Portal.cancel_actor()`` calls as well as
when there is a "runtime-wide" cancellations (eg. during KBI or
general actor nursery exception handling which causes a full actor
"crash"/termination).
You can think of this as the most ideal case of the 2-generals problem where the
actor requesting the cancel of its child is able to always receive back
the ACK to that request. This leads to a more deterministic shutdown of
the child where the parent is able to wait for the child to fully
respond to the request. On a localhost setup, where the parent can
monitor the state of the child through process or other OS APIs instead
of solely through IPC messaging, the parent can know whether or not the
child decided to cancel with more certainty. In the case of separate
hosts, we still rely on a simple timeout approach until such a time
where we prefer to get "fancier".
- `#271 <https://github.com/goodboy/tractor/issues/271>`_: Add a per
actor ``debug_mode: bool`` control to our nursery.
This allows spawning actors via ``ActorNursery.start_actor()`` (and
other dependent methods) with a ``debug_mode=True`` flag much like
``tractor.open_nursery():`` such that per process crash handling
can be toggled for cases where a user does not need/want all child actors
to drop into the debugger on error. This is often useful when you have
actor-tasks which are expected to error often (and be re-run) but want
to specifically interact with some (problematic) child.
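A minimal sketch (actor names illustrative):

.. code:: python

    import tractor
    import trio


    async def main():
        async with tractor.open_nursery() as n:
            # only this (problematic) child drops into the
            # debugger on crash..
            await n.start_actor(
                'flaky_worker',
                enable_modules=[__name__],
                debug_mode=True,
            )
            # ..while this one crashes without REPL entry
            await n.start_actor(
                'boring_worker',
                enable_modules=[__name__],
            )


    if __name__ == '__main__':
        trio.run(main)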
Bugfixes
--------
- `#239 <https://github.com/goodboy/tractor/issues/239>`_: Fix
keyboard interrupt handling in ``Portal.open_context()`` blocks.
Previously this was not triggering cancellation of the remote task
context and could result in hangs if a stream was also opened. This
fix is to accept `BaseException` since it is likely any other top
level exception other then KBI (even though not expected) should also
get this result.
- `#264 <https://github.com/goodboy/tractor/issues/264>`_: Fix
``Portal.run_in_actor()`` returns ``None`` result.
``None`` was being used as the cached result flag and obviously breaks
on a ``None`` returned from the remote target task. This would cause an
infinite hang if user code ever called ``Portal.result()`` *before* the
nursery exit. The simple fix is to use the *return message* as the
initial "no-result-received-yet" flag value and, once received, the
return value is read from the message to avoid the cache logic error.
- `#266 <https://github.com/goodboy/tractor/issues/266>`_: Fix
graceful cancellation of daemon actors
Previously, this was a bug where if the soft wait on a sub-process (the
``await .proc.wait()``) in the reaper task teardown was cancelled we
would fail over to the hard reaping sequence (meant for culling off any
potential zombies via system kill signals). The hard reap has a timeout
of 3s (currently though in theory we could make it shorter?) before
system signalling kicks in. This means that any daemon actor still
running during nursery exit would get hard reaped (3s later) instead of
cancelled via IPC message. Now we catch the ``trio.Cancelled``, call
``Portal.cancel_actor()`` on the daemon and expect the child to
self-terminate after the runtime cancels and shuts down the process.
- `#278 <https://github.com/goodboy/tractor/issues/278>`_: Repair
inter-actor stream closure semantics to work correctly with
``tractor.trionics.BroadcastReceiver`` task fan out usage.
A set of previously unknown bugs discovered in `#257
<https://github.com/goodboy/tractor/pull/257>`_ let graceful stream
closure result in hanging consumer tasks that use the broadcast APIs.
This adds better internal closure state tracking to the broadcast
receiver and message stream APIs and in particular ensures that when an
underlying stream/receive-channel (a broadcast receiver is receiving
from) is closed, all consumer tasks waiting on that underlying channel
are woken so they can receive the ``trio.EndOfChannel`` signal and
promptly terminate.
tractor 0.1.0a3 (2021-11-02)
============================
Features
--------
- Switch to using the ``trio`` process spawner by default on windows. (#166)
This gets windows users debugger support (manually tested) and in
general a more resilient (nested) actor tree implementation.
- Add optional `msgspec <https://jcristharif.com/msgspec/>`_ support
as an alternative, faster MessagePack codec. (#214)
Provides us with a path toward supporting typed IPC message contracts. Further,
``msgspec`` structs may be a valid tool to start for formalizing our
"SC dialog un-protocol" messages as described in `#36
<https://github.com/goodboy/tractor/issues/36>`_.
- Introduce a new ``tractor.trionics`` `sub-package`_ that exposes
a selection of our relevant high(er) level trio primitives and
goodies. (#241)
At the outset we offer a ``gather_contexts()`` context manager for
concurrently entering a sequence of async context managers (much like
a version of ``asyncio.gather()`` but for context managers) and use it
in a new ``tractor.open_actor_cluster()`` manager-helper that can be
entered to concurrently spawn a flat actor pool. We also now publicly
expose our "broadcast channel" APIs (``open_broadcast_receiver()``)
from here.
.. _sub-package: ../tractor/trionics
- Change the core message loop to handle task and actor-runtime cancel
requests immediately instead of scheduling them as is done for rpc-task
requests. (#245)
In order to obtain more reliable teardown mechanics for (complex) actor
trees it's important that we specially treat cancel requests as having
higher priority. Previously, it was possible that task cancel requests
could actually also themselves be cancelled if an "actor-runtime" cancel
request was received (can happen during messy multi actor crashes that
propagate). Instead cancels now block the msg loop until serviced and
a response is relayed back to the requester. This also allows for
improved debugger support since we have determinism guarantees about
which processes must wait before hard killing their children.
- (`#248 <https://github.com/goodboy/tractor/pull/248>`_) Drop Python
3.8 support in favour of rolling with the two latest releases for the time
being.
Misc
----
- (`#243 <https://github.com/goodboy/tractor/pull/243>`_) add a distinct
``'CANCEL'`` log level to allow the runtime to emit details about
cancellation machinery statuses.
tractor 0.1.0a2 (2021-09-07)
============================
Features
--------
- Add `tokio-style broadcast channels
<https://docs.rs/tokio/1.11.0/tokio/sync/broadcast/index.html>`_ as
a solution for `#204 <https://github.com/goodboy/tractor/pull/204>`_ and
discussed thoroughly in `trio/#987
<https://github.com/python-trio/trio/issues/987>`_.
This gives us local task broadcast functionality using a new
``BroadcastReceiver`` type which can wrap ``trio.ReceiveChannel`` and
provide fan-out copies of a stream of data to every subscribed consumer.
We use this new machinery to provide a ``ReceiveMsgStream.subscribe()``
async context manager which can be used by actor-local consumer tasks
to easily pull from a shared and dynamic IPC stream. (`#229
<https://github.com/goodboy/tractor/pull/229>`_)
Bugfixes
--------
- Handle broken channel/stream faults where the root's tty lock is left
acquired by some child actor who went MIA and the root ends up hanging
indefinitely. (`#234 <https://github.com/goodboy/tractor/pull/234>`_)
There are two parts here: we no longer shield-wait on the lock and we
now always do our best to release the lock on the expected worst-case
connection faults.
Deprecations and Removals
-------------------------
- Drop stream "shielding" support which was originally added to sidestep
a cancelled call to ``.receive()``.
In the original api design a stream instance was returned directly from
a call to ``Portal.run()`` and thus there was no "exit phase" to handle
cancellations and errors which would trigger implicit closure. Now that
we have said enter/exit semantics with ``Portal.open_stream_from()`` and
``Context.open_stream()`` we can drop this implicit (and arguably
confusing) behavior. (`#230 <https://github.com/goodboy/tractor/pull/230>`_)
- Drop Python 3.7 support in preparation for supporting 3.9+ syntax.
(`#232 <https://github.com/goodboy/tractor/pull/232>`_)
tractor 0.1.0a1 (2021-08-01)
============================
Features
--------
- Updated our uni-directional streaming API (`#206
<https://github.com/goodboy/tractor/pull/206>`_) to require a context
manager style ``async with Portal.open_stream_from(target) as stream:``
which explicitly determines when to stop a stream in the calling (aka
portal opening) actor much like ``async_generator.aclosing()``
enforcement.
- Improved the ``multiprocessing`` backend sub-actor reaping (`#208
<https://github.com/goodboy/tractor/pull/208>`_) during actor nursery
exit, particularly during cancellation scenarios that previously might
result in hard to debug hangs.
- Added initial bi-directional streaming support in `#219
<https://github.com/goodboy/tractor/pull/219>`_ with follow up debugger
improvements via `#220 <https://github.com/goodboy/tractor/pull/220>`_
using the new ``tractor.Context`` cross-actor task syncing system.
The debugger upgrades add an edge triggered last-in-tty-lock semaphore
which allows the root process for a tree to avoid clobbering children
who have queued to acquire the ``pdb`` repl by waiting to cancel
sub-actors until the lock is known to be released **and** has no
pending waiters.
Experiments and WIPs
--------------------
- Initial optional ``msgspec`` serialization support in `#214
<https://github.com/goodboy/tractor/pull/214>`_ which should hopefully
land by next release.
- Improved "infect ``asyncio``" cross-loop task cancellation and error
propagation by vastly simplifying the cross-loop-task streaming approach.
We may end up just going with a use of ``anyio`` in the medium term to
avoid re-doing work done by their cross-event-loop portals. See the
``infect_asyncio`` for details.
Improved Documentation
----------------------
- `Updated our readme <https://github.com/goodboy/tractor/pull/211>`_ to
include more (and better) `examples
<https://github.com/goodboy/tractor#run-a-func-in-a-process>`_ (with
matching multi-terminal process monitoring shell commands) as well as
added many more examples to the `repo set
<https://github.com/goodboy/tractor/tree/master/examples>`_.
- Added a readme `"actors under the hood" section
<https://github.com/goodboy/tractor#under-the-hood>`_ in an effort to
guard against suggestions for changing the API away from ``trio``'s
*tasks-as-functions* style.
- Moved to using the `sphinx book theme
<https://sphinx-book-theme.readthedocs.io/en/latest/index.html>`_
though it needs some heavy tweaking and doesn't seem to show our logo
on rtd :(
Trivial/Internal Changes
------------------------
- Added a new ``TransportClosed`` internal exception/signal (`#215
<https://github.com/goodboy/tractor/pull/215>`_) for catching TCP
channel gentle closes instead of silently falling through the message
handler loop via an async generator ``return``.
Deprecations and Removals
-------------------------
- Dropped support for invoking sync functions (`#205
<https://github.com/goodboy/tractor/pull/205>`_) in other
actors/processes since you can always wrap a sync function from an
async one. Users can instead consider using ``trio-parallel`` which
is a project specifically geared for purely synchronous calls in
sub-processes.
- Deprecated our ``tractor.run()`` entrypoint `#197
<https://github.com/goodboy/tractor/pull/197>`_; the runtime is now
either started implicitly in first actor nursery use or via an
explicit call to ``tractor.open_root_actor()``. Full removal of
``tractor.run()`` will come by beta release.
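A minimal sketch of the explicit replacement entrypoint:

.. code:: python

    import tractor
    import trio


    async def main():
        # explicitly start the runtime in this (root) process
        # instead of using the deprecated ``tractor.run()``
        async with tractor.open_root_actor():
            print(f'root actor up: {tractor.current_actor().uid}')


    if __name__ == '__main__':
        trio.run(main)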
tractor 0.1.0a0 (2021-02-28)
============================
..
TODO: fill out more of the details of the initial feature set in some TLDR form
Summary
-------
- ``trio`` based process spawner (using ``subprocess``)
- initial multi-process debugging with ``pdb++``
- windows support using both ``trio`` and ``multiprocessing`` spawners
- "portal" api for cross-process, structured concurrent, (streaming) IPC

docs/Makefile
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS    ?=
SPHINXBUILD   ?= sphinx-build
SOURCEDIR     = .
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

docs/README.rst
|logo| ``tractor``: next-gen Python parallelism
|gh_actions|
|docs|
``tractor`` is a `structured concurrent`_, multi-processing_ runtime
built on trio_.
Fundamentally, ``tractor`` gives you parallelism via
``trio``-"*actors*": independent Python processes (aka
non-shared-memory threads) which maintain structured
concurrency (SC) *end-to-end* inside a *supervision tree*.
Cross-process (and thus cross-host) SC is accomplished through the
combined use of our "actor nurseries_" and an "SC-transitive IPC
protocol" constructed on top of multiple Pythons each running a ``trio``
scheduled runtime - a call to ``trio.run()``.
We believe the system adheres to the `3 axioms`_ of an "`actor model`_"
but likely *does not* look like what *you* probably think an "actor
model" looks like, and that's *intentional*.
The first step to grok ``tractor`` is to get the basics of ``trio`` down.
A great place to start is the `trio docs`_ and this `blog post`_.
Features
--------
- **It's just** a ``trio`` API
- *Infinitely nestable* process trees
- Builtin IPC streaming APIs with task fan-out broadcasting
- A "native" multi-core debugger REPL using `pdbp`_ (a fork & fix of
`pdb++`_ thanks to @mdmintz!)
- Support for a swappable, OS specific, process spawning layer
- A modular transport stack, allowing for custom serialization (eg. with
`msgspec`_), communications protocols, and environment specific IPC
primitives
- Support for spawning process-level-SC, inter-loop one-to-one-task oriented
``asyncio`` actors via "infected ``asyncio``" mode
- `structured chadcurrency`_ from the ground up
Run a func in a process
-----------------------
Use ``trio``'s style of focusing on *tasks as functions*:
.. code:: python

    """
    Run with a process monitor from a terminal using::

        $TERM -e watch -n 0.1 "pstree -a $$" \
            & python examples/parallelism/single_func.py \
            && kill $!

    """
    import os

    import tractor
    import trio


    async def burn_cpu():

        pid = os.getpid()

        # burn a core @ ~ 50kHz
        for _ in range(50000):
            await trio.sleep(1/50000/50)

        return os.getpid()


    async def main():

        async with tractor.open_nursery() as n:

            portal = await n.run_in_actor(burn_cpu)

            # burn rubber in the parent too
            await burn_cpu()

            # wait on result from target function
            pid = await portal.result()

        # end of nursery block
        print(f"Collected subproc {pid}")


    if __name__ == '__main__':
        trio.run(main)
This runs ``burn_cpu()`` in a new process and reaps it on completion
of the nursery block.
If you only need to run a sync function and retrieve a single result, you
might want to check out `trio-parallel`_.
Zombie safe: self-destruct a process tree
-----------------------------------------
``tractor`` tries to protect you from zombies, no matter what.
.. code:: python

    """
    Run with a process monitor from a terminal using::

        $TERM -e watch -n 0.1 "pstree -a $$" \
            & python examples/parallelism/we_are_processes.py \
            && kill $!

    """
    from multiprocessing import cpu_count
    import os

    import tractor
    import trio


    async def target():
        print(
            f"Yo, i'm '{tractor.current_actor().name}' "
            f"running in pid {os.getpid()}"
        )
        await trio.sleep_forever()


    async def main():

        async with tractor.open_nursery() as n:

            for i in range(cpu_count()):
                await n.run_in_actor(target, name=f'worker_{i}')

            print('This process tree will self-destruct in 1 sec...')
            await trio.sleep(1)

            # raise an error in root actor/process and trigger
            # reaping of all minions
            raise Exception('Self Destructed')


    if __name__ == '__main__':
        try:
            trio.run(main)
        except Exception:
            print('Zombies Contained')
If you can create zombie child processes (without using a system signal)
it **is a bug**.
"Native" multi-process debugging
--------------------------------
Using the magic of `pdbp`_ and our internal IPC, we've
been able to create a native feeling debugging experience for
any (sub-)process in your ``tractor`` tree.
.. code:: python

    from os import getpid

    import tractor
    import trio


    async def breakpoint_forever():
        "Indefinitely re-enter debugger in child actor."
        while True:
            yield 'yo'
            await tractor.breakpoint()


    async def name_error():
        "Raise a ``NameError``"
        getattr(doggypants)


    async def main():
        """Test breakpoint in a streaming actor.
        """
        async with tractor.open_nursery(
            debug_mode=True,
            loglevel='error',
        ) as n:

            p0 = await n.start_actor('bp_forever', enable_modules=[__name__])
            p1 = await n.start_actor('name_error', enable_modules=[__name__])

            # retrieve results
            stream = await p0.run(breakpoint_forever)
            await p1.run(name_error)


    if __name__ == '__main__':
        trio.run(main)
You can run this with::

    python examples/debugging/multi_daemon_subactors.py
And, yes, there's a built-in crash handling mode B)
We're hoping to add a respawn-from-repl system soon!
SC compatible bi-directional streaming
--------------------------------------
Yes, you saw it here first; we provide 2-way streams
with reliable, transitive setup/teardown semantics.
Our nascent API is reminiscent of ``trio.Nursery.start()``-style
invocation:
.. code:: python

    import trio
    import tractor


    @tractor.context
    async def simple_rpc(
        ctx: tractor.Context,
        data: int,
    ) -> None:
        '''Test a small ping-pong 2-way streaming server.

        '''
        # signal to parent that we're up much like
        # ``trio_typing.TaskStatus.started()``
        await ctx.started(data + 1)

        async with ctx.open_stream() as stream:

            count = 0
            async for msg in stream:
                assert msg == 'ping'
                await stream.send('pong')
                count += 1

            else:
                assert count == 10


    async def main() -> None:

        async with tractor.open_nursery() as n:

            portal = await n.start_actor(
                'rpc_server',
                enable_modules=[__name__],
            )

            # XXX: this syntax requires py3.9
            async with (
                portal.open_context(
                    simple_rpc,
                    data=10,
                ) as (ctx, sent),

                ctx.open_stream() as stream,
            ):
                assert sent == 11

                count = 0
                # receive msgs using async for style
                await stream.send('ping')

                async for msg in stream:
                    assert msg == 'pong'
                    await stream.send('ping')
                    count += 1

                    if count >= 9:
                        break

            # explicitly teardown the daemon-actor
            await portal.cancel_actor()


    if __name__ == '__main__':
        trio.run(main)
See original proposal and discussion in `#53`_ as well
as follow up improvements in `#223`_ that we'd love to
hear your thoughts on!
.. _#53: https://github.com/goodboy/tractor/issues/53
.. _#223: https://github.com/goodboy/tractor/issues/223
Worker poolz are easy peasy
---------------------------
The initial ask from most new users is *"how do I make a worker
pool thing?"*.
``tractor`` is built to handle any SC (structured concurrent) process
tree you can imagine; a "worker pool" pattern is a trivial special
case.
We have a `full worker pool re-implementation`_ of the std-lib's
``concurrent.futures.ProcessPoolExecutor`` example for reference.
You can run it like so (from this dir) to see the process tree in
real time::

    $TERM -e watch -n 0.1 "pstree -a $$" \
        & python examples/parallelism/concurrent_actors_primes.py \
        && kill $!
This uses no extra threads, fancy semaphores or futures; all we need
is ``tractor``'s IPC!
"Infected ``asyncio``" mode
---------------------------
Have a bunch of ``asyncio`` code you want to force to be SC at the process level?
Check out our experimental system for `guest-mode`_ controlled
``asyncio`` actors:
.. code:: python

    import asyncio
    from statistics import mean
    import time

    import trio
    import tractor


    async def aio_echo_server(
        to_trio: trio.MemorySendChannel,
        from_trio: asyncio.Queue,
    ) -> None:

        # a first message must be sent **from** this ``asyncio``
        # task or the ``trio`` side will never unblock from
        # ``tractor.to_asyncio.open_channel_from():``
        to_trio.send_nowait('start')

        # XXX: this uses an ``from_trio: asyncio.Queue`` currently but we
        # should probably offer something better.
        while True:
            # echo the msg back
            to_trio.send_nowait(await from_trio.get())
            await asyncio.sleep(0)


    @tractor.context
    async def trio_to_aio_echo_server(
        ctx: tractor.Context,
    ):
        # this will block until the ``asyncio`` task sends a "first"
        # message.
        async with tractor.to_asyncio.open_channel_from(
            aio_echo_server,
        ) as (first, chan):

            assert first == 'start'
            await ctx.started(first)

            async with ctx.open_stream() as stream:

                async for msg in stream:
                    await chan.send(msg)

                    out = await chan.receive()
                    # echo back to parent actor-task
                    await stream.send(out)


    async def main():

        async with tractor.open_nursery() as n:

            p = await n.start_actor(
                'aio_server',
                enable_modules=[__name__],
                infect_asyncio=True,
            )
            async with p.open_context(
                trio_to_aio_echo_server,
            ) as (ctx, first):

                assert first == 'start'

                count = 0
                async with ctx.open_stream() as stream:

                    delays = []
                    send = time.time()

                    await stream.send(count)
                    async for msg in stream:
                        recv = time.time()
                        delays.append(recv - send)
                        assert msg == count
                        count += 1
                        send = time.time()
                        await stream.send(count)

                        if count >= 1e3:
                            break

                print(f'mean round trip rate (Hz): {1/mean(delays)}')

            await p.cancel_actor()


    if __name__ == '__main__':
        trio.run(main)
Yes, we spawn a python process, run ``asyncio``, start ``trio`` on the
``asyncio`` loop, then send commands to the ``trio`` scheduled tasks to
tell ``asyncio`` tasks what to do XD
We need help refining the ``asyncio``-side channel API to be more
``trio``-like. Feel free to sling your opinion in `#273`_!
.. _#273: https://github.com/goodboy/tractor/issues/273
Higher level "cluster" APIs
---------------------------
To be extra terse the ``tractor`` devs have started hacking some "higher
level" APIs for managing actor trees/clusters. These interfaces should
generally be considered provisional for now, but we encourage you to try
them and provide feedback. Here's a new API that lets you quickly
spawn a flat cluster:
.. code:: python

    import trio
    import tractor


    async def sleepy_jane():
        uid = tractor.current_actor().uid
        print(f'Yo i am actor {uid}')
        await trio.sleep_forever()


    async def main():
        '''
        Spawn a flat actor cluster, with one process per
        detected core.

        '''
        portal_map: dict[str, tractor.Portal]
        results: dict[str, str]

        # look at this hip new syntax!
        async with (
            tractor.open_actor_cluster(
                modules=[__name__]
            ) as portal_map,
            trio.open_nursery() as n,
        ):
            for (name, portal) in portal_map.items():
                n.start_soon(portal.run, sleepy_jane)

            await trio.sleep(0.5)

            # kill the cluster with a cancel
            raise KeyboardInterrupt


    if __name__ == '__main__':
        try:
            trio.run(main)
        except KeyboardInterrupt:
            pass
.. _full worker pool re-implementation: https://github.com/goodboy/tractor/blob/master/examples/parallelism/concurrent_actors_primes.py
Install
-------
From PyPI::

    pip install tractor

From git::

    pip install git+git://github.com/goodboy/tractor.git
Under the hood
--------------
``tractor`` is an attempt to pair trionic_ `structured concurrency`_ with
distributed Python. You can think of it as a ``trio``
*-across-processes* or simply as an opinionated replacement for the
stdlib's ``multiprocessing`` but built on async programming primitives
from the ground up.
Don't be scared off by this description. ``tractor`` **is just** ``trio``
but with nurseries for process management and cancel-able streaming IPC.
If you understand how to work with ``trio``, ``tractor`` will give you
the parallelism you may have been needing.
Wait, huh?! I thought "actors" have messages, and mailboxes and stuff?!
***********************************************************************
Let's stop and ask: how many canon actor model papers have you actually read? ;)
From our experience many "actor systems" aren't really "actor models"
since they **don't adhere** to the `3 axioms`_ and pay even less
attention to the problem of *unbounded non-determinism* (which was the
whole point for creation of the model in the first place).
From the author's mouth, **the only thing required** is `adherence to`_
the `3 axioms`_, *and that's it*.
``tractor`` adheres to said base requirements of an "actor model"::

    In response to a message, an actor may:

    - send a finite number of new messages
    - create a finite number of new actors
    - designate a new behavior to process subsequent messages
**and** requires *no further api changes* to accomplish this.
If you want to debate this further please feel free to chime in on our
chat or discuss on one of the following issues *after you've read
everything in them*:
- https://github.com/goodboy/tractor/issues/210
- https://github.com/goodboy/tractor/issues/18
Let's clarify our parlance
**************************
Whether or not ``tractor`` has "actors" underneath should be mostly
irrelevant to users other than for referring to the interactions of our
primary runtime primitives: each Python process + ``trio.run()``
+ surrounding IPC machinery. These are our high level, base
*runtime-units-of-abstraction* which both *are* (as much as they can
be in Python) and will be referred to as our *"actors"*.
The main goal of ``tractor`` is to allow for highly distributed
software that, through the adherence to *structured concurrency*,
results in systems which fail in predictable, recoverable and maybe even
understandable ways; being an "actor model" is just one way to describe
properties of the system.
What's on the TODO:
-------------------
Help us push toward the future of distributed `Python`.
- Erlang-style supervisors via composed context managers (see `#22
<https://github.com/goodboy/tractor/issues/22>`_)
- Typed messaging protocols (ex. via ``msgspec.Struct``, see `#36
<https://github.com/goodboy/tractor/issues/36>`_)
- Typed capability-based (dialog) protocols (see `#196
<https://github.com/goodboy/tractor/issues/196>`_ with draft work
started in `#311 <https://github.com/goodboy/tractor/pull/311>`_)
- We **recently disabled CI-testing on windows** and need help getting
it running again! (see `#327
<https://github.com/goodboy/tractor/pull/327>`_). **We do have windows
support** (and have for quite a while) but since no active hacker
exists in the user-base to help test on that OS, for now we're not
actively maintaining testing due to the added hassle and general
latency..
Feel like saying hi?
--------------------
This project is very much coupled to the ongoing development of
``trio`` (i.e. ``tractor`` gets most of its ideas from that brilliant
community). If you want to help, have suggestions or just want to
say hi, please feel free to reach us in our `matrix channel`_. If
matrix seems too hip, we're also mostly all in the `trio gitter
channel`_!
.. _structured concurrent: https://trio.discourse.group/t/concise-definition-of-structured-concurrency/228
.. _multi-processing: https://en.wikipedia.org/wiki/Multiprocessing
.. _trio: https://github.com/python-trio/trio
.. _nurseries: https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/#nurseries-a-structured-replacement-for-go-statements
.. _actor model: https://en.wikipedia.org/wiki/Actor_model
.. _trionic: https://trio.readthedocs.io/en/latest/design.html#high-level-design-principles
.. _async sandwich: https://trio.readthedocs.io/en/latest/tutorial.html#async-sandwich
.. _3 axioms: https://www.youtube.com/watch?v=7erJ1DV_Tlo&t=162s
.. .. _3 axioms: https://en.wikipedia.org/wiki/Actor_model#Fundamental_concepts
.. _adherence to: https://www.youtube.com/watch?v=7erJ1DV_Tlo&t=1821s
.. _trio gitter channel: https://gitter.im/python-trio/general
.. _matrix channel: https://matrix.to/#/!tractor:matrix.org
.. _pdbp: https://github.com/mdmintz/pdbp
.. _pdb++: https://github.com/pdbpp/pdbpp
.. _guest mode: https://trio.readthedocs.io/en/stable/reference-lowlevel.html?highlight=guest%20mode#using-guest-mode-to-run-trio-on-top-of-other-event-loops
.. _messages: https://en.wikipedia.org/wiki/Message_passing
.. _trio docs: https://trio.readthedocs.io/en/latest/
.. _blog post: https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/
.. _structured concurrency: https://en.wikipedia.org/wiki/Structured_concurrency
.. _structured chadcurrency: https://en.wikipedia.org/wiki/Structured_concurrency
.. _unrequirements: https://en.wikipedia.org/wiki/Actor_model#Direct_communication_and_asynchrony
.. _async generators: https://www.python.org/dev/peps/pep-0525/
.. _trio-parallel: https://github.com/richardsheridan/trio-parallel
.. _msgspec: https://jcristharif.com/msgspec/
.. _guest-mode: https://trio.readthedocs.io/en/stable/reference-lowlevel.html?highlight=guest%20mode#using-guest-mode-to-run-trio-on-top-of-other-event-loops
.. |gh_actions| image:: https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fgoodboy%2Ftractor%2Fbadge&style=popout-square
:target: https://actions-badge.atrox.dev/goodboy/tractor/goto
.. |docs| image:: https://readthedocs.org/projects/tractor/badge/?version=latest
:target: https://tractor.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
.. |logo| image:: _static/tractor_logo_side.svg
:width: 250
:align: middle

c5.1,11.2,10.4,22.4,17.8,32.3c4.9,6.6,6.7,13.5,5.1,21.9c-3.6-0.1-5.6-3.1-7.9-5c-12-9.8-28.1-7.7-38.9,2.8
c-12.6,12.2-19,27.9-25.2,43.7C538.8,616.5,538.2,622.2,534.6,627.2z"/>
<path class="st1" d="M2060.9,1411.1c7.4,4.3,12.3,10.3,20.6,11.4c9.8,1.4,17.7-1.4,24.8-7.5c8.5-7.3,13.9-16.9,18.8-26.8
c5.1-10.4,8.6-21.5,12.7-32.5c24.7,24.7,73.3,78.9,107.2,169.7c-1.1-8.8-3.5-17.2-6.8-25.4c-13.2-33.9-29.3-66.4-49.3-96.8
c-12.8-19.5-27.4-37.7-42.4-55.5c-3.4-4-4.8-7.1-3.5-12.3c12.8-48.8,19.1-98.6,24-148.7c5.9-59.7,8.4-119.6,8.9-179.7
c0-1.3,0-2.7,0.1-4c0.2-7,0.2-7.1,7-7.4c20-0.7,40-1.3,60-1.9c2.3-0.1,4.7,0,7-0.1c3.5-0.2,4.8,1.4,4.6,4.8c-0.1,2,0,4,0,6
c0,188.3,0,376.6,0,565c0,3.2,0,6.4,0,9.8c-4.5-0.2-6.9-2.8-9.5-4.5c-47.7-30.6-95.2-61.4-143-91.7c-9-5.7-16-11.6-19.9-22.4
c-3.7-10.1-11.2-18.9-17.1-28.2C2060.8,1426.2,2058.6,1419.6,2060.9,1411.1z"/>
<path class="st1" d="M2244.8,454.3c-23,66.2-59.6,121.8-106.6,172.4c-4.5-11.6-7.7-22.6-12.8-32.9c-4-8.1-8.4-15.8-14.4-22.7
c-12.6-14.4-29.9-17.7-45.6-3.8c-1.1,1-2,2.7-4.6,2.6c-1.4-5.5-1.1-10.5,2-15.7c8.8-14.9,18.9-29,25.8-45c1-2.3,2.7-3.7,4.7-4.9
c52.7-33.9,105.3-67.7,158-101.5c0.5-0.3,1.3-0.2,2.1-0.4c2.3,2.1,1.2,4.9,1.2,7.3c0.1,189.6,0.1,379.3,0.1,568.9
c0,1.7-0.1,3.3,0,5c0.2,3.6-1.7,4.6-4.9,4.5c-5-0.2-10-0.3-15-0.5c-17-0.5-34-1-51-1.4c-7.4-0.2-7.5-0.2-7.6-7.8
c-0.5-35.3-1.1-70.6-3.1-105.9c-2.6-45.9-6.4-91.7-12.8-137.3c-4.2-30.3-9.4-60.5-17.3-90.2c-1.1-4-0.2-6.6,2.5-9.8
c27.4-31.6,51.5-65.6,70.4-102.8c11.3-22.2,21.7-44.9,28.2-69.2C2244.6,460.9,2245.4,458.8,2244.8,454.3z"/>
<path class="st1" d="M553.7,662.8c3.8,0.2,5.2,2.8,6.9,4.6c14.5,15.2,30.3,29,46.9,41.9c4.1,3.2,5.8,6.4,5.8,11.7
c-0.2,85.6-0.1,171.2-0.1,256.8c0,7.5-0.1,7.7-7.6,7.8c-25,0.4-50,0.6-74.9,1.1c-4.3,0.1-5.9-1.2-5.7-5.6c0.4-13,0.2-26,0.5-39
c1.5-60.3,4.7-120.5,11.8-180.4c3.7-31.7,8.3-63.3,15.4-94.5C553,665.6,553.4,664.1,553.7,662.8z"/>
<path class="st1" d="M553.6,1319.7c-22.4-107.4-27.4-215.3-28.7-323.3c3.3-2.5,6.4-1.5,9.3-1.5c23.3,0.3,46.7,0.7,70,1.1
c9.5,0.1,9.2-1.2,9.2,9.2c0,72.3,0,144.7,0,217c0,13.3-0.1,26.7,0.1,40c0,3.6-0.7,6.2-3.8,8.6c-18.7,14.5-36.3,30.3-52.7,47.4
C556.3,1318.5,555.6,1318.7,553.6,1319.7z"/>
<path class="st1" d="M2119.8,1319.2c-6.7-6.5-12.1-12.3-18-17.5c-11.8-10.2-23.8-20.2-36-30.1c-4.2-3.4-5.9-7.1-5.9-12.7
c0.2-84.6,0.1-169.1,0.1-253.7c0-8.4,0.1-8.5,8-8.5c24.3-0.2,48.6-0.4,72.9-0.5c7.5,0,7.7,0,7.6,7.7c-0.3,45.6-1.9,91.2-5.1,136.7
c-2.7,39.9-6.7,79.6-12.5,119.1C2128.1,1279,2124.6,1298.3,2119.8,1319.2z"/>
<path class="st1" d="M2120.4,664.4c12.3,57.5,18.7,115.7,22.8,174.2c3.1,44.9,4.7,89.8,5.2,134.8c0,2.3-0.1,4.7,0.1,7
c0.3,4-1.7,5.2-5.4,5.1c-6.7-0.2-13.3-0.3-20-0.3c-18.3-0.1-36.7-0.2-55-0.4c-8-0.1-8.2-0.1-8.2-8.2c0-52.3,0-104.6,0-157
c0-32,0.2-64-0.2-96c-0.1-6.3,2-10.3,6.8-14.3c17.1-14.3,35.1-27.7,50.2-44.2C2117.5,664.3,2118.3,663.8,2120.4,664.4z"/>
<path class="st1" d="M878.7,1138.5c2,4.9,3.2,9.1,5.3,12.8c4,7.2,10.5,7.1,14.6,0c3-5.3,4.3-11.2,5.2-17.1c0.8-4.7,3.2-7.1,7.8-8.9
c37.6-14.8,76.3-25.6,115.5-34.7c5-1.1,6.6,0.4,7.3,4.9c0.6,3.5,0.7,8.4,5.3,8.5c5.3,0.1,5.5-5.2,5.8-9c0.5-7.1,3.9-10,10.9-11.2
c40-7,79.8-14.9,120.4-18c2.4-0.2,5.9-2.1,6.9,1.5c1,3.7,1.8,7.8,0.4,11.7c-0.7,1.8-2.9,1.2-4.4,1.5c-9.5,1.6-19.1,2.8-28.6,4.6
c-39.6,7.4-78.6,17-117,29.2c-59.9,19-116.5,45.2-170.8,76.4c-41.6,24-80.2,52.3-117.2,83.1c-39.4,32.8-74.3,69.6-105.4,110.2
c-3,3.9-6.2,7.7-9.4,11.6c-0.1,0.2-0.6,0.2-2.1,0.5c0.2-7-1.1-13.6,1.5-20.3c7.6-19.2,12.9-39.2,16.7-59.5
c1.8-9.6,6.2-17.5,12-24.8c17.2-21.7,36.5-41.5,58.8-58.9c3.4,2.2,3.5,6.6,5.4,9.8c4,7,7.8,14.4,17.4,14.4
c9.6,0,13.8-7.2,17.3-14.4c6.5-13.3,9.4-27.7,12.2-42c0.8-3.9,2.2-6.5,5.7-8.6C808.9,1171.4,842.7,1153.5,878.7,1138.5z"/>
<path class="st1" d="M1032.8,891.9c-42-9-82.2-20.4-121.3-35.7c-4.8-1.9-6.9-4.4-7.7-9c-0.9-5.6-2.3-11.1-4.8-16.2
c-1.5-3.2-3.7-5.9-7.3-6.1c-4.1-0.2-6.8,2.5-8.3,6.1c-1.6,3.6-2.8,7.4-4.7,12.2c-11.4-5.2-22.6-10-33.5-15.2
c-23.1-11.1-45.4-23.7-67.3-37.1c-4.3-2.6-6.6-5.5-7.5-10.7c-2.1-11.4-4.6-22.8-8.9-33.7c-2.1-5.3-4.2-10.5-7.8-15
c-7.6-9.4-17.5-9.4-25.3,0.1c-4.2,5.2-6,11.7-9.8,18.2c-11.1-9.9-22.3-18.8-32.2-28.9c-10.2-10.4-19.5-21.8-28.7-33.3
c-5.6-7-8.9-15.3-10.6-24.4c-3.7-19.3-8.6-38.3-16-56.5c-2.9-7.1-1.4-14.1-0.5-21.8c3.9,4.8,7.9,9.4,11.6,14.3
c23.7,32,51,60.7,80.1,87.7c19,17.6,39.4,33.9,59.9,49.7c45,34.7,93.8,63.3,144.7,87.9c39.2,18.9,79.8,34.7,121.6,47.1
c41.2,12.2,82.8,22.3,125.4,28.2c1,0.1,2,0.2,3,0.4c7.3,1.2,10.5,7.7,7,14.2c-1.3,2.3-3.4,1.7-5.3,1.6c-8-0.7-15.9-1.5-23.8-2.4
c-32.8-3.7-65.1-10.2-97.5-15.7c-7.1-1.2-11.5-3.9-11.7-11.6c0-1.3-0.4-2.6-0.8-3.9c-0.8-2.6-1.9-5.3-5.3-4.9
c-2.7,0.3-3.7,2.6-4.4,4.9C1034.4,885.3,1033.7,888.2,1032.8,891.9z"/>
<path class="st1" d="M1953.8,1233.7c3.8,1,5.8,3.4,8,5.3c21,19.1,40.7,39.4,57.6,62.2c3,4.1,4.9,8.4,6,13.5
c4.5,20.5,9.3,40.9,17.1,60.4c1.3,3.1,2.4,6.3,1.8,9.7c-0.6,3.6,2.4,8.9-2.1,10.4c-2.5,0.8-5.4-3.9-7.3-6.7
c-35.7-51.5-81-93.6-129.4-132.6c-40.8-32.9-84.7-61.2-131-85.6c-39.5-20.8-80.6-37.9-123-52c-45.9-15.2-92.6-26.8-140.3-34.3
c-6.2-1-12.4-2.7-18.7-3.1c-6.5-0.4-5.5-4.5-5.2-8.4c0.4-5.4,2.1-7.1,7.5-6.6c8.3,0.7,16.5,1.9,24.8,2.7
c33.2,3.4,65.8,10.1,98.6,15.7c6.3,1.1,9.2,3.4,9.6,10c0.2,3.6,0.3,9.3,5.3,9.6c6,0.4,5.5-5.6,6.3-9.5c0.7-3.9,2.4-5,6.2-4
c39.4,9.7,78.9,19.2,116.7,34.3c4.5,1.8,6.8,4.2,7.5,9.1c0.9,5.9,2.2,11.8,5.1,17.2c1.7,3.1,3.8,5.6,7.6,5.6c3.8,0,6-2.5,7.6-5.6
c0.9-1.8,1.7-3.6,2.3-5.5c2.8-7.7,2.9-7.9,9.9-4.7c32.6,15.3,64.6,31.4,95,50.8c3.4,2.2,5.1,4.7,5.8,8.6
c2.1,12.1,4.9,24.1,9.4,35.7c1.9,5,4.1,9.8,7.5,14c7.5,9.4,17.7,9.4,25.3,0C1948.9,1245.3,1950.9,1239.5,1953.8,1233.7z"/>
<path class="st1" d="M2043.6,585.1c0.9,7.5,1.7,14.1-0.8,20.4c-8,20.2-13.1,41.2-17.6,62.4c-0.9,4.4-2.6,8.3-5.2,11.7
c-17.8,23.8-38.4,45.2-60.6,64.9c-1.5,1.3-3.2,2.3-5.1,3.8c-2.9-5.3-5.1-10.4-8.2-14.9c-8-11.8-19.3-11.8-27.5-0.1
c-7.4,10.6-11.4,23-12.8,35.4c-1.7,14.6-9.4,22.2-21.3,29.3c-25.5,15.2-52.1,28.1-78.7,41.2c-2.4,1.2-5,2-7.3,3.1
c-2.7,1.3-4.3,0.7-5.2-2.2c-1-3.2-1.9-6.4-3.4-9.3c-4.2-7.7-10.8-7.7-15.1-0.1c-2.5,4.4-3.9,9.3-4.5,14.2c-0.8,7-4.2,10.7-11,13.3
c-27.5,10.2-55.4,18.9-83.9,25.7c-10,2.4-20.1,4.8-30,7.5c-3.8,1-5.4,0-6.1-3.9c-0.7-4-0.2-9.9-6.2-9.6c-5.1,0.2-5.3,6-5.4,9.5
c-0.1,7.5-4.3,9-10.5,10.2c-39.6,7.4-79.4,13.9-119.5,18.1c-2.8,0.3-6.7,2.3-8.1-1.3c-1.4-3.6-1.7-7.9-0.3-11.7
c0.9-2.4,3.9-2,6.2-2.4c13.8-2.4,27.6-4.4,41.3-7c42.1-8.2,83.5-19.3,124.2-33.1c57.6-19.6,112.2-45.7,163.9-77.3
c38.1-23.2,73.8-50.1,107.6-79.4c37.9-32.8,72.6-68.6,102.5-109C2037.5,591.3,2039.4,587.8,2043.6,585.1z"/>
<path class="st1" d="M777.2,1163.8c-0.9-7.9,1.2-15.4,2-23c4.4-45.1,6.3-90.3,6.6-135.6c0.1-8.6,0.2-8.7,8.8-8.7
c22-0.1,44-0.1,65.9,0c8.3,0,8.4,0.1,8.5,7.9c0.2,37.7,2,75.2,6.5,112.6c0.5,3.8,0.3,6-3.9,7.5c-31.7,11.3-62.2,25.3-92.6,39.6
C778.8,1164.3,778.4,1164.1,777.2,1163.8z"/>
<path class="st1" d="M776.8,817.8c3.1-0.6,4.7,0.6,6.5,1.4c28.2,13.5,56.6,26.5,86.2,36.8c5.4,1.9,6.7,4.3,6,10.1
c-4.5,36.7-6.3,73.6-6.4,110.6c0,8.1,0.7,8.3-8.1,8.3c-22.6-0.1-45.3,0-67.9-0.1c-7,0-7.1-0.2-7.2-7.3c-0.4-28.6-0.9-57.3-2.7-85.9
c-1.3-21.3-2.9-42.5-5.5-63.7C777.2,824.9,777.1,821.6,776.8,817.8z"/>
<path class="st1" d="M1896.4,1163.9c-3.3,0.5-5.7-1.5-8.4-2.8c-27.2-13.3-54.8-25.7-83.3-36.1c-7.4-2.7-7.2-2.9-6.3-10.6
c4.4-35.8,5.3-71.8,6.4-107.7c0.1-1.7,0.1-3.3,0-5c-0.3-3.8,1.3-5.4,5.2-5.3c6.3,0.2,12.7,0.1,19,0.1c16.7,0,33.3,0,50,0.1
c8.4,0,8.4,0.1,8.6,8.8c0.8,30.3,1.2,60.6,3.3,90.9c1.4,19.9,2.9,39.9,5.3,59.7C1896.5,1158.6,1897.4,1161.2,1896.4,1163.9z"/>
<path class="st1" d="M1896.8,817.7c-1.3,13.9-2.7,27.7-3.9,41.6c-3.5,38.5-4.5,77-5.2,115.6c0,1.7-0.2,3.3-0.1,5
c0.4,4-1.5,5.5-5.2,5.1c-1-0.1-2,0-3,0c-22.6,0-45.3-0.1-67.9,0.2c-5,0-6.8-1.2-6.8-6.6c-0.4-28.3-1.2-56.5-3.6-84.7
c-0.8-8.9-1.6-17.9-2.7-26.8c-0.9-7.6-1.1-7.8,6.5-10.6c23-8.7,45.8-17.8,67.9-28.7c6-2.9,12-5.7,18-8.6
C1892.5,818.3,1894.3,817.4,1896.8,817.7z"/>
<path class="st1" d="M908.3,1109.6c-1-4.4,0.4-8.6,0.8-12.8c2.5-30.9,4-61.8,4-92.9c0-7.4,0.2-7.5,7.9-7.5
c33.3-0.1,66.7-0.2,100-0.2c7.8,0,8.1,0.1,8,7.4c-0.3,22,1,44,2.3,65.9c0.3,4.5-1,6.5-5.5,7.4c-38.7,8.5-76.4,20.9-114.2,32.5
C910.7,1109.8,909.7,1109.6,908.3,1109.6z"/>
<path class="st1" d="M1765.3,1110.4c-9.6-3-18.1-5.6-26.6-8.4c-29.1-9.6-58.7-17.4-88.5-24.5c-8.9-2.1-8.1-1.4-7.6-10.4
c1.2-21,1.6-42,2.3-62.9c0.2-7.7,0.2-7.9,7.5-7.9c33.3,0,66.7,0.1,100,0.2c7.8,0,8.2,0.3,8.2,7.7c0.2,34,1.7,68,4.8,101.8
C1765.4,1106.9,1765.3,1107.9,1765.3,1110.4z"/>
<path class="st1" d="M908.8,872.2c2.8-1.1,5,0.8,7.5,1.5c36.1,10.3,71.6,22.9,108.5,30.5c5.5,1.1,6.7,3.6,6.4,8.8
c-1.3,21.9-2.6,43.9-2.1,65.9c0.1,4.7-1.2,6.5-6.1,6.4c-34.7-0.2-69.3-0.2-104-0.1c-4.1,0-6.1-1-6-5.6c0.4-35.6-2-71.2-4.8-106.7
C908.2,872.8,908.6,872.5,908.8,872.2z"/>
<path class="st1" d="M1765.6,872.4c-0.9,10.7-1.8,20.9-2.5,31.2c-1.7,24.6-2.4,49.2-2.6,73.8c-0.1,7.7-0.2,7.8-7.7,7.8
c-33.6,0.1-67.2,0.1-100.8,0.1c-6.9,0-6.9-0.2-7.1-7.1c-0.8-21.9-1.3-43.9-2.6-65.8c-0.3-5.4,1.8-7,6.2-8
c14.5-3.5,29-7.1,43.5-10.8c22.5-5.7,44.5-13.5,66.7-20.3C1760.5,872.7,1762.3,871.8,1765.6,872.4z"/>
<path class="st1" d="M725.8,1192.5c-5.9-34.3-8.8-67.4-11-100.6c-1.8-27.6-2.2-55.2-3-82.8c-0.1-2.3,0.1-4.7-0.1-7
c-0.4-4.4,1.6-5.8,5.9-5.7c9,0.2,18,0,27,0c6.7,0,13.3-0.1,20,0c7.2,0.1,7.4,0.2,7.3,7.1c-0.5,23.6-1,47.2-2,70.8
c-1.2,27.2-3.3,54.4-6.7,81.5c-0.2,1.3-0.3,2.6-0.4,4c-0.4,9.2-3.4,16-13,19.7C741.7,1182.7,734.5,1187.7,725.8,1192.5z"/>
<path class="st1" d="M1946.2,1191.7c-10.1-5.6-20.4-11.5-30.8-17.2c-3.3-1.8-3.1-5-3.5-7.8c-1.4-10.2-2.4-20.5-3.5-30.7
c-4.7-43.8-6-87.7-6.7-131.6c-0.1-7.7,0.1-7.8,7.7-7.8c15.3-0.1,30.6,0.1,46-0.1c4.6-0.1,6.5,1.1,6.4,6.1
c-0.5,34-1.2,67.9-3.7,101.8c-2,26.9-4.5,53.7-8.9,80.4C1948.7,1187,1949.1,1189.5,1946.2,1191.7z"/>
<path class="st1" d="M1947.7,789.1c5.8,32.7,8.6,63.8,10.8,95c2.1,30.6,2.8,61.2,3.1,91.9c0.1,9.6,0.3,8.9-8.3,8.9
c-15,0-30-0.1-45,0.2c-5.3,0.1-6.9-1.8-6.7-7c0.9-25.3,1.3-50.6,2.5-75.9c1.3-27.3,3.9-54.4,6.9-81.6c0.9-8.3,3.3-14.2,11.5-17.8
C1930.8,799.2,1938.6,794.1,1947.7,789.1z"/>
<path class="st1" d="M726,789.2c11.3,6.2,21.7,11.9,32.2,17.5c2.9,1.6,2.8,4.2,3.2,6.8c2.7,15.8,4.1,31.7,5.5,47.6
c3.3,37.2,4.5,74.5,5,111.8c0,2.3,0,4.7,0.1,7c0.1,3.1-0.8,5-4.4,5c-17-0.1-34-0.1-51-0.1c-3.3,0-4.8-1.5-4.7-4.8
c0.2-5.7,0-11.3,0.2-17c1-42.6,2.7-85.2,7.3-127.7C720.9,820.5,722.6,805.7,726,789.2z"/>
<path class="st1" d="M1625.1,910.5c-0.6,9.4-1.4,18-1.7,26.5c-0.4,13-0.4,26-0.6,38.9c-0.1,9.3-0.2,9.4-9.2,9.4
c-30.6,0.1-61.3,0.1-91.9,0.1c-8.3,0-16.7-0.1-25,0.1c-4.2,0.1-5.9-1.3-5.9-5.7c0-15-0.1-30-0.7-44.9c-0.2-4.8,2.5-5.5,6-6
c10.9-1.3,21.8-3.6,32.7-3.7c23.9-0.2,46.7-6.5,70-10C1607.1,913.9,1615.2,912.2,1625.1,910.5z"/>
<path class="st1" d="M1625.2,1071.1c-23.5-4.2-45.7-8.1-67.9-12.1c-7.9-1.4-15.8-2.4-23.8-2.5c-12.7-0.1-25.1-2.6-37.7-4
c-4-0.4-5.8-1.6-5.6-6.1c0.5-14.6,0.6-29.3,0.6-43.9c0-4.5,1-6.5,6.1-6.5c39.9,0.2,79.9,0.2,119.8,0.1c5,0,6.1,2.1,6.1,6.6
C1622.9,1025,1623,1047.3,1625.2,1071.1z"/>
<path class="st1" d="M1049.3,1070.4c-1-5.9-0.1-11.8,0.3-17.7c0.9-16.3,1.4-32.6,1.4-48.9c0-7.3,0.2-7.5,6.8-7.5
c39.3-0.1,78.5-0.1,117.8-0.1c7,0,7.1,0.2,7.2,7.2c0.2,14,0,27.9,0.5,41.9c0.2,5.2-1.5,6.6-6.6,7c-16.9,1.3-33.8,2.7-50.6,5.2
c-23.3,3.4-46.7,6.8-69.7,12.3C1054.2,1070.3,1051.9,1071.4,1049.3,1070.4z"/>
<path class="st1" d="M1049.8,910.2c22.1,5.7,44.8,8.7,67.4,12.5c16.4,2.8,33.1,3.3,49.6,5.2c16.4,1.9,16.4,1.5,16.3,17.4
c-0.1,11-0.3,22-0.2,32.9c0.1,4.9-1,7.2-6.6,7.2c-39.9-0.2-79.9-0.2-119.8-0.1c-3.4,0-5.7-0.6-5.6-4.8c0.6-22.6-1.3-45.2-1.9-67.8
C1049,912,1049.4,911.4,1049.8,910.2z"/>
<path class="st1" d="M1260.5,985.4c-21.6,0-43.2,0-64.9,0c-8,0-8.2-0.1-8.1-8.4c0.2-13.6,0.6-27.3,0.8-40.9c0.1-4.2,1.4-5.6,6-5.1
c41.7,4.1,83.5,6.2,125.5,6.2c2.3,0,4.7,0,7,0c7.5,0.2,7.5,0.2,7.6,7.9c0.1,10.3,0.1,20.6,0.1,30.9c0,9.3,0,9.4-9.1,9.4
C1303.8,985.4,1282.2,985.4,1260.5,985.4z"/>
<path class="st1" d="M1485,931.5c0.3,16.2,0.5,32.7,0.8,49.3c0.1,3.4-1.7,4.8-4.9,4.6c-2-0.1-4,0-6,0c-41.6,0-83.2,0-124.9,0
c-2,0-4-0.1-6,0c-4,0.3-5.1-1.8-5-5.5c0.1-11.7,0-23.3,0.1-35c0.1-7.6,0.1-7.8,7.6-7.8c17.3,0.2,34.6-0.6,51.9-1.1
c26.3-0.7,52.5-3.1,78.7-5.1C1479.6,930.9,1481.8,930,1485,931.5z"/>
<path class="st1" d="M1485,1051.1c-35.4-3-69.9-5.3-104.5-6.4c-11.6-0.4-23.3-0.5-34.9-0.4c-4.4,0-6.7-0.8-6.5-6
c0.4-12,0.3-24,0-36c-0.1-4.8,1.6-6.3,6.3-6.2c45,0.1,89.9,0.1,134.9,0c3.5,0,5.8,0.7,5.7,4.8
C1485.5,1017.5,1485.3,1034.1,1485,1051.1z"/>
<path class="st1" d="M1188.4,1050.6c-0.3-17.2-0.5-33.7-0.8-50.3c0-2.9,1.6-4.1,4.4-4.1c1.7,0,3.3,0,5,0c42.6,0,85.1,0,127.7,0
c1.7,0,3.3,0.1,5,0c3.3-0.2,4.9,1.2,4.9,4.6c-0.1,13-0.1,25.9,0,38.9c0,3.6-1.7,4.7-4.9,4.5c-0.7,0-1.3,0-2,0
C1281.6,1043.7,1235.9,1047.2,1188.4,1050.6z"/>
<path class="st1" d="M2010.7,1230.1c-5.1-1.9-9-5.1-13.1-7.7c-9.9-6.1-19.6-12.6-29.8-18.3c-4.8-2.6-5.5-5.6-4.5-10.4
c3.9-20,6.2-40.1,8.2-60.4c4-41.2,5.8-82.4,6.1-123.8c0-3,0.1-6,0-9c-0.1-3,1.1-3.8,4.1-4.1c16.5-1.5,16.7-1.4,16.8,14.7
c0.4,66.4,3.9,132.5,10.7,198.6C2009.9,1216.3,2011.6,1222.7,2010.7,1230.1z"/>
<path class="st1" d="M2011.2,752.5c-1.6,16.9-3.2,33.8-4.8,50.6c-3,30.8-4.6,61.7-6.1,92.6c-1.3,27.3-1.5,54.5-1.8,81.8
c-0.1,7.1-0.2,7.3-7.1,7.3c-16.3,0-13.6,1.3-13.7-12.8c-0.5-54.3-3.2-108.4-10.6-162.2c-1.1-7.9-2.5-15.8-3.9-23.6
c-0.7-3.6-0.1-6,3.5-8.2c13.4-8,26.6-16.4,39.8-24.7C2007.6,752.8,2008.5,751.7,2011.2,752.5z"/>
<path class="st1" d="M662.1,1231.5c1.9-20.4,3.7-38.6,5.4-56.8c3-31.9,4.7-63.8,6-95.8c1-24.6,1.6-49.3,1.6-74
c0-8.3,0.1-8.3,7.9-8.4c1.3,0,2.7,0,4,0c8.6,0.1,8.7,0.1,8.8,8.7c0.4,40,1.8,80,5.4,119.8c2,22.9,4.4,45.8,9,68.3
c1.1,5.5,0.2,8.5-5.1,11c-12.7,6-23.9,14.5-35.5,22.2C667.7,1227.9,665.8,1229.1,662.1,1231.5z"/>
<path class="st1" d="M662.9,752.1c2.6-0.9,3.9,1,5.4,2.1c12.4,8.4,24.6,17.1,38,23.9c3.3,1.7,5,3.6,4.2,7.7
c-11.3,61.8-13.7,124.4-14.8,187c-0.2,13.5,2.2,12.1-12.3,12.1c-8.4,0-8.5-0.1-8.5-8.9c0.1-52.7-2.6-105.3-6.4-157.8
c-1.5-20.6-4-41.1-6-61.7C662.5,755.1,661.9,753.4,662.9,752.1z"/>
<path class="st1" d="M2044.3,728.6c0,3.4,0,6.3,0,9.2c0,78.9,0,157.9,0,236.8c0,1.7-0.1,3.3,0,5c0.4,3.9-1.3,5.4-5.2,5.3
c-5.9-0.2-11.9,0-18.3,0c0-26.3,0.3-51.6,1.3-76.8c1.7-42.3,4.4-84.5,8.9-126.5c1.6-14.9,3.9-29.7,5.6-44.6
C2037.2,732.4,2038.9,729.7,2044.3,728.6z"/>
<path class="st1" d="M2044.3,1253c-5.6-1.2-7-4.1-7.7-8.6c-3.1-19.4-5.2-38.9-7-58.5c-5.6-59-8.7-118.2-8.7-177.6
c0-13.6-1.4-11.7,11.5-11.8c13.3,0,11.9-1.5,11.9,11.5c0,78.6,0,157.2,0,235.8C2044.3,1246.6,2044.3,1249.3,2044.3,1253z"/>
<path class="st1" d="M629.2,728.7c5.8,1.6,7.4,4.6,7.9,9c2.3,21.2,5.3,42.2,7.3,63.4c5.3,55.3,7.7,110.8,8.5,166.4
c0,3.3,0.1,6.7,0,10c-0.1,7.5-0.1,7.5-7.7,7.7c-16,0.5-16,0.5-16-15.3c0-76.5,0-153.1,0-229.6C629.2,736.7,629.2,733,629.2,728.7z"
/>
<path class="st1" d="M629.2,1253.2c0-6,0-10.6,0-15.3c0-75.7,0-151.3,0-227c0-2,0-4,0-6c-0.1-9.2-0.3-8.7,8.7-8.6
c15.1,0,15.2,0,15,15.6c-0.7,62.7-3.8,125.2-10.3,187.6c-1.5,14.9-3.9,29.7-5.5,44.6C636.6,1248.7,634.5,1251.1,629.2,1253.2z"/>
<path class="st1" d="M2060,697.5c0-15.9,0-29.5,0-43c0-3.7,0.3-7.3-0.1-11c-2-19,6.4-34.8,15.3-50.5c1-1.7,2.3-3.2,3.6-4.7
c5.4-6.2,8.1-6.2,13.4,0.4c6.3,7.8,10.3,17.1,13.7,26.4c3.2,8.7,5.9,17.6,8.7,26.5c1.3,4,1,7.1-2.4,10.7
C2096.7,668.2,2079,681.6,2060,697.5z"/>
<path class="st1" d="M613.2,1283.1c0,24.5,1.5,47.6-0.5,70.3c-1.4,15.4-9,29.8-19.7,41.7c-3.8,4.2-5.7,3.9-10-0.4
c-4.3-4.3-7.2-9.6-9.8-15c-6.7-13.8-11.2-28.4-15.3-43.2c-0.8-3-1.3-5.6,1.5-8.1C576.3,1313,592.4,1296.7,613.2,1283.1z"/>
<path class="st1" d="M2060,1284.4c19.3,15.5,36.8,29.3,52.5,45.2c2.6,2.6,3.7,5.1,2.6,8.8c-5.1,17.2-10,34.5-19.7,50
c-7.7,12.2-12.4,12.7-19.8,0.6c-9.5-15.4-16.9-31.4-15.8-50.5C2060.7,1321.2,2060,1303.9,2060,1284.4z"/>
<path class="st1" d="M611.9,698.2c-8.5-6.7-17.1-12.8-25-19.7c-9.1-7.8-17.5-16.4-26.3-24.5c-2.3-2.2-4-4.1-3-7.7
c5.3-18.2,10.3-36.5,20.4-52.8c7.8-12.7,12.8-13,20.4-0.3c8.9,14.9,16.3,30.2,15.1,48.6c-1,15.9-0.2,31.9-0.3,47.9
C613.2,692.2,614,694.9,611.9,698.2z"/>
<path class="st1" d="M1788.2,864c4.1,27,4.9,53.5,6.1,80.1c0.5,11.3,0.2,22.6,0.7,33.9c0.2,5.2-1.6,7.4-6.7,7.3
c-4.7-0.1-9.3-0.3-14,0c-4.6,0.3-5.5-2-5.4-6c0.2-26.3,1.3-52.5,3.2-78.8c0.5-7.6,1.2-15.2,2.1-22.8
C1775.6,867.2,1775.7,867.2,1788.2,864z"/>
<path class="st1" d="M885.3,1118.4c-5.1-39.1-6.6-77.6-6.6-116.3c0-4.2,1.3-5.9,5.7-5.8c25,0.7,20.1-4.1,19.8,19.4
c-0.4,29.6-1.5,59.3-5,88.8C897.9,1115.4,898.1,1115.4,885.3,1118.4z"/>
<path class="st1" d="M885.7,863.5c12.4,2.1,12.2,2.1,13.5,13.8c3.8,33,4.6,66.2,5.3,99.4c0.2,8.3,0,8.5-7.9,8.4
c-5.6-0.1-12.6,2.4-16.3-1.2c-3.8-3.6-1.3-10.7-1.3-16.2c0.1-33.9,1.8-67.7,6.1-101.4C885.1,865.4,885.4,864.5,885.7,863.5z"/>
<path class="st1" d="M1788.4,1117.5c-4,0.3-6.2-1.5-8.7-2.2c-3.2-0.9-4.6-2.9-4.9-6.5c-0.7-10.3-2.2-20.5-2.9-30.8
c-1.5-24.6-2.8-49.2-2.8-73.9c0-7.5,0.1-7.8,7.7-7.6c5.9,0.2,13.3-2.7,17.2,1.6c3.4,3.7,1,10.9,0.9,16.5
C1794.3,1048.8,1792.8,1083.1,1788.4,1117.5z"/>
<path class="st1" d="M1481.1,1078.1c-10.7,0.3-20.5-1.5-30.3-2.6c-34.1-3.7-68.2-5.9-102.5-5.6c-1.3,0-2.7-0.1-4,0
c-4.9,0.5-5.1-2.5-5.2-6.2c-0.1-4,0.7-6,5.4-6c20,0.1,39.9-0.2,59.9,1c22.9,1.4,45.8,3,68.7,4.7c7.9,0.6,7.8,0.9,8.2,8.7
C1481.2,1073.7,1481.1,1075.4,1481.1,1078.1z"/>
<path class="st1" d="M1480,903c2,3.3,0.7,6.3,1,9.2c0.4,4.3-1.6,5.6-5.8,5.9c-42.2,3.3-84.3,6.2-126.7,5.9c-1.3,0-2.7-0.1-4,0
c-4.5,0.5-5.5-1.7-5.6-5.8c-0.1-4.5,0.9-6.6,6-6.5c29.7,0.7,59.2-1.5,88.8-3.9C1449.3,906.5,1464.7,904.6,1480,903z"/>
<path class="st1" d="M1192.2,1078.2c1-14.7,0.9-15,13.5-15.8c40.5-2.9,80.9-5.4,121.6-4.6c7.3,0.1,7.4,0.5,7.3,7
c-0.1,4-1.7,5.2-5.4,5.1c-6-0.2-12-0.1-18,0c-37.3,0.2-74.3,3.2-111.3,7.8C1197.6,1078,1195.3,1078,1192.2,1078.2z"/>
<path class="st1" d="M1192.6,903.2c17.2,2.2,33.7,3.8,50.2,5c28.2,2.1,56.4,4.4,84.7,3.6c3.8-0.1,7.3-0.4,7.1,5.4
c-0.3,6.3-0.2,6.8-6.6,6.8c-18.6,0-37.2,0.4-55.9-0.8c-19.2-1.2-38.5-2.1-57.8-3.2c-3-0.2-6-0.4-8.9-0.8
C1192.8,917.6,1192.8,917.6,1192.6,903.2z"/>
<path class="st1" d="M653.6,694.6c2.5,2.7,3.4,3.5,4.2,4.6c15.1,20.9,32.8,39.4,51.8,56.7c2.5,2.3,7.7,4.8,5,8.5
c-2.7,3.9-6.2-1-8.9-2.4c-14.7-7.8-28.1-17.9-42.2-26.6c-3.1-1.9-4.3-4.4-4.8-8C657.4,717,655.5,706.9,653.6,694.6z"/>
<path class="st1" d="M654,1284.6c0.9-5.5,2.7-11,2.6-16.4c-0.1-14.4,6.3-23.8,19.2-30.2c10.6-5.3,20.3-12.5,30.5-18.8
c2.5-1.5,4.9-3.3,7.8-3.6c1.6,2.1,1.5,4,0.2,5.7c-1,1.3-2.4,2.3-3.6,3.4c-18.4,17.3-36.5,34.8-51.2,55.5c-1.1,1.6-2.6,2.9-4,4.3
C655.3,1284.8,654.8,1284.6,654,1284.6z"/>
<path class="st1" d="M2019.1,698.8c-1.4,9.8-3,19.6-4.2,29.5c-0.4,3-2.2,4.7-4.4,6.2c-15.3,9.9-30.7,19.7-46.1,29.5
c-1.5,1-3.7,4-5.7,0.4c-1.5-2.6,0.2-4.4,2.1-5.9c11.9-9.2,21.5-20.7,32.4-30.8c8.4-7.7,14.5-17.2,21.4-26.1c0.9-1.2,2-2.4,3-3.6
C2018.1,698.2,2018.6,698.5,2019.1,698.8z"/>
<path class="st1" d="M2017.1,1283.9c-11.8-19.6-28.7-34.6-44.6-50.5c-3.7-3.7-7.8-7.2-11.8-10.6c-1.9-1.7-3.4-3.4-1.7-5.9
c1.7-2.4,3.5-0.5,5,0.5c15.7,10,31.3,20,46.9,30c2,1.3,3.5,2.9,3.9,5.4c1.4,10.1,2.9,20.2,4.4,30.4
C2018.5,1283.4,2017.8,1283.7,2017.1,1283.9z"/>
<path class="st1" d="M1044.6,1034c-0.6,11-1.1,21.9-1.7,32.9c-0.1,1.6-0.3,3.3-0.8,4.9c-0.3,0.8-1.4,1.6-2.2,1.7
c-0.9,0.1-1.9-0.5-2.1-1.7c-0.1-1-0.3-2-0.4-3c-1.2-22.2-2.3-44.5-2.4-66.8c0-3.2,0-6,4.6-5.9c3.5,0.1,5.6,0.7,5.5,4.9
c-0.2,11-0.1,21.9-0.1,32.9C1044.9,1034,1044.8,1034,1044.6,1034z"/>
<path class="st1" d="M1636.4,1033.8c0,10.6,0,21.2,0,31.8c0,2-0.3,4-0.7,5.9c-0.2,0.8-1.2,1.8-1.9,1.8c-0.8,0-1.9-0.8-2.2-1.5
c-0.5-1.2-0.5-2.6-0.6-3.9c-1.9-21.8-2.2-43.7-2.1-65.6c0-5.7,1.3-6.7,6-6.1c3.9,0.5,3.6,3.1,3.6,5.8c0,10.6,0,21.2,0,31.8
C1637.8,1033.8,1637.1,1033.8,1636.4,1033.8z"/>
<path class="st1" d="M1638.5,948.1c0,10.3,0,20.6,0,31c0,2.6,0.5,5.4-3.3,6.2c-4.4,0.9-6.4-0.5-6.4-5.8c0-19,0-38,1.4-56.9
c0.2-3,0.4-6,0.7-9c0.2-2.2-0.3-5.4,3-5.2c2.5,0.1,2.2,3,2.3,4.9c0.6,11.6,1.1,23.3,1.6,34.9C1638,948.1,1638.2,948.1,1638.5,948.1
z"/>
<path class="st1" d="M1045.1,948c0,10.6-0.1,21.2,0,31.9c0,3.6-0.9,5.5-5,5.5c-3.9-0.1-5.1-1.7-5.1-5.4c0.1-22.6,1.2-45.1,2.5-67.7
c0.1-1.6-0.1-3.7,2-4c2.8-0.4,3.1,1.9,3.2,3.9c0.7,11.9,1.3,23.9,1.9,35.8C1044.8,948,1044.9,948,1045.1,948z"/>
<path class="st1" d="M741.9,740.1c7,10.8,9.7,22,12.6,34.5c-8.1-3.6-13.5-8.9-19.7-13c-2-1.3-2-3.3-1.2-5.3
C735.6,751.1,737,745.6,741.9,740.1z"/>
<path class="st1" d="M1931.7,1241.3c-6.9-10.6-9.7-21.7-12.1-33.3c4.2,0.4,6.4,3.1,9.1,4.9
C1943.2,1223.1,1943.4,1224.7,1931.7,1241.3z"/>
<path class="st1" d="M1931.6,740c12,17.2,11.9,18.3-3.6,29.1c-2.4,1.7-4.5,3.9-8.5,4.8C1921.8,761.9,1925,750.9,1931.6,740z"/>
<path class="st1" d="M742.1,1241.3c-5.3-5.2-6.5-11.1-8.6-16.5c-0.8-2-0.5-3.8,1.6-5.1c6-3.9,11.2-9.1,19.2-12.6
C751.8,1219.5,748.7,1230.3,742.1,1241.3z"/>
<path class="st1" d="M599.6,517.5c0.6,0.1,1.4,0.1,1.9,0.4c12.6,7.1,13,7.9,10.7,24.4c-5.9-7.9-11-15.2-13.4-24.1
C599,518,599.3,517.7,599.6,517.5z"/>
<path class="st1" d="M597.6,1464.4c4-9.4,8.1-17.2,14-24.4C614.9,1455.5,614.5,1457.9,597.6,1464.4z"/>
<path class="st1" d="M2072.4,516.4c2.2,0.6,2.3,2,1.3,3.7c-3.8,6.1-7.7,12.1-11.9,18.6c-2.8-10.6-2.1-12.7,5.3-17.8
c1.9-1.3,4.2-2,5.3-4.3L2072.4,516.4z"/>
<path class="st1" d="M2061.5,1442.5c4.7,8.5,10.5,14.5,12.5,22.8C2067.2,1459.8,2056.4,1456.8,2061.5,1442.5z"/>
<path class="st1" d="M1781.6,836.9c4.2,9.3,4.2,9.3-1.8,12.6C1777.9,845.4,1780.6,842.1,1781.6,836.9z"/>
<path class="st1" d="M895.4,849.3c-7.5-2.9-7.5-2.9-3.8-11.1C893.8,841.3,893.9,844.7,895.4,849.3z"/>
<path class="st1" d="M630.1,1284.6c-0.2-5.4-2.7-11,1.8-17C634.2,1274.4,629.7,1279.3,630.1,1284.6z"/>
<path class="st1" d="M629.9,697.1c0.5,5.4,3.9,10.3,2.4,16.6C626.9,708.5,630.1,702.5,629.9,697.1z"/>
<path class="st1" d="M2044.5,1284.5c-2.2-5-4.3-9.2-2.8-15C2046.1,1274.3,2043.5,1279.4,2044.5,1284.5z"/>
<path class="st1" d="M2041.9,712.1c-2.2-6,1.1-10.1,2.2-14.6C2043.5,702.2,2046,707.4,2041.9,712.1z"/>
<path class="st1" d="M892.2,1143.9c-4.6-7.9-4.6-7.9,1.6-11.5C895.7,1136.5,892.6,1139.7,892.2,1143.9z"/>
<path class="st1" d="M1779.6,1132.2c6.1,3.1,6.1,3.1,2.7,11.1c-1.9-1-1.7-3.1-2.3-4.6C1779.2,1137,1778.6,1135.1,1779.6,1132.2z"/>
<path class="st1" d="M1934.8,1202.5c4.1-0.6,6.5,2.5,11.4,4.9c-6.3,0.5-8.2-3.5-11.3-5.1L1934.8,1202.5z"/>
<path class="st1" d="M1935.8,777c3.1,0.9,4.5-3.5,7.8-2.9c-1.8,4-4.7,3.9-7.9,2.8L1935.8,777z"/>
<path class="st1" d="M729.5,774c2.7-0.5,4.4,0.6,5.8,3.2C732.1,778,731,775.7,729.5,774z"/>
<path class="st1" d="M734,1202.5c2.3,4.6-1.2,4.8-4.2,5.9c-0.6-3.7,3.7-3.3,4.2-5.7C734,1202.7,734,1202.5,734,1202.5z"/>
<path class="st1" d="M1935.2,780.3c-0.4,0.2-0.8,0.5-1.2,0.7c0.3,0,0.6,0.1,0.8,0c0.2-0.1,0.4-0.3,0.6-0.5
C1935.5,780.5,1935.2,780.3,1935.2,780.3z"/>
<path class="st1" d="M740,1203.1c-1.9-0.8-4.2,2.9-6-0.5c0,0,0,0.2,0,0.1C736,1202.8,738,1202.9,740,1203.1L740,1203.1z"/>
<path class="st1" d="M1935.7,777c1.4,1.3,0.9,2.4-0.5,3.3c0,0,0.3,0.3,0.3,0.3c0.1-1.2,0.3-2.3,0.4-3.5
C1935.8,777,1935.7,777,1935.7,777z"/>
<path class="st1" d="M737.9,778.4c0.2,0.2,0.4,0.5,0.6,0.7c-0.2,0.2-0.5,0.5-0.7,0.5c-0.2,0-0.4-0.4-0.6-0.6
C737.4,778.8,737.7,778.6,737.9,778.4z"/>
<path class="st1" d="M740,1203.1c0-0.3,0-0.8-0.2-1c-0.8-0.8-0.7-1.1,0.4-0.8C739.5,1201.9,739.4,1202.4,740,1203.1
C740,1203.1,740,1203.1,740,1203.1z"/>
<path class="st1" d="M1934.9,1202.3c0-0.2-0.1-0.3-0.1-0.5c0,0-0.1,0.1-0.1,0.1c0.1,0.2,0.1,0.4,0.2,0.6
C1934.8,1202.5,1934.9,1202.3,1934.9,1202.3z"/>
<path class="st1" d="M2072.4,516.6c0.7,0.4,1.4,0.8,2.1,1.2c-0.2,0.2-0.4,0.7-0.6,0.7c-1.1-0.2-1.5-1-1.4-2
C2072.4,516.4,2072.4,516.6,2072.4,516.6z"/>
<path class="st1" d="M598.7,518.3c-1-0.2-1.8-0.8-1-1.6c0.8-0.9,1.4,0,1.8,0.8C599.3,517.7,599,518,598.7,518.3z"/>
</g>
</svg>


docs/conf.py 100644

@@ -0,0 +1,105 @@
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
# Warn about all references to unknown targets
nitpicky = True
# The master toctree document.
master_doc = 'index'
# -- Project information -----------------------------------------------------
project = 'tractor'
copyright = '2018, Tyler Goodlet'
author = 'Tyler Goodlet'
# The full version, including alpha/beta/rc tags
release = '0.0.0a0.dev0'
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.intersphinx',
    'sphinx.ext.todo',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_book_theme'
pygments_style = 'algol_nu'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
    # 'logo': 'tractor_logo_side.svg',
    # 'description': 'Structured concurrent "actors"',
    "repository_url": "https://github.com/goodboy/tractor",
    "use_repository_button": True,
    "home_page_in_toc": False,
    "show_toc_level": 1,
    "path_to_docs": "docs",
}
html_sidebars = {
    "**": [
        "sbt-sidebar-nav.html",
        # "sidebar-search-bs.html",
        # 'localtoc.html',
    ],
    # 'logo.html',
    # 'github.html',
    # 'relations.html',
    # 'searchbox.html'
    # ]
}
# doesn't seem to work?
# extra_navbar = "<p>nextttt-gennnnn</p>"
html_title = ''
html_logo = '_static/tractor_logo_side.svg'
html_favicon = '_static/tractor_logo_side.svg'
# show_navbar_depth = 1
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {
    "python": ("https://docs.python.org/3", None),
    "pytest": ("https://docs.pytest.org/en/latest", None),
    "setuptools": ("https://setuptools.readthedocs.io/en/latest", None),
}

docs/dev_tips.rst 100644

@@ -0,0 +1,51 @@
Hot tips for ``tractor`` hackers
================================
This is a WIP guide for newcomers to the project mostly to do with
dev, testing, CI and release gotchas, reminders and best practices.

``tractor`` is a fairly novel project compared to most since it is
effectively a new way of doing distributed computing in Python and is
much closer to working with an "application level runtime" (like erlang
OTP or scala's akka project) than it is to a traditional Python library.
As such, having an arsenal of tools and recipes for figuring out the
right way to debug problems when they do arise is somewhat of
a necessity.
Making a Release
----------------
We currently do nothing special here except the traditional
PyPA release recipe as `documented by twine`_. I personally
create sub-dirs within the generated `dist/` with an explicit
release name such as `alpha3/` when there's been a sequence of
releases I've made, but it really is up to you how you like to
organize generated sdists locally.

The resulting build cmds are approximately:

.. code:: bash

    python setup.py sdist -d ./dist/XXX.X/
    twine upload -r testpypi dist/XXX.X/*
    twine upload dist/XXX.X/*

.. _documented by twine: https://twine.readthedocs.io/en/latest/#using-twine
Debugging and monitoring actor trees
------------------------------------
TODO; in the meantime there are tips in the readme for some terminal
commands which can be used to see the process trees easily on Linux.
Using the log system to trace `trio` task flow
----------------------------------------------
TODO: the logging system is meant to be oriented around
stack "layers" of the runtime such that you can track
"logical abstraction layers" in the code such as errors, cancellation,
IPC and streaming, and the low level transport and wire protocols.
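Until that section is written, here is a minimal sketch of the idea.
Hedged: the ``tractor.log.get_logger()`` helper and the ``loglevel``
kwarg are assumptions based on this era of the code base and may differ
in your installed version:

.. code:: python

    import trio
    import tractor
    from tractor.log import get_logger  # assumed helper location

    # one logger per module so records can be filtered by "layer"
    log = get_logger(__name__)

    async def main():
        # ``loglevel`` is assumed to be forwarded to the runtime;
        # raising it surfaces cancellation/IPC-layer records
        async with tractor.open_nursery(loglevel='debug'):
            log.debug('root nursery is up')

    if __name__ == '__main__':
        trio.run(main)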


@@ -0,0 +1,109 @@
tractor
=======
The Python async-native multi-core system *you always wanted*.
|gh_actions|
|docs|
.. _actor model: https://en.wikipedia.org/wiki/Actor_model
.. _trio: https://github.com/python-trio/trio
.. _multi-processing: https://en.wikipedia.org/wiki/Multiprocessing
.. _trionic: https://trio.readthedocs.io/en/latest/design.html#high-level-design-principles
.. _async sandwich: https://trio.readthedocs.io/en/latest/tutorial.html#async-sandwich
.. _structured concurrent: https://trio.discourse.group/t/concise-definition-of-structured-concurrency/228
``tractor`` is a `structured concurrent`_ "`actor model`_" built on trio_ and multi-processing_.
It is an attempt to pair trionic_ `structured concurrency`_ with
distributed Python. You can think of it as a ``trio``
*-across-processes* or simply as an opinionated replacement for the
stdlib's ``multiprocessing`` but built on async programming primitives
from the ground up.
Don't be scared off by this description. ``tractor`` **is just** ``trio``
but with nurseries for process management and cancel-able IPC.
If you understand how to work with ``trio``, ``tractor`` will give you
the parallelism you've been missing.
``tractor``'s nurseries let you spawn ``trio`` *"actors"*: new Python
processes which each run a ``trio`` scheduled task tree (also known as
an `async sandwich`_ - a call to ``trio.run()``). That is, each
"*Actor*" is a new process plus a ``trio`` runtime.
"Actors" communicate by exchanging asynchronous messages_ and avoid
sharing state. The intention of this model is to allow for highly
distributed software that, through the adherence to *structured
concurrency*, results in systems which fail in predictable and
recoverable ways.
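For a feel of what that message exchange looks like, here is a hedged
sketch of one actor consuming an async-generator stream from another;
``stream_squares()`` is a hypothetical helper, and the module/name
style of ``Portal.run()`` plus the ``enable_modules`` kwarg are assumed
to match this era of the API:

.. code:: python

    import trio
    import tractor

    async def stream_squares(limit: int):
        # an async generator invoked via a portal becomes an
        # inter-actor stream
        for i in range(limit):
            yield i ** 2
            await trio.sleep(0.1)

    async def main():
        async with tractor.open_nursery() as n:
            portal = await n.start_actor(
                'squarer',
                enable_modules=[__name__],  # expose this module for RPC
            )
            # each yielded value arrives as a message from the subactor
            async for sq in await portal.run(__name__, 'stream_squares', limit=5):
                print(sq)
            await portal.cancel_actor()

    trio.run(main)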
The first step to grok ``tractor`` is to get the basics of ``trio`` down.
A great place to start is the `trio docs`_ and this `blog post`_.
.. _messages: https://en.wikipedia.org/wiki/Message_passing
.. _trio docs: https://trio.readthedocs.io/en/latest/
.. _blog post: https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/
.. _structured concurrency: https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/
.. _3 axioms: https://en.wikipedia.org/wiki/Actor_model#Fundamental_concepts
.. _unrequirements: https://en.wikipedia.org/wiki/Actor_model#Direct_communication_and_asynchrony
.. _async generators: https://www.python.org/dev/peps/pep-0525/
Install
-------
No PyPi release yet!
::

    pip install git+git://github.com/goodboy/tractor.git
Alluring Features
-----------------
- **It's just** ``trio``, but with SC applied to processes (aka "actors")
- Infinitely nestable process trees
- Built-in API for inter-process streaming
- A (first ever?) "native" multi-core debugger for Python using `pdb++`_
- (Soon to land) ``asyncio`` support allowing for "infected" actors where
`trio` drives the `asyncio` scheduler via the astounding "`guest mode`_"
Example: self-destruct a process tree
-------------------------------------
.. literalinclude:: ../../examples/parallelism/we_are_processes.py
    :language: python
The example you're probably after...
------------------------------------
It seems the initial query from most new users is "how do I make a worker
pool thing?".
``tractor`` is built to handle any SC process tree you can
imagine; the "worker pool" pattern is a trivial special case:
.. literalinclude:: ../../examples/parallelism/concurrent_actors_primes.py
    :language: python
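Since the included example file isn't rendered on this page, here is
a rough, hypothetical sketch of the shape such a "pool" takes (the real
``concurrent_actors_primes.py`` differs; ``run_in_actor()``'s kwarg
forwarding and ``name`` kwarg are assumptions):

.. code:: python

    import trio
    import tractor

    def is_prime(number: int) -> bool:
        # a naive CPU-bound check; the kind of work worth farming out
        return number > 1 and all(
            number % i for i in range(2, int(number ** 0.5) + 1)
        )

    async def main():
        candidates = [1_000_003, 1_000_033, 1_000_037, 1_000_039]
        async with tractor.open_nursery() as n:
            portals = [
                # one subactor per job; a real pool would cap the count
                await n.run_in_actor(is_prime, number=x, name=f'worker_{i}')
                for i, x in enumerate(candidates)
            ]
            for x, portal in zip(candidates, portals):
                print(x, await portal.result())

    trio.run(main)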
Feel like saying hi?
--------------------
This project is very much coupled to the ongoing development of
``trio`` (i.e. ``tractor`` gets most of its ideas from that brilliant
community). If you want to help, have suggestions or just want to
say hi, please feel free to reach us in our `matrix channel`_. If
matrix seems too hip, we're also mostly all in the `trio gitter
channel`_!
.. _trio gitter channel: https://gitter.im/python-trio/general
.. _matrix channel: https://matrix.to/#/!tractor:matrix.org
.. _pdb++: https://github.com/pdbpp/pdbpp
.. _guest mode: https://trio.readthedocs.io/en/stable/reference-lowlevel.html?highlight=guest%20mode#using-guest-mode-to-run-trio-on-top-of-other-event-loops
.. |gh_actions| image:: https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fgoodboy%2Ftractor%2Fbadge&style=popout-square
:target: https://actions-badge.atrox.dev/goodboy/tractor/goto
.. |docs| image:: https://readthedocs.org/projects/tractor/badge/?version=latest
:target: https://tractor.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status


@@ -0,0 +1,51 @@
# Configuration file for the Sphinx documentation builder.
# this config is for the rst generation extension and thus
# requires only basic settings:
# https://github.com/sphinx-contrib/restbuilder
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
# Warn about all references to unknown targets
nitpicky = True
# The master toctree document.
master_doc = '_sphinx_readme'
# -- Project information -----------------------------------------------------
project = 'tractor'
copyright = '2018, Tyler Goodlet'
author = 'Tyler Goodlet'
# The full version, including alpha/beta/rc tags
release = '0.0.0a0.dev0'
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.intersphinx',
    'sphinx.ext.todo',
    'sphinxcontrib.restbuilder',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']


@@ -1,68 +1,43 @@
-tractor
-=======
-An async-native `actor model`_ built on trio_ and multiprocessing_.
-
-|travis|
-
-.. |travis| image:: https://img.shields.io/travis/tgoodlet/tractor/master.svg
-    :target: https://travis-ci.org/tgoodlet/tractor
+.. tractor documentation master file, created by
+   sphinx-quickstart on Sun Feb 9 22:26:51 2020.
+   You can adapt this file completely to your liking, but it should at least
+   contain the root `toctree` directive.
+
+``tractor``
+===========
+A `structured concurrent`_, async-native "`actor model`_" built on trio_ and multiprocessing_.
+
+.. toctree::
+   :maxdepth: 1
+   :caption: Contents:
 
 .. _actor model: https://en.wikipedia.org/wiki/Actor_model
 .. _trio: https://github.com/python-trio/trio
-.. _multiprocessing: https://docs.python.org/3/library/multiprocessing.html
+.. _multiprocessing: https://en.wikipedia.org/wiki/Multiprocessing
 .. _trionic: https://trio.readthedocs.io/en/latest/design.html#high-level-design-principles
 .. _async sandwich: https://trio.readthedocs.io/en/latest/tutorial.html#async-sandwich
-.. _always propagate: https://trio.readthedocs.io/en/latest/design.html#exceptions-always-propagate
-.. _causality: https://vorpus.org/blog/some-thoughts-on-asynchronous-api-design-in-a-post-asyncawait-world/#c-c-c-c-causality-breaker
-.. _shared nothing architecture: https://en.wikipedia.org/wiki/Shared-nothing_architecture
-.. _cancellation: https://trio.readthedocs.io/en/latest/reference-core.html#cancellation-and-timeouts
-.. _channels: https://en.wikipedia.org/wiki/Channel_(programming)
-.. _chaos engineering: http://principlesofchaos.org/
+.. _structured concurrent: https://trio.discourse.group/t/concise-definition-of-structured-concurrency/228
 
-``tractor`` is an attempt to bring trionic_ `structured concurrency`_ to distributed multi-core Python.
+``tractor`` is an attempt to bring trionic_ `structured concurrency`_ to
+distributed multi-core Python; it aims to be the Python multi-processing
+framework *you always wanted*.
 
-``tractor`` lets you run and spawn *actors*: processes which each run a ``trio``
-scheduler and task tree (also known as an `async sandwich`_).
-*Actors* communicate by sending messages_ over channels_ and avoid sharing any local state.
-This `actor model`_ allows for highly distributed software architecture which works just as
-well on multiple cores as it does over many hosts.
-``tractor`` takes much inspiration from pulsar_ and execnet_ but attempts to be much more
-focussed on sophistication of the lower level distributed architecture as well as have first
-class support for `modern async Python`_.
+``tractor`` lets you spawn ``trio`` *"actors"*: processes which each run
+a ``trio`` scheduled task tree (also known as an `async sandwich`_).
+*Actors* communicate by exchanging asynchronous messages_ and avoid
+sharing any state. This model allows for highly distributed software
+architecture which works just as well on multiple cores as it does over
+many hosts.
 
-The first step to grok ``tractor`` is to get the basics of ``trio``
-down. A great place to start is the `trio docs`_ and this `blog post`_.
+The first step to grok ``tractor`` is to get the basics of ``trio`` down.
+A great place to start is the `trio docs`_ and this `blog post`_.
 
 .. _messages: https://en.wikipedia.org/wiki/Message_passing
 .. _trio docs: https://trio.readthedocs.io/en/latest/
 .. _blog post: https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/
 .. _structured concurrency: https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/
-.. _modern async Python: https://www.python.org/dev/peps/pep-0525/
-
-.. contents::
-
-Philosophy
-----------
-``tractor``'s tenets non-comprehensively include:
-
-- no spawning of processes *willy-nilly*; causality_ is paramount!
-- `shared nothing architecture`_
-- remote errors `always propagate`_ back to the caller
-- verbatim support for ``trio``'s cancellation_ system
-- no use of *proxy* objects to wrap RPC calls
-- an immersive debugging experience
-- anti-fragility through `chaos engineering`_
-
-.. warning:: ``tractor`` is in alpha-alpha and is expected to change rapidly!
-    Expect nothing to be set in stone. Your ideas about where it should go
-    are greatly appreciated!
-
-.. _pulsar: http://quantmind.github.io/pulsar/design.html
-.. _execnet: https://codespeak.net/execnet/
 
 Install
@@ -71,11 +46,69 @@ No PyPi release yet!
 ::
 
-    pip install git+git://github.com/tgoodlet/tractor.git
+    pip install git+git://github.com/goodboy/tractor.git
 
+Feel like saying hi?
+--------------------
+This project is very much coupled to the ongoing development of
+``trio`` (i.e. ``tractor`` gets all its ideas from that brilliant
+community). If you want to help, have suggestions or just want to
+say hi, please feel free to ping me on the `trio gitter channel`_!
+
+.. _trio gitter channel: https://gitter.im/python-trio/general
+
+Philosophy
+----------
+Our tenets non-comprehensively include:
+
+- strict adherence to the `concept-in-progress`_ of *structured concurrency*
+- no spawning of processes *willy-nilly*; causality_ is paramount!
+- (remote) errors `always propagate`_ back to the parent supervisor
+- verbatim support for ``trio``'s cancellation_ system
+- `shared nothing architecture`_
+- no use of *proxy* objects or shared references between processes
+- an immersive debugging experience
+- anti-fragility through `chaos engineering`_
+
+``tractor`` is an actor-model-*like* system in the sense that it adheres
+to the `3 axioms`_ but does not (yet) fulfil all "unrequirements_" in
+practice. It is an experiment in applying `structured concurrency`_
+constraints on a parallel processing system where multiple Python
+processes exist over many hosts but no process can outlive its parent.
+In `erlang` parlance, it is an architecture where every process has
+a mandatory supervisor enforced by the type system. The API design is
+almost exclusively inspired by trio_'s concepts and primitives (though
+we often lag a little). As a distributed computing system `tractor`
+attempts to place sophistication at the correct layer such that
+concurrency primitives are powerful yet simple, making it easy to build
+complex systems (you can build a "worker pool" architecture but it's
+definitely not required). There is first class support for inter-actor
+streaming using `async generators`_ and ongoing work toward a functional
+reactive style for IPC.
+
+.. warning:: ``tractor`` is in alpha-alpha and is expected to change rapidly!
+    Expect nothing to be set in stone. Your ideas about where it should go
+    are greatly appreciated!
+
+.. _concept-in-progress: https://trio.discourse.group/t/structured-concurrency-kickoff/55
+.. _3 axioms: https://en.wikipedia.org/wiki/Actor_model#Fundamental_concepts
+.. _unrequirements: https://en.wikipedia.org/wiki/Actor_model#Direct_communication_and_asynchrony
+.. _async generators: https://www.python.org/dev/peps/pep-0525/
+.. _always propagate: https://trio.readthedocs.io/en/latest/design.html#exceptions-always-propagate
+.. _causality: https://vorpus.org/blog/some-thoughts-on-asynchronous-api-design-in-a-post-asyncawait-world/#c-c-c-c-causality-breaker
+.. _shared nothing architecture: https://en.wikipedia.org/wiki/Shared-nothing_architecture
+.. _cancellation: https://trio.readthedocs.io/en/latest/reference-core.html#cancellation-and-timeouts
+.. _channels: https://en.wikipedia.org/wiki/Channel_(programming)
+.. _chaos engineering: http://principlesofchaos.org/
 
 Examples
 --------
+Note, if you are on Windows please be sure to see the :ref:`gotchas
+<windowsgotchas>` section before trying these.
 
 A trynamic first scene
@@ -83,49 +116,7 @@ A trynamic first scene
 Let's direct a couple *actors* and have them run their lines for
 the hip new film we're shooting:
 
-.. code:: python
-
-    import tractor
-    from functools import partial
-
-    _this_module = __name__
-    the_line = 'Hi my name is {}'
-
-    async def hi():
-        return the_line.format(tractor.current_actor().name)
-
-    async def say_hello(other_actor):
-        async with tractor.wait_for_actor(other_actor) as portal:
-            return await portal.run(_this_module, 'hi')
-
-    async def main():
-        """Main tractor entry point, the "master" process (for now
-        acts as the "director").
-        """
-        async with tractor.open_nursery() as n:
-            print("Alright... Action!")
-
-            donny = await n.run_in_actor(
-                'donny',
-                say_hello,
-                # arguments are always named
-                other_actor='gretchen',
-            )
-            gretchen = await n.run_in_actor(
-                'gretchen',
-                say_hello,
-                other_actor='donny',
-            )
-            print(await gretchen.result())
-            print(await donny.result())
-            print("CUTTTT CUUTT CUT!!! Donny!! You're supposed to say...")
-
-    tractor.run(main)
+.. literalinclude:: ../examples/a_trynamic_first_scene.py
 
 We spawn two *actors*, *donny* and *gretchen*.
 
 Each actor starts up and executes their *main task* defined by an
@@ -141,7 +132,7 @@ Actor spawning and causality
 ``tractor`` tries to take ``trio``'s concept of causal task lifetimes
 to multi-process land. Accordingly, ``tractor``'s *actor nursery* behaves
 similar to ``trio``'s nursery_. That is, ``tractor.open_nursery()``
-opens an ``ActorNursery`` which waits on spawned *actors* to complete
+opens an ``ActorNursery`` which **must** wait on spawned *actors* to complete
 (or error) in the same causal_ way ``trio`` waits on spawned subtasks.
 This includes errors from any one actor causing all other actors
 spawned by the same nursery to be cancelled_.
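The one-cancels-all behaviour described in the hunk above is easy to
demo in a toy script. A hedged sketch: ``tractor.RemoteActorError`` and
the ``name`` kwarg are assumptions for this era of the API, and on
newer versions the error may arrive wrapped in an exception group:

.. code:: python

    import trio
    import tractor

    async def boom():
        raise RuntimeError('kaboom')

    async def main():
        try:
            async with tractor.open_nursery() as n:
                await n.run_in_actor(boom, name='bad_actor')
                await n.run_in_actor(trio.sleep_forever, name='good_actor')
        except tractor.RemoteActorError as err:
            # the remote error propagates back to the parent and the
            # nursery cancels the sibling actor: one-cancels-all
            print('remote task failed:', err)

    trio.run(main)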
@@ -149,34 +140,11 @@ spawned by the same nursery to be cancelled_.
 To spawn an actor and run a function in it, open a *nursery block*
 and use the ``run_in_actor()`` method:
 
-.. code:: python
-
-    import tractor
-
-    def cellar_door():
-        return "Dang that's beautiful"
-
-    async def main():
-        """The main ``tractor`` routine.
-        """
-        async with tractor.open_nursery() as n:
-            portal = await n.run_in_actor('teacher', cellar_door)
-            # The ``async with`` will unblock here since the 'frank'
-            # actor has completed its main task ``movie_theatre_question()``.
-
-        print(await portal.result())
-
-    tractor.run(main)
+.. literalinclude:: ../examples/actor_spawning_and_causality.py
 
 What's going on?
 
-- an initial *actor* is started with ``tractor.run()`` and told to execute
+- an initial *actor* is started with ``trio.run()`` and told to execute
   its main task_: ``main()``
 
 - inside ``main()`` an actor is *spawned* using an ``ActorNursery`` and is told
@@ -186,11 +154,11 @@ What's going on?
   returned from ``nursery.run_in_actor()`` is used to communicate with
   the newly spawned *sub-actor*
 
-- the second actor, *frank*, in a new *process* running a new ``trio`` task_
+- the second actor, *some_linguist*, in a new *process* running a new ``trio`` task_
   then executes ``cellar_door()`` and returns its result over a *channel* back
   to the parent actor
 
-- the parent actor retrieves the subactor's (*frank*) *final result* using ``portal.result()``
+- the parent actor retrieves the subactor's *final result* using ``portal.result()``
   much like you'd expect from a future_.
 
 This ``run_in_actor()`` API should look very familiar to users of
@@ -209,39 +177,11 @@ method:
 method and act like an RPC daemon that runs indefinitely (the
 ``with tractor.open_nursery()`` won't exit) until cancelled_
 
-Had we wanted the latter form in our example it would have looked like:
+Here is a similar example using the latter method:
 
-.. code:: python
-
-    def movie_theatre_question():
-        """A question asked in a dark theatre, in a tangent
-        (errr, I mean different) process.
-        """
-        return 'have you ever seen a portal?'
-
-    async def main():
-        """The main ``tractor`` routine.
-        """
-        async with tractor.open_nursery() as n:
-            portal = await n.start_actor(
-                'frank',
-                # enable the actor to run funcs from this current module
-                rpc_module_paths=[__name__],
-            )
-
-            print(await portal.run(__name__, 'movie_theatre_question'))
-            # call the subactor a 2nd time
-            print(await portal.run(__name__, 'movie_theatre_question'))
-
-            # the async with will block here indefinitely waiting
-            # for our actor "frank" to complete, but since it's an
-            # "outlive_main" actor it will never end until cancelled
-            await portal.cancel_actor()
+.. literalinclude:: ../examples/actor_spawning_and_causality_with_daemon.py
 
-The ``rpc_module_paths`` `kwarg` above is a list of module path
+The ``enable_modules`` `kwarg` above is a list of module path
 strings that will be loaded and made accessible for execution in the
 remote actor through a call to ``Portal.run()``. For now this is
 a simple mechanism to restrict the functionality of the remote
@@ -263,8 +203,35 @@ to all others with ease over standard network protocols).
 .. _Executor: https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.Executor
 
-Async IPC using *portals*
-*************************
+Cancellation
+************
+``tractor`` supports ``trio``'s cancellation_ system verbatim.
+Cancelling a nursery block cancels all actors spawned by it.
+Eventually ``tractor`` plans to support different `supervision strategies`_ like ``erlang``.
+
+.. _supervision strategies: http://erlang.org/doc/man/supervisor.html#sup_flags
+
+Remote error propagation
+************************
+Any task invoked in a remote actor should ship any error(s) back to the calling
+actor where it is raised and expected to be dealt with. This way remote actors
+are never cancelled unless explicitly asked or there's a bug in ``tractor`` itself.
+
+.. literalinclude:: ../examples/remote_error_propagation.py
+
+You'll notice the nursery cancellation conducts a *one-cancels-all*
+supervisory strategy `exactly like trio`_. The plan is to add more
+`erlang strategies`_ in the near future by allowing nurseries to accept
+a ``Supervisor`` type.
+
+.. _exactly like trio: https://trio.readthedocs.io/en/latest/reference-core.html#cancellation-semantics
+.. _erlang strategies: http://learnyousomeerlang.com/supervisors
+
+IPC using *portals*
+*******************
 ``tractor`` introduces the concept of a *portal* which is an API
 borrowed_ from ``trio``. A portal may seem similar to the idea of
 a RPC future_ except a *portal* allows invoking remote *async* functions and
@ -282,57 +249,117 @@ Depending on the function type ``Portal.run()`` tries to
correctly interface exactly like a local version of the remote correctly interface exactly like a local version of the remote
built-in Python *function type*. Currently async functions, generators, built-in Python *function type*. Currently async functions, generators,
and regular functions are supported. Inspiration for this API comes and regular functions are supported. Inspiration for this API comes
from the way execnet_ does `remote function execution`_ but without `remote function execution`_ but without the client code being
the client code (necessarily) having to worry about the underlying concerned about the underlying channels_ system or shipping code
channels_ system or shipping code over the network. over the network.
This *portal* approach turns out to be paricularly exciting with the This *portal* approach turns out to be paricularly exciting with the
introduction of `asynchronous generators`_ in Python 3.6! It means that introduction of `asynchronous generators`_ in Python 3.6! It means that
actors can compose nicely in a data processing pipeline. actors can compose nicely in a data streaming pipeline.
As an example here's an actor that streams for 1 second from a remote async .. _exactly like trio: https://trio.readthedocs.io/en/latest/reference-core.html#cancellation-semantics
generator function running in a separate actor:
Streaming
*********
By now you've figured out that ``tractor`` lets you spawn process based
*actors* that can invoke cross-process (async) functions and all with
structured concurrency built in. But the **real cool stuff** is the
native support for cross-process *streaming*.
Asynchronous generators
+++++++++++++++++++++++
The default streaming function is simply an async generator definition.
Every value *yielded* from the generator is delivered to the calling
portal exactly like if you had invoked the function in-process meaning
you can ``async for`` to receive each value on the calling side.
As an example here's a parent actor that streams for 1 second from a
spawned subactor:
.. literalinclude:: ../examples/asynchronous_generators.py
By default async generator functions are treated as inter-actor
*streams* when invoked via a portal (how else could you really interface
with them anyway) so no special syntax to denote the streaming *service*
is necessary.
Channels and Contexts
+++++++++++++++++++++
If you aren't fond of having to write an async generator to stream data
between actors (or need something more flexible) you can instead use
a ``Context``. A context wraps an actor-local spawned task and
a ``Channel`` so that tasks executing across multiple processes can
stream data to one another using a low level, request oriented API.
A ``Channel`` wraps an underlying *transport* and *interchange* format
to enable *inter-actor-communication*. In its present state ``tractor``
uses TCP and msgpack_.
As an example if you wanted to create a streaming server without writing
an async generator that *yields* values you instead define a decorated
async function:
.. code:: python

    @tractor.stream
    async def streamer(ctx: tractor.Context, rate: int = 2) -> None:
        """A simple web response streaming server.
        """
        while True:
            val = await web_request('http://data.feed.com')

            # this is the same as ``yield`` in the async gen case
            await ctx.send_yield(val)

            await trio.sleep(1 / rate)


You must decorate the function with ``@tractor.stream`` and declare
a ``ctx`` argument as the first in your function signature; ``tractor``
will then treat the async function like an async generator - as
a stream from the calling/client side.

This turns out to be particularly handy if you have multiple tasks
pushing responses concurrently:

.. code:: python

    async def streamer(
        ctx: tractor.Context,
        url: str,
        rate: int = 2
    ) -> None:
        """A simple web response streaming server.
        """
        while True:
            val = await web_request(url)

            # this is the same as ``yield`` in the async gen case
            await ctx.send_yield(val)

            await trio.sleep(1 / rate)


    @tractor.stream
    async def stream_multiple_sources(
        ctx: tractor.Context,
        sources: List[str]
    ) -> None:
        async with trio.open_nursery() as n:
            for url in sources:
                n.start_soon(streamer, ctx, url)


The context notion comes from the context_ in nanomsg_.
.. _context: https://nanomsg.github.io/nng/man/tip/nng_ctx.5
.. _msgpack: https://en.wikipedia.org/wiki/MessagePack
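Note that the modern two-way flavour of this API pairs such a server
side function with an explicit client side handle via
``Portal.open_context()``; a rough sketch (mirroring the ping-pong
style example included further below):

.. code:: python

    import trio
    import tractor

    @tractor.context
    async def pinger(ctx: tractor.Context) -> None:
        # handshake back to the caller
        await ctx.started()

        async with ctx.open_stream() as stream:
            async for msg in stream:
                await stream.send(f'pong: {msg}')

    async def main():
        async with tractor.open_nursery() as n:
            portal = await n.start_actor(
                'pong_server',
                enable_modules=[__name__],
            )
            async with (
                portal.open_context(pinger) as (ctx, first),
                ctx.open_stream() as stream,
            ):
                await stream.send('ping')
                print(await stream.receive())

            await portal.cancel_actor()

    if __name__ == '__main__':
        trio.run(main)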
A full fledged streaming service
++++++++++++++++++++++++++++++++

Alright, let's get fancy.

Say you wanted to spawn two actors which each pull data feeds from
separate sources. You also want to aggregate these feeds, do some
processing on them and then deliver the final result stream to a client
(or in this case parent) actor and print the results to your screen:

.. literalinclude:: ../examples/full_fledged_streaming_service.py
import time
import trio
import tractor
# this is the first 2 actors, streamer_1 and streamer_2
async def stream_data(seed):
for i in range(seed):
yield i
await trio.sleep(0) # trigger scheduler
# this is the third actor; the aggregator
async def aggregate(seed):
"""Ensure that the two streams we receive match but only stream
a single set of values to the parent.
"""
async with tractor.open_nursery() as nursery:
portals = []
for i in range(1, 3):
# fork point
portal = await nursery.start_actor(
name=f'streamer_{i}',
rpc_module_paths=[__name__],
)
portals.append(portal)
send_chan, recv_chan = trio.open_memory_channel(500)
async def push_to_q(portal):
async for value in await portal.run(
__name__, 'stream_data', seed=seed
):
# leverage trio's built-in backpressure
await send_chan.send(value)
await send_chan.send(None)
print(f"FINISHED ITERATING {portal.channel.uid}")
# spawn 2 trio tasks to collect streams and push to a local queue
async with trio.open_nursery() as n:
for portal in portals:
n.start_soon(push_to_q, portal)
unique_vals = set()
async for value in recv_chan:
if value not in unique_vals:
unique_vals.add(value)
# yield upwards to the spawning parent actor
yield value
if value is None:
break
assert value in unique_vals
print("FINISHED ITERATING in aggregator")
await nursery.cancel()
print("WAITING on `ActorNursery` to finish")
print("AGGREGATOR COMPLETE!")
# this is the main actor and *arbiter*
async def main():
# a nursery which spawns "actors"
async with tractor.open_nursery() as nursery:
seed = int(1e3)
import time
pre_start = time.time()
portal = await nursery.run_in_actor(
'aggregator',
aggregate,
seed=seed,
)
start = time.time()
# the portal call returns exactly what you'd expect
# as if the remote "aggregate" function was called locally
result_stream = []
async for value in await portal.result():
result_stream.append(value)
print(f"STREAM TIME = {time.time() - start}")
print(f"STREAM + SPAWN TIME = {time.time() - pre_start}")
assert result_stream == list(range(seed)) + [None]
return result_stream
final_stream = tractor.run(main, arbiter_addr=('127.0.0.1', 1616))
Here there's four actors running in separate processes (using all the
cores on your machine). Two are streaming by *yielding* values from the
``stream_data()`` async generator, while a third aggregates them in
``aggregate()`` and ships the results up to the parent actor (the
*main* process as ``multiprocessing`` calls it) which is running ``main()``.

.. _asynchronous generators: https://www.python.org/dev/peps/pep-0525/
.. _remote function execution: https://codespeak.net/execnet/example/test_info.html#remote-exec-a-function-avoiding-inlined-source-part-i
Actor local (aka *process global*) variables
********************************************

Although ``tractor`` uses a *shared-nothing* architecture between
processes you can of course share state between tasks running *within*
an actor (since a `trio.run()` runtime is single threaded). ``trio``
tasks spawned via multiple RPC calls to an actor can modify
*process-global-state* defined using Python module attributes:

.. code:: python

    # a per process cache
    _actor_cache: dict[str, bool] = {}


    async def ping_endpoints(endpoints: List[str]):
        """Start a polling process which runs completely separate
        from our root actor/process.

        """
        # this runs in a new process so no changes will
        # propagate back to the parent actor
        while True:
            for ep in endpoints:
                status = await check_endpoint_is_up(ep)
                _actor_cache[ep] = status

            await trio.sleep(0.5)


    async def get_alive_endpoints():
        return {key for key, value in _actor_cache.items() if value}


    async def main():
        async with tractor.open_nursery() as n:
            portal = await n.run_in_actor(ping_endpoints)

            # print the alive endpoints after 3 seconds
            await trio.sleep(3)

            # this is submitted to be run in our "ping_endpoints" actor
            print(await portal.run(get_alive_endpoints))


You can pass any kind of (`msgpack`) serializable data between actors using
function call semantics but building out a state sharing system per-actor
is totally up to you.
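For example, a quick sketch of shipping a plain ``dict`` through
function call semantics:

.. code:: python

    import trio
    import tractor

    async def echo(msg: dict):
        # anything msgpack-serializable can cross the process boundary
        return msg

    async def main():
        async with tractor.open_nursery() as n:
            portal = await n.run_in_actor(
                echo,
                # arguments are always passed by name
                msg={'ponies': [1, 2, 3]},
            )
            print(await portal.result())

    if __name__ == '__main__':
        trio.run(main)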
Service Discovery
*****************

Though it will be built out much more in the near future, ``tractor``
currently keeps track of actors by ``(name: str, id: str)`` using a
special actor called the *arbiter*. Currently the *arbiter* must exist
somewhere in the system; it's a poor man's *service discovery*
mechanism but for now it does the trick.

To find the arbiter from the current actor use the ``get_arbiter()`` function and to
find an actor's socket address by name use the ``find_actor()`` function:

.. literalinclude:: ../examples/service_discovery.py

The ``name`` value you should pass to ``find_actor()`` is the one you passed as the
*first* argument to ``ActorNursery.start_actor()`` when spawning that actor.
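There's also ``wait_for_actor()`` which blocks until a matching actor
has registered (a quick sketch; see also the full example set further
below):

.. code:: python

    import trio
    import tractor

    async def main():
        async with tractor.open_nursery() as an:
            await an.start_actor('some_actor_name')

            # blocks until an actor with the matching name registers
            async with tractor.wait_for_actor('some_actor_name') as portal:
                print(f"found it on {portal.channel}")

            await an.cancel()

    if __name__ == '__main__':
        trio.run(main)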
Streaming using channels and contexts
*************************************
``Channel`` is the API which wraps an underlying *transport* and *interchange*
format to enable *inter-actor-communication*. In its present state ``tractor``
uses TCP and msgpack_.
If you aren't fond of having to write an async generator to stream data
between actors (or need something more flexible) you can instead use a
``Context``. A context wraps an actor-local spawned task and a ``Channel``
so that tasks executing across multiple processes can stream data
to one another using a low level, request oriented API.
As an example if you wanted to create a streaming server without writing
an async generator that *yields* values you instead define an async
function:
.. code:: python
async def streamer(ctx, rate=2):
"""A simple web response streaming server.
"""
while True:
val = await web_request('http://data.feed.com')
# this is the same as ``yield`` in the async gen case
await ctx.send_yield(val)
await trio.sleep(1 / rate)
All that's required is declaring a ``ctx`` argument name somewhere in
your function signature and ``tractor`` will treat the async function
like an async generator - as a streaming function from the client side.
This turns out to be handy particularly if you have
multiple tasks streaming responses concurrently:
.. code:: python
async def streamer(ctx, url, rate=2):
"""A simple web response streaming server.
"""
while True:
val = await web_request(url)
# this is the same as ``yield`` in the async gen case
await ctx.send_yield(val)
await trio.sleep(1 / rate)
async def stream_multiple_sources(ctx, sources):
async with trio.open_nursery() as n:
for url in sources:
n.start_soon(streamer, ctx, url)
The context notion comes from the context_ in nanomsg_.
Running actors standalone
*************************

Sometimes you want to run a lone actor that simply connects to an
existing system, say when you need to hop into a debugger. You just
need to pass the existing *arbiter*'s socket address you'd like to
connect to:

.. code:: python

    import trio
    import tractor

    async def main():
        async with tractor.open_root_actor(
            arbiter_addr=('192.168.0.10', 1616)
        ):
            await trio.sleep_forever()

    trio.run(main)
Choosing a process spawning backend
***********************************
``tractor`` is architected to support multiple actor (sub-process)
spawning backends. Specific defaults are chosen based on your system
but you can also explicitly select a backend of choice at startup
via a ``start_method`` kwarg to ``tractor.open_nursery()``.
Currently the options available are:
- ``trio``: a ``trio``-native spawner which is an async wrapper around ``subprocess``
- ``spawn``: one of the stdlib's ``multiprocessing`` `start methods`_
- ``forkserver``: a faster ``multiprocessing`` variant that is Unix only
.. _start methods: https://docs.python.org/3.8/library/multiprocessing.html#contexts-and-start-methods
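For example, to pick one explicitly (a sketch; omit the kwarg to get
the per-platform default):

.. code:: python

    import trio
    import tractor

    async def main():
        # explicitly select the stdlib's ``spawn`` backend
        async with tractor.open_nursery(start_method='spawn') as n:
            portal = await n.start_actor('spawned_via_mp')
            await portal.cancel_actor()

    if __name__ == '__main__':
        trio.run(main)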
``trio``
++++++++
The ``trio`` backend offers a lightweight async wrapper around ``subprocess`` from the standard library and takes advantage of ``trio``'s `open_process`_ API.
.. _open_process: https://trio.readthedocs.io/en/stable/reference-io.html#spawning-subprocesses
``multiprocessing``
+++++++++++++++++++
There is support for the stdlib's ``multiprocessing`` `start methods`_.
Note that on Windows *spawn* is the only supported method and on \*nix
systems *forkserver* is the best method for speed but has the caveat
that it will break easily (hangs due to broken pipes) if spawning actors
using nested nurseries.
In general, the ``multiprocessing`` backend **has not proven reliable**
for handling errors from actors more then 2 nurseries *deep* (see `#89`_).
If you for some reason need this consider sticking with alternative
backends.
.. _#89: https://github.com/goodboy/tractor/issues/89
.. _windowsgotchas:
Windows "gotchas"
^^^^^^^^^^^^^^^^^
On Windows (which requires the use of the stdlib's ``multiprocessing``
package) there are some gotchas. Namely, the need for calling
`freeze_support()`_ inside the ``__main__`` context. Additionally you
may need to place your ``tractor`` program entry point in a separate
``__main__.py`` module in your package in order to avoid an error like the
following ::
Traceback (most recent call last):
File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 234, in _get_rpc_func
return getattr(self._mods[ns], funcname)
KeyError: '__mp_main__'
To avoid this, the following is the **only code** that should be in your
program's main Python module:
.. code:: python
# application/__main__.py
import trio
import tractor
import multiprocessing
from . import tractor_app
if __name__ == '__main__':
multiprocessing.freeze_support()
trio.run(tractor_app.main)
And execute as::
python -m application
As an example we use the following code to test all documented examples
in the test suite on Windows:
.. literalinclude:: ../examples/__main__.py
See `#61`_ and `#79`_ for further details.
.. _freeze_support(): https://docs.python.org/3/library/multiprocessing.html#multiprocessing.freeze_support
.. _#61: https://github.com/goodboy/tractor/pull/61#issuecomment-470053512
.. _#79: https://github.com/goodboy/tractor/pull/79
Enabling logging
****************


What the future holds
---------------------

Stuff I'd like to see ``tractor`` do real soon:

- TLS_, duh.
- erlang-like supervisors_
- native support for `nanomsg`_ as a channel transport
- native `gossip protocol`_ support for service discovery and arbiter election
- … but with better `pdb++`_ support
- an extensive `chaos engineering`_ test suite
- support for reactive programming primitives and native support for asyncitertools_ like libs
- introduction of a `capability-based security`_ model

.. _TLS: https://trio.readthedocs.io/en/latest/reference-io.html#ssl-tls-support
.. _supervisors: https://github.com/goodboy/tractor/issues/22
.. _nanomsg: https://nanomsg.github.io/nng/index.html
.. _gossip protocol: https://en.wikipedia.org/wiki/Gossip_protocol
.. _celery: http://docs.celeryproject.org/en/latest/userguide/debugging.html
.. _asyncitertools: https://github.com/vodik/asyncitertools
.. _pdb++: https://github.com/antocuni/pdb
.. _capability-based security: https://en.wikipedia.org/wiki/Capability-based_security
@ -0,0 +1,4 @@
#!/bin/bash
sphinx-build -b rst ./github_readme ./
mv _sphinx_readme.rst _README.rst
@ -0,0 +1,19 @@
"""
Needed on Windows.
This module is needed as the program entry point for invocation
with ``python -m <modulename>``. See the solution from @chrizzFTD
here:
https://github.com/goodboy/tractor/pull/61#issuecomment-470053512
"""
if __name__ == '__main__':
import multiprocessing
multiprocessing.freeze_support()
# ``tests/test_docs_examples.py::test_example`` will copy each
# script from this examples directory into a module in a new
# temporary dir and name it test_example.py. We import that script
# module here and invoke its ``main()``.
from . import test_example
test_example.trio.run(test_example.main)
@ -0,0 +1,44 @@
import trio
import tractor
_this_module = __name__
the_line = 'Hi my name is {}'
tractor.log.get_console_log("INFO")
async def hi():
return the_line.format(tractor.current_actor().name)
async def say_hello(other_actor):
async with tractor.wait_for_actor(other_actor) as portal:
return await portal.run(hi)
async def main():
"""Main tractor entry point, the "master" process (for now
acts as the "director").
"""
async with tractor.open_nursery() as n:
print("Alright... Action!")
donny = await n.run_in_actor(
say_hello,
name='donny',
# arguments are always named
other_actor='gretchen',
)
gretchen = await n.run_in_actor(
say_hello,
name='gretchen',
other_actor='donny',
)
print(await gretchen.result())
print(await donny.result())
print("CUTTTT CUUTT CUT!!! Donny!! You're supposed to say...")
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,27 @@
import trio
import tractor
async def cellar_door():
assert not tractor.is_root_process()
return "Dang that's beautiful"
async def main():
"""The main ``tractor`` routine.
"""
async with tractor.open_nursery() as n:
portal = await n.run_in_actor(
cellar_door,
name='some_linguist',
)
# The ``async with`` will unblock here since the 'some_linguist'
# actor has completed its main task ``cellar_door``.
print(await portal.result())
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,34 @@
import trio
import tractor
async def movie_theatre_question():
"""A question asked in a dark theatre, in a tangent
(errr, I mean different) process.
"""
return 'have you ever seen a portal?'
async def main():
"""The main ``tractor`` routine.
"""
async with tractor.open_nursery() as n:
portal = await n.start_actor(
'frank',
# enable the actor to run funcs from this current module
enable_modules=[__name__],
)
print(await portal.run(movie_theatre_question))
# call the subactor a 2nd time
print(await portal.run(movie_theatre_question))
# the async with will block here indefinitely waiting
# for our actor "frank" to complete, but since it's an
# "outlive_main" actor it will never end until cancelled
await portal.cancel_actor()
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,151 @@
'''
Complex edge case where during real-time streaming the IPC transport
channels are wiped out (purposely in this example though it could have
been an outage) and we want to ensure that despite being in debug mode
(or not) the user can send SIGINT once they notice the hang and the
actor tree will eventually be cancelled without leaving any zombies.
'''
import trio
from tractor import (
open_nursery,
context,
Context,
MsgStream,
)
async def break_channel_silently_then_error(
stream: MsgStream,
):
async for msg in stream:
await stream.send(msg)
# XXX: close the channel right after an error is raised
# purposely breaking the IPC transport to make sure the parent
# doesn't get stuck in debug or hang on the connection join.
# this more or less simulates an infinite msg-receive hang on
# the other end.
await stream._ctx.chan.send(None)
assert 0
async def close_stream_and_error(
stream: MsgStream,
):
async for msg in stream:
await stream.send(msg)
# wipe out channel right before raising
await stream._ctx.chan.send(None)
await stream.aclose()
assert 0
@context
async def recv_and_spawn_net_killers(
ctx: Context,
break_ipc_after: bool | int = False,
) -> None:
'''
Receive stream msgs and spawn some IPC killers mid-stream.
'''
await ctx.started()
async with (
ctx.open_stream() as stream,
trio.open_nursery() as n,
):
async for i in stream:
print(f'child echoing {i}')
await stream.send(i)
if (
break_ipc_after
and i > break_ipc_after
):
print(
'#################################\n'
'Simulating child-side IPC BREAK!\n'
'#################################'
)
n.start_soon(break_channel_silently_then_error, stream)
n.start_soon(close_stream_and_error, stream)
async def main(
debug_mode: bool = False,
start_method: str = 'trio',
# by default we break the parent IPC first (if configured to break
# at all), but this can be changed so the child does first (even if
# both are set to break).
break_parent_ipc_after: int | bool = False,
break_child_ipc_after: int | bool = False,
) -> None:
async with (
open_nursery(
start_method=start_method,
# NOTE: even when the debugger is used we shouldn't get
# a hang since it never engages due to broken IPC
debug_mode=debug_mode,
loglevel='warning',
) as an,
):
portal = await an.start_actor(
'chitty_hijo',
enable_modules=[__name__],
)
async with portal.open_context(
recv_and_spawn_net_killers,
break_ipc_after=break_child_ipc_after,
) as (ctx, sent):
async with ctx.open_stream() as stream:
for i in range(1000):
if (
break_parent_ipc_after
and i > break_parent_ipc_after
):
print(
'#################################\n'
'Simulating parent-side IPC BREAK!\n'
'#################################'
)
await stream._ctx.chan.send(None)
# it actually breaks right here in the
# mp_spawn/forkserver backends and thus the zombie
# reaper never even kicks in?
print(f'parent sending {i}')
await stream.send(i)
with trio.move_on_after(2) as cs:
# NOTE: in the parent side IPC failure case this
# will raise an ``EndOfChannel`` after the child
# is killed and sends a stop msg back to its
# caller/this-parent.
rx = await stream.receive()
print(f"I'm a happy user and echoed to me is {rx}")
if cs.cancelled_caught:
# pretend to be a user seeing no streaming action
# thinking it's a hang, and then hitting ctl-c..
print("YOO i'm a user anddd thingz hangin..")
print(
"YOO i'm mad send side dun but thingz hangin..\n"
'MASHING CTlR-C Ctl-c..'
)
raise KeyboardInterrupt
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,42 @@
from typing import AsyncIterator
from itertools import repeat
import trio
import tractor
async def stream_forever() -> AsyncIterator[int]:
for i in repeat("I can see these little future bubble things"):
# each yielded value is sent over the ``Channel`` to the parent actor
yield i
await trio.sleep(0.01)
async def main():
async with tractor.open_nursery() as n:
portal = await n.start_actor(
'donny',
enable_modules=[__name__],
)
# this async for loop streams values from the above
# async generator running in a separate process
async with portal.open_stream_from(stream_forever) as stream:
count = 0
async for letter in stream:
print(letter)
count += 1
if count > 50:
break
print('stream terminated')
await portal.cancel_actor()
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,54 @@
'''
Fast fail test with a context.
Ensure the partially initialized sub-actor process
doesn't cause a hang on error/cancel of the parent
nursery.
'''
import trio
import tractor
@tractor.context
async def sleep(
ctx: tractor.Context,
):
await trio.sleep(0.5)
await ctx.started()
await trio.sleep_forever()
async def open_ctx(
n: tractor._supervise.ActorNursery
):
# spawn both actors
portal = await n.start_actor(
name='sleeper',
enable_modules=[__name__],
)
async with portal.open_context(
sleep,
) as (ctx, first):
assert first is None
async def main():
async with tractor.open_nursery(
debug_mode=True,
loglevel='runtime',
) as an:
async with trio.open_nursery() as n:
n.start_soon(open_ctx, an)
await trio.sleep(0.2)
await trio.sleep(0.1)
assert 0
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,45 @@
import tractor
import trio
async def breakpoint_forever():
"Indefinitely re-enter debugger in child actor."
while True:
yield 'yo'
await tractor.breakpoint()
async def name_error():
"Raise a ``NameError``"
getattr(doggypants) # noqa
async def main():
"""Test breakpoint in a streaming actor.
"""
async with tractor.open_nursery(
debug_mode=True,
loglevel='error',
) as n:
p0 = await n.start_actor('bp_forever', enable_modules=[__name__])
p1 = await n.start_actor('name_error', enable_modules=[__name__])
# retrieve results
async with p0.open_stream_from(breakpoint_forever) as stream:
# triggers the first name error
try:
await p1.run(name_error)
except tractor.RemoteActorError as rae:
assert rae.type is NameError
async for i in stream:
# a second time try the failing subactor and this time
# let error propagate up to the parent/nursery.
await p1.run(name_error)
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,98 @@
import trio
import tractor
async def name_error():
"Raise a ``NameError``"
getattr(doggypants) # noqa
async def breakpoint_forever():
"Indefinitely re-enter debugger in child actor."
while True:
await tractor.breakpoint()
# NOTE: if the test never sent 'q'/'quit' commands
# on the pdb repl, without this checkpoint line the
# repl would spin in this actor forever.
# await trio.sleep(0)
async def spawn_until(depth=0):
""""A nested nursery that triggers another ``NameError``.
"""
async with tractor.open_nursery() as n:
if depth < 1:
await n.run_in_actor(breakpoint_forever)
p = await n.run_in_actor(
name_error,
name='name_error'
)
await trio.sleep(0.5)
# rx and propagate error from child
await p.result()
else:
# recursive call to spawn another process branching layer of
# the tree
depth -= 1
await n.run_in_actor(
spawn_until,
depth=depth,
name=f'spawn_until_{depth}',
)
async def main():
"""The main ``tractor`` routine.
The process tree should look approximately as follows when the debugger
first engages:
python examples/debugging/multi_nested_subactors_bp_forever.py
python -m tractor._child --uid ('spawner1', '7eab8462 ...)
python -m tractor._child --uid ('spawn_until_3', 'afcba7a8 ...)
python -m tractor._child --uid ('spawn_until_2', 'd2433d13 ...)
python -m tractor._child --uid ('spawn_until_1', '1df589de ...)
python -m tractor._child --uid ('spawn_until_0', '3720602b ...)
python -m tractor._child --uid ('spawner0', '1d42012b ...)
python -m tractor._child --uid ('spawn_until_2', '2877e155 ...)
python -m tractor._child --uid ('spawn_until_1', '0502d786 ...)
python -m tractor._child --uid ('spawn_until_0', 'de918e6d ...)
"""
async with tractor.open_nursery(
debug_mode=True,
# loglevel='cancel',
) as n:
# spawn both actors
portal = await n.run_in_actor(
spawn_until,
depth=3,
name='spawner0',
)
portal1 = await n.run_in_actor(
spawn_until,
depth=4,
name='spawner1',
)
# TODO: test this case as well where the parent doesn't see
# the sub-actor errors by default and instead expect a user
# ctrl-c to kill the root.
with trio.move_on_after(3):
await trio.sleep_forever()
# gah still an issue here.
await portal.result()
# should never get here
await portal1.result()
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,66 @@
'''
Test that a nested nursery will avoid clobbering
the debugger latched by a broken child.
'''
import trio
import tractor
async def name_error():
"Raise a ``NameError``"
getattr(doggypants) # noqa
async def spawn_error():
""""A nested nursery that triggers another ``NameError``.
"""
async with tractor.open_nursery() as n:
portal = await n.run_in_actor(
name_error,
name='name_error_1',
)
return await portal.result()
async def main():
"""The main ``tractor`` routine.
The process tree should look approximately as follows:
python examples/debugging/multi_subactors.py
python -m tractor._child --uid ('name_error', 'a7caf490 ...)
`-python -m tractor._child --uid ('spawn_error', '52ee14a5 ...)
`-python -m tractor._child --uid ('name_error', '3391222c ...)
Order of failure:
- nested name_error sub-sub-actor
- root actor should then fail on assert
- program termination
"""
async with tractor.open_nursery(
debug_mode=True,
# loglevel='cancel',
) as n:
# spawn both actors
portal = await n.run_in_actor(
name_error,
name='name_error',
)
portal1 = await n.run_in_actor(
spawn_error,
name='spawn_error',
)
# trigger a root actor error
assert 0
# attempt to collect results (which raises error in parent)
# still has some issues where the parent seems to get stuck
await portal.result()
await portal1.result()
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,52 @@
import tractor
import trio
async def breakpoint_forever():
"Indefinitely re-enter debugger in child actor."
while True:
await trio.sleep(0.1)
await tractor.breakpoint()
async def name_error():
"Raise a ``NameError``"
getattr(doggypants) # noqa
async def spawn_error():
""""A nested nursery that triggers another ``NameError``.
"""
async with tractor.open_nursery() as n:
portal = await n.run_in_actor(
name_error,
name='name_error_1',
)
return await portal.result()
async def main():
"""The main ``tractor`` routine.
The process tree should look approximately as follows:
-python examples/debugging/multi_subactors.py
|-python -m tractor._child --uid ('name_error', 'a7caf490 ...)
|-python -m tractor._child --uid ('bp_forever', '1f787a7e ...)
`-python -m tractor._child --uid ('spawn_error', '52ee14a5 ...)
`-python -m tractor._child --uid ('name_error', '3391222c ...)
"""
async with tractor.open_nursery(
debug_mode=True,
) as n:
# Spawn both actors, don't bother with collecting results
# (would result in a different debugger outcome due to parent's
# cancellation).
await n.run_in_actor(breakpoint_forever)
await n.run_in_actor(name_error)
await n.run_in_actor(spawn_error)
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,40 @@
import trio
import tractor
@tractor.context
async def just_sleep(
ctx: tractor.Context,
**kwargs,
) -> None:
'''
Start and sleep.
'''
await ctx.started()
await trio.sleep_forever()
async def main() -> None:
async with tractor.open_nursery(
debug_mode=True,
) as n:
portal = await n.start_actor(
'ctx_child',
# XXX: we don't enable the current module in order
# to trigger `ModuleNotFound`.
enable_modules=[],
)
async with portal.open_context(
just_sleep, # taken from pytest parameterization
) as (ctx, sent):
raise KeyboardInterrupt
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,27 @@
import trio
import tractor
async def die():
raise RuntimeError
async def main():
async with tractor.open_nursery() as tn:
debug_actor = await tn.start_actor(
'debugged_boi',
enable_modules=[__name__],
debug_mode=True,
)
crash_boi = await tn.start_actor(
'crash_boi',
enable_modules=[__name__],
# debug_mode=True,
)
async with trio.open_nursery() as n:
n.start_soon(debug_actor.run, die)
n.start_soon(crash_boi.run, die)
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,24 @@
import os
import sys
import trio
import tractor
async def main() -> None:
async with tractor.open_nursery(debug_mode=True) as an:
assert os.environ['PYTHONBREAKPOINT'] == 'tractor._debug._set_trace'
# TODO: an assert that verifies the hook has indeed been hooked
# XD
assert sys.breakpointhook is not tractor._debug._set_trace
breakpoint()
# TODO: an assert that verifies the hook is unhooked..
assert sys.breakpointhook
breakpoint()
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,19 @@
import trio
import tractor
async def main():
async with tractor.open_root_actor(
debug_mode=True,
):
await trio.sleep(0.1)
await tractor.breakpoint()
await trio.sleep(0.1)
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,15 @@
import trio
import tractor
async def main():
async with tractor.open_root_actor(
debug_mode=True,
):
while True:
await tractor.breakpoint()
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,13 @@
import trio
import tractor
async def main():
async with tractor.open_root_actor(
debug_mode=True,
):
assert 0
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,65 @@
import trio
import tractor
async def name_error():
"Raise a ``NameError``"
getattr(doggypants) # noqa
async def spawn_until(depth=0):
""""A nested nursery that triggers another ``NameError``.
"""
async with tractor.open_nursery() as n:
if depth < 1:
# await n.run_in_actor('breakpoint_forever', breakpoint_forever)
await n.run_in_actor(name_error)
else:
depth -= 1
await n.run_in_actor(
spawn_until,
depth=depth,
name=f'spawn_until_{depth}',
)
async def main():
"""The main ``tractor`` routine.
The process tree should look approximately as follows when the debugger
first engages:
python examples/debugging/multi_nested_subactors_bp_forever.py
python -m tractor._child --uid ('spawner1', '7eab8462 ...)
python -m tractor._child --uid ('spawn_until_0', '3720602b ...)
python -m tractor._child --uid ('name_error', '505bf71d ...)
python -m tractor._child --uid ('spawner0', '1d42012b ...)
python -m tractor._child --uid ('name_error', '6c2733b8 ...)
"""
async with tractor.open_nursery(
debug_mode=True,
loglevel='warning'
) as n:
# spawn both actors
portal = await n.run_in_actor(
spawn_until,
depth=0,
name='spawner0',
)
portal1 = await n.run_in_actor(
spawn_until,
depth=1,
name='spawner1',
)
# nursery cancellation should be triggered due to propagated
# error from child.
await portal.result()
await portal1.result()
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,31 @@
import trio
import tractor
async def key_error():
"Raise a ``NameError``"
return {}['doggy']
async def main():
"""Root dies
"""
async with tractor.open_nursery(
debug_mode=True,
loglevel='debug'
) as n:
# spawn both actors
portal = await n.run_in_actor(key_error)
# XXX: originally a bug caused by this is where root would enter
# the debugger and clobber the tty used by the repl even though
# child should have it locked.
with trio.fail_after(1):
await trio.Event().wait()
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,50 @@
import tractor
import trio
async def gen():
yield 'yo'
await tractor.breakpoint()
yield 'yo'
await tractor.breakpoint()
@tractor.context
async def just_bp(
ctx: tractor.Context,
) -> None:
await ctx.started()
await tractor.breakpoint()
# TODO: bps and errors in this call..
async for val in gen():
print(val)
# await trio.sleep(0.5)
# prematurely destroy the connection
await ctx.chan.aclose()
# THIS CAUSES AN UNRECOVERABLE HANG
# without latest ``pdbpp``:
assert 0
async def main():
async with tractor.open_nursery(
debug_mode=True,
) as n:
p = await n.start_actor(
'bp_boi',
enable_modules=[__name__],
)
async with p.open_context(
just_bp,
) as (ctx, first):
await trio.sleep_forever()
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,26 @@
import trio
import tractor
async def breakpoint_forever():
"""Indefinitely re-enter debugger in child actor.
"""
while True:
await trio.sleep(0.1)
await tractor.breakpoint()
async def main():
async with tractor.open_nursery(
debug_mode=True,
) as n:
portal = await n.run_in_actor(
breakpoint_forever,
)
await portal.result()
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,19 @@
import trio
import tractor
async def name_error():
getattr(doggypants)
async def main():
async with tractor.open_nursery(
debug_mode=True,
) as n:
portal = await n.run_in_actor(name_error)
await portal.result()
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,104 @@
import time
import trio
import tractor
# this is the first 2 actors, streamer_1 and streamer_2
async def stream_data(seed):
for i in range(seed):
yield i
await trio.sleep(0.0001) # trigger scheduler
# this is the third actor; the aggregator
async def aggregate(seed):
"""Ensure that the two streams we receive match but only stream
a single set of values to the parent.
"""
async with tractor.open_nursery() as nursery:
portals = []
for i in range(1, 3):
# fork point
portal = await nursery.start_actor(
name=f'streamer_{i}',
enable_modules=[__name__],
)
portals.append(portal)
send_chan, recv_chan = trio.open_memory_channel(500)
async def push_to_chan(portal, send_chan):
# TODO: https://github.com/goodboy/tractor/issues/207
async with send_chan:
async with portal.open_stream_from(stream_data, seed=seed) as stream:
async for value in stream:
# leverage trio's built-in backpressure
await send_chan.send(value)
print(f"FINISHED ITERATING {portal.channel.uid}")
# spawn 2 trio tasks to collect streams and push to a local queue
async with trio.open_nursery() as n:
for portal in portals:
n.start_soon(push_to_chan, portal, send_chan.clone())
# close this local task's reference to send side
await send_chan.aclose()
unique_vals = set()
async with recv_chan:
async for value in recv_chan:
if value not in unique_vals:
unique_vals.add(value)
# yield upwards to the spawning parent actor
yield value
assert value in unique_vals
print("FINISHED ITERATING in aggregator")
await nursery.cancel()
print("WAITING on `ActorNursery` to finish")
print("AGGREGATOR COMPLETE!")
# this is the main actor and *arbiter*
async def main():
# a nursery which spawns "actors"
async with tractor.open_nursery(
arbiter_addr=('127.0.0.1', 1616)
) as nursery:
seed = int(1e3)
pre_start = time.time()
portal = await nursery.start_actor(
name='aggregator',
enable_modules=[__name__],
)
async with portal.open_stream_from(
aggregate,
seed=seed,
) as stream:
start = time.time()
# the portal call returns exactly what you'd expect
# as if the remote "aggregate" function was called locally
result_stream = []
async for value in stream:
result_stream.append(value)
await portal.cancel_actor()
print(f"STREAM TIME = {time.time() - start}")
print(f"STREAM + SPAWN TIME = {time.time() - pre_start}")
assert result_stream == list(range(seed))
return result_stream
if __name__ == '__main__':
final_stream = trio.run(main)
@ -0,0 +1,92 @@
'''
An SC compliant infected ``asyncio`` echo server.
'''
import asyncio
from statistics import mean
import time
import trio
import tractor
async def aio_echo_server(
to_trio: trio.MemorySendChannel,
from_trio: asyncio.Queue,
) -> None:
# a first message must be sent **from** this ``asyncio``
# task or the ``trio`` side will never unblock from
# ``tractor.to_asyncio.open_channel_from():``
to_trio.send_nowait('start')
# XXX: this uses a ``from_trio: asyncio.Queue`` currently but we
# should probably offer something better.
while True:
# echo the msg back
to_trio.send_nowait(await from_trio.get())
await asyncio.sleep(0)
@tractor.context
async def trio_to_aio_echo_server(
ctx: tractor.Context,
):
# this will block until the ``asyncio`` task sends a "first"
# message.
async with tractor.to_asyncio.open_channel_from(
aio_echo_server,
) as (first, chan):
assert first == 'start'
await ctx.started(first)
async with ctx.open_stream() as stream:
async for msg in stream:
await chan.send(msg)
out = await chan.receive()
# echo back to parent actor-task
await stream.send(out)
async def main():
async with tractor.open_nursery() as n:
p = await n.start_actor(
'aio_server',
enable_modules=[__name__],
infect_asyncio=True,
)
async with p.open_context(
trio_to_aio_echo_server,
) as (ctx, first):
assert first == 'start'
count = 0
async with ctx.open_stream() as stream:
delays = []
send = time.time()
await stream.send(count)
async for msg in stream:
recv = time.time()
delays.append(recv - send)
assert msg == count
count += 1
send = time.time()
await stream.send(count)
if count >= 1e3:
break
print(f'mean round trip rate (Hz): {1/mean(delays)}')
await p.cancel_actor()
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,49 @@
import trio
import click
import tractor
import pydantic
# from multiprocessing import shared_memory
@tractor.context
async def just_sleep(
ctx: tractor.Context,
**kwargs,
) -> None:
'''
Test a small ping-pong 2-way streaming server.
'''
await ctx.started()
await trio.sleep_forever()
async def main() -> None:
proc = await trio.open_process((
'python',
'-c',
'import trio; trio.run(trio.sleep_forever)',
))
await proc.wait()
# await trio.sleep_forever()
# async with tractor.open_nursery() as n:
# portal = await n.start_actor(
# 'rpc_server',
# enable_modules=[__name__],
# )
# async with portal.open_context(
# just_sleep, # taken from pytest parameterization
# ) as (ctx, sent):
# await trio.sleep_forever()
if __name__ == '__main__':
import time
# time.sleep(999)
trio.run(main)
@ -0,0 +1,46 @@
import trio
import tractor
log = tractor.log.get_logger('multiportal')
async def stream_data(seed=10):
log.info("Starting stream task")
for i in range(seed):
yield i
await trio.sleep(0) # trigger scheduler
async def stream_from_portal(p, consumed):
async with p.open_stream_from(stream_data) as stream:
async for item in stream:
if item in consumed:
consumed.remove(item)
else:
consumed.append(item)
async def main():
async with tractor.open_nursery(loglevel='info') as an:
p = await an.start_actor('stream_boi', enable_modules=[__name__])
consumed = []
async with trio.open_nursery() as n:
for i in range(2):
n.start_soon(stream_from_portal, p, consumed)
# both streaming consumer tasks have completed and so we should
# have nothing in our list thanks to single threadedness
assert not consumed
await an.cancel()
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,43 @@
import time
import concurrent.futures
import math
PRIMES = [
112272535095293,
112582705942171,
112272535095293,
115280095190773,
115797848077099,
1099726899285419]
def is_prime(n):
if n < 2:
return False
if n == 2:
return True
if n % 2 == 0:
return False
sqrt_n = int(math.floor(math.sqrt(n)))
for i in range(3, sqrt_n + 1, 2):
if n % i == 0:
return False
return True
def main():
with concurrent.futures.ProcessPoolExecutor() as executor:
start = time.time()
for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
print('%d is prime: %s' % (number, prime))
print(f'processing took {time.time() - start} seconds')
if __name__ == '__main__':
start = time.time()
main()
print(f'script took {time.time() - start} seconds')
@ -0,0 +1,119 @@
"""
Demonstration of the prime number detector example from the
``concurrent.futures`` docs:
https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor-example
This uses no extra threads, fancy semaphores or futures; all we need
is ``tractor``'s channels.
"""
from contextlib import asynccontextmanager
from typing import Callable
import itertools
import math
import time
import tractor
import trio
from async_generator import aclosing
PRIMES = [
112272535095293,
112582705942171,
112272535095293,
115280095190773,
115797848077099,
1099726899285419,
]
async def is_prime(n):
if n < 2:
return False
if n == 2:
return True
if n % 2 == 0:
return False
sqrt_n = int(math.floor(math.sqrt(n)))
for i in range(3, sqrt_n + 1, 2):
if n % i == 0:
return False
return True
@asynccontextmanager
async def worker_pool(workers=4):
"""Though it's a trivial special case for ``tractor``, the well
known "worker pool" seems to be the defacto "but, I want this
process pattern!" for most parallelism pilgrims.
Yes, the workers stay alive (and ready for work) until you close
the context.
"""
async with tractor.open_nursery() as tn:
portals = []
snd_chan, recv_chan = trio.open_memory_channel(len(PRIMES))
for i in range(workers):
# this starts a new sub-actor (process + trio runtime) and
# stores its "portal" for later use to "submit jobs" (ugh).
portals.append(
await tn.start_actor(
f'worker_{i}',
enable_modules=[__name__],
)
)
async def _map(
worker_func: Callable[[int], bool],
sequence: list[int]
) -> list[bool]:
# define an async (local) task to collect results from workers
async def send_result(func, value, portal):
await snd_chan.send((value, await portal.run(func, n=value)))
async with trio.open_nursery() as n:
for value, portal in zip(sequence, itertools.cycle(portals)):
n.start_soon(
send_result,
worker_func,
value,
portal
)
# deliver results as they arrive
for _ in range(len(sequence)):
yield await recv_chan.receive()
# deliver the parallel "worker mapper" to user code
yield _map
# tear down all "workers" on pool close
await tn.cancel()
async def main():
async with worker_pool() as actor_map:
start = time.time()
async with aclosing(actor_map(is_prime, PRIMES)) as results:
async for number, prime in results:
print(f'{number} is prime: {prime}')
print(f'processing took {time.time() - start} seconds')
if __name__ == '__main__':
start = time.time()
trio.run(main)
print(f'script took {time.time() - start} seconds')
@ -0,0 +1,43 @@
"""
Run with a process monitor from a terminal using::
$TERM -e watch -n 0.1 "pstree -a $$" \
& python examples/parallelism/single_func.py \
&& kill $!
"""
import os
import tractor
import trio
async def burn_cpu():
pid = os.getpid()
# burn a core @ ~ 50kHz
for _ in range(50000):
await trio.sleep(1/50000/50)
return os.getpid()
async def main():
async with tractor.open_nursery() as n:
portal = await n.run_in_actor(burn_cpu)
# burn rubber in the parent too
await burn_cpu()
# wait on result from target function
pid = await portal.result()
# end of nursery block
print(f"Collected subproc {pid}")
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,43 @@
"""
Run with a process monitor from a terminal using::
$TERM -e watch -n 0.1 "pstree -a $$" \
& python examples/parallelism/we_are_processes.py \
&& kill $!
"""
from multiprocessing import cpu_count
import os
import tractor
import trio
async def target():
print(
f"Yo, i'm '{tractor.current_actor().name}' "
f"running in pid {os.getpid()}"
)
await trio.sleep_forever()
async def main():
async with tractor.open_nursery() as n:
for i in range(cpu_count()):
await n.run_in_actor(target, name=f'worker_{i}')
print('This process tree will self-destruct in 1 sec...')
await trio.sleep(1)
# you could have done this yourself
raise Exception('Self Destructed')
if __name__ == '__main__':
try:
trio.run(main)
except Exception:
print('Zombies Contained')
@ -0,0 +1,44 @@
import trio
import tractor
async def sleepy_jane():
uid = tractor.current_actor().uid
print(f'Yo i am actor {uid}')
await trio.sleep_forever()
async def main():
'''
Spawn a flat actor cluster, with one process per
detected core.
'''
portal_map: dict[str, tractor.Portal]
results: dict[str, str]
# look at this hip new syntax!
async with (
tractor.open_actor_cluster(
modules=[__name__]
) as portal_map,
trio.open_nursery() as n,
):
for (name, portal) in portal_map.items():
n.start_soon(portal.run, sleepy_jane)
await trio.sleep(0.5)
# kill the cluster with a cancel
raise KeyboardInterrupt
if __name__ == '__main__':
try:
trio.run(main)
except KeyboardInterrupt:
pass
@ -0,0 +1,30 @@
import trio
import tractor
async def assert_err():
assert 0
async def main():
async with tractor.open_nursery() as n:
real_actors = []
for i in range(3):
real_actors.append(await n.start_actor(
f'actor_{i}',
enable_modules=[__name__],
))
# start one actor that will fail immediately
await n.run_in_actor(assert_err)
# should error here with a ``RemoteActorError`` containing
# an ``AssertionError`` and all the other actors have been cancelled
if __name__ == '__main__':
try:
# also raises
trio.run(main)
except tractor.RemoteActorError:
print("Look Maa that actor failed hard, hehhh!")
@ -0,0 +1,72 @@
import trio
import tractor
@tractor.context
async def simple_rpc(
ctx: tractor.Context,
data: int,
) -> None:
'''Test a small ping-pong 2-way streaming server.
'''
# signal to parent that we're up much like
# ``trio_typing.TaskStatus.started()``
await ctx.started(data + 1)
async with ctx.open_stream() as stream:
count = 0
async for msg in stream:
assert msg == 'ping'
await stream.send('pong')
count += 1
else:
assert count == 10
async def main() -> None:
async with tractor.open_nursery() as n:
portal = await n.start_actor(
'rpc_server',
enable_modules=[__name__],
)
# XXX: syntax requires py3.9
async with (
portal.open_context(
simple_rpc, # taken from pytest parameterization
data=10,
) as (ctx, sent),
ctx.open_stream() as stream,
):
assert sent == 11
count = 0
# receive msgs using async for style
await stream.send('ping')
async for msg in stream:
assert msg == 'pong'
await stream.send('ping')
count += 1
if count >= 9:
break
# explicitly teardown the daemon-actor
await portal.cancel_actor()
if __name__ == '__main__':
trio.run(main)
@ -0,0 +1,22 @@
import trio
import tractor
tractor.log.get_console_log("INFO")
async def main(service_name):
async with tractor.open_nursery() as an:
await an.start_actor(service_name)
async with tractor.get_arbiter('127.0.0.1', 1616) as portal:
print(f"Arbiter is listening on {portal.channel}")
async with tractor.wait_for_actor(service_name) as sockaddr:
print(f"my_service is found at {sockaddr}")
await an.cancel()
if __name__ == '__main__':
trio.run(main, 'some_actor_name')
mypy.ini
@ -0,0 +1,2 @@
[mypy]
plugins = trio_typing.plugin
nooz/.gitignore
@ -0,0 +1 @@
!.gitignore
@ -0,0 +1,16 @@
Strictly support Python 3.10+, start runtime machinery reorg
Since we want to push forward using the new `match:` syntax for our
internal RPC-msg loops, we officially drop 3.9 support for the next
release which should coincide well with the first release of 3.11.
This patch set also officially removes the ``tractor.run()`` API (which
has been deprecated for some time) as well as starts an initial re-org
of the internal runtime core by:
- renaming ``tractor._actor`` -> ``._runtime``
- moving the ``._runtime.Actor._process_messages()`` and
``._async_main()`` to be module level singleton-task-functions since
they are only started once for each connection and actor spawn
respectively; this internal API thus looks more similar to (at the
time of writing) the ``trio``-internals in ``trio._core._run``.
- officially remove ``tractor.run()``, now deprecated for some time.
@ -0,0 +1,4 @@
Only set `._debug.Lock.local_pdb_complete` if has been created.
This can be triggered by a very rare race condition (and thus we have no
working test yet) but it is known to exist in (a) consumer project(s).
@ -0,0 +1,25 @@
Add support for ``trio >= 0.22`` and support for the new Python 3.11
``[Base]ExceptionGroup`` from `pep 654`_ via the backported
`exceptiongroup`_ package and some final fixes to the debug mode
subsystem.
This port ended up driving some (hopefully) final fixes to our debugger
subsystem including the solution to all lingering stdstreams locking
race-conditions and deadlock scenarios. This includes extending the
debugger tests suite as well as cancellation and ``asyncio`` mode cases.
Some of the notable details:
- always reverting to the ``trio`` SIGINT handler when leaving debug
mode.
- bypassing child attempts to acquire the debug lock when detected
to be amidst actor-runtime-cancellation.
- allowing the root actor to cancel local but IPC-stale subactor
requests-tasks for the debug lock when in a "no IPC peers" state.
Further we refined our ``ActorNursery`` semantics to be more similar to
``trio`` in the sense that parent task errors are always packed into the
actor-nursery emitted exception group and adjusted all tests and
examples accordingly.
.. _pep 654: https://peps.python.org/pep-0654/#handling-exception-groups
.. _exceptiongroup: https://github.com/python-trio/exceptiongroup
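As a rough sketch from user code (assuming Python 3.11, or the
``exceptiongroup`` backport imported, and noting that a lone child
error may instead surface directly as a ``RemoteActorError``
depending on version):

.. code:: python

    import trio
    import tractor

    async def boom():
        raise ValueError('child failed')

    async def main():
        async with tractor.open_nursery() as n:
            await n.run_in_actor(boom)

    try:
        trio.run(main)
    except (BaseExceptionGroup, tractor.RemoteActorError) as err:
        # the boxed remote error(s) can be inspected here
        print(repr(err))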
@ -0,0 +1,5 @@
Establish an explicit "backend spawning" method table; use it from CI
More clearly lays out the current set of (3) backends: ``['trio',
'mp_spawn', 'mp_forkserver']`` and adjusts the ``._spawn.py`` internals
as well as the test suite to accommodate.
@ -0,0 +1,4 @@
Add ``key: Callable[..., Hashable]`` support to ``.trionics.maybe_open_context()``
Gives users finer grained control over cache hit behaviour using
a callable which receives the input ``kwargs: dict``.
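A rough usage sketch (the exact factory signature is assumed from the
description above):

.. code:: python

    from contextlib import asynccontextmanager as acm

    import trio
    from tractor.trionics import maybe_open_context

    @acm
    async def open_feed(symbol: str):
        # stand-in for some expensive-to-open resource
        yield f'feed-for-{symbol}'

    async def task(symbol: str):
        async with maybe_open_context(
            acm_func=open_feed,
            kwargs={'symbol': symbol},
            # cache hits are now decided by the symbol value alone
            key=lambda kwargs: kwargs['symbol'],
        ) as (cache_hit, feed):
            print(cache_hit, feed)

    async def main():
        # the second task should hit the cached instance
        async with trio.open_nursery() as n:
            for _ in range(2):
                n.start_soon(task, 'btcusdt')

    trio.run(main)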
@ -0,0 +1,41 @@
Add support for debug-lock blocking using a ``._debug.Lock._blocked:
set[tuple]`` and add ids when no-more IPC connections with the
root actor are detected.
This is an enhancement which (mostly) solves a lingering debugger
locking race case we needed to handle:
- child crashes acquires TTY lock in root and attaches to ``pdb``
- child IPC goes down such that all channels to the root are broken
/ non-functional.
- root is stuck thinking the child is still in debug even though it
can't be contacted and the child actor machinery hasn't been
cancelled by its parent.
- root gets stuck in deadlock with child since it won't send a cancel
request until the child is finished debugging (to avoid clobbering
a child that is actually using the debugger), but the child can't
unlock the debugger bc IPC is down and it can't contact the root.
To avoid this scenario add debug lock blocking list via
`._debug.Lock._blocked: set[tuple]` which holds actor uids for any actor
that is detected by the root as having no transport channel connections
(of which at least one should exist if this sub-actor at some point
acquired the debug lock). The root consequently checks this list for any
actor that tries to (re)acquire the lock and blocks with
a ``ContextCancelled``. Further, when a debug condition is tested in
``._runtime._invoke``, the context's ``._enter_debugger_on_cancel`` is
set to `False` if the actor was put on the block list; in that case all
post-mortem / crash handling will be bypassed for that task.
In theory this approach to block list management may cause problems
where some nested child actor acquires and releases the lock multiple
times and it gets stuck on the block list after the first use? If this
turns out to be an issue we can try changing the strat so blocks are
only added when the root has zero IPC peers left?
Further, this adds a root-locking-task side cancel scope,
``Lock._root_local_task_cs_in_debug``, which can be ``.cancel()``-ed by the root
runtime when a stale lock is detected during the IPC channel testing.
However, right now we're NOT using this since it seems to cause test
failures likely due to causing pre-mature cancellation and maybe needs
a bit more experimenting?
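In pseudo-code the root-side gate amounts to something like this (all
names other than ``Lock._blocked`` and ``ContextCancelled`` are
assumed)::

    from tractor._exceptions import ContextCancelled

    def maybe_block_lock_request(uid: tuple[str, str]) -> None:
        # `uid` was previously marked as having no transport channels
        if uid in Lock._blocked:
            raise ContextCancelled(
                f'debug lock blocked for {uid}: no IPC channels to root?'
            )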

View File

@ -0,0 +1,19 @@
Rework our ``.trionics.BroadcastReceiver`` internals to avoid method
recursion and approach a design and interface closer to ``trio``'s
``MemoryReceiveChannel``.
The details of the internal changes include:
- implementing a ``BroadcastReceiver.receive_nowait()`` and using it
within the async ``.receive()`` thus avoiding recursion from
``.receive()``.
- failing over to an internal ``._receive_from_underlying()`` when the
``_nowait()`` call raises ``trio.WouldBlock``
- adding ``BroadcastState.statistics()`` for debugging and testing both
internals and by users.
- add an internal ``BroadcastReceiver._raise_on_lag: bool`` which can be
set to avoid ``Lagged`` raising for possible use cases where a user
wants to choose between a [cheap or nasty
pattern](https://zguide.zeromq.org/docs/chapter7/#The-Cheap-or-Nasty-Pattern)
for the particular stream (we use this in ``piker``'s dark clearing
engine to avoid fast feeds breaking during HFT periods).
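The non-recursive failover reads roughly as follows (a sketch, not the
exact implementation)::

    import trio

    class BroadcastReceiver:  # sketch only
        def receive_nowait(self):
            ...

        async def _receive_from_underlying(self):
            ...

        async def receive(self):
            try:
                # fast path: drain any already-queued value
                return self.receive_nowait()
            except trio.WouldBlock:
                # slow path: wait on the wrapped channel
                return await self._receive_from_underlying()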

View File

@ -0,0 +1,11 @@
Always ``list``-cast the ``mngrs`` input to
``.trionics.gather_contexts()`` and ensure its size otherwise raise
a ``ValueError``.
Turns out that trying to pass an inline-style generator comprehension
doesn't seem to work inside the ``async with`` expression? Further, in
such a case we can get a hang waiting on the all-entered event
completion when the internal mngrs iteration is a noop. Instead we
always greedily check a size and error on empty input; the lazy
iteration of a generator input is not beneficial anyway since we're
entering all manager instances in concurrent tasks.
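The greedy check is essentially this (error message assumed)::

    def ensure_mngrs(mngrs) -> list:
        # a lazy generator input gains nothing since all managers are
        # entered in concurrent tasks anyway
        mngrs = list(mngrs)
        if not mngrs:
            raise ValueError(
                '`mngrs` input to `gather_contexts()` was empty?'
            )
        return mngrs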

View File

@ -0,0 +1,15 @@
Fixes to ensure IPC (channel) breakage doesn't result in hung actor
trees; the zombie reaping and general supervision machinery will always
clean up and terminate.
This includes not only the (mostly minor) fixes to solve these cases but
also a new extensive test suite in `test_advanced_faults.py` with an
accompanying highly configurable example module-script in
`examples/advanced_faults/ipc_failure_during_stream.py`. Tests ensure we
never get hangs or zombies despite operating in debug mode, and attempt to
simulate all possible IPC transport failure cases for a local-host actor
tree.
Further we simplify `Context.open_stream.__aexit__()` to just call
`MsgStream.aclose()` directly more or less avoiding a pure duplicate
code path.
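That exit simplification is morally just the following (a sketch with
construction details elided)::

    from contextlib import asynccontextmanager as acm

    @acm
    async def open_stream(self):  # sketch of `Context.open_stream()`
        stream = MsgStream(self)
        try:
            yield stream
        finally:
            # the formerly duplicated teardown path is now a direct close
            await stream.aclose()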

View File

@ -0,0 +1,10 @@
Always redraw the `pdbpp` prompt on `SIGINT` during REPL use.
There were recent changes to do with Python 3.10 that required us to pin
to a specific commit in `pdbpp` which have recently been fixed minus
this last issue with `SIGINT` shielding: not clobbering or not
showing the `(Pdb++)` prompt on ctrl-c by the user. This repairs all
that by firstly removing the standard KBI intercepting of the std lib's
`pdb.Pdb._cmdloop()` as well as ensuring that only the actor with REPL
control ever reports `SIGINT` handler log msgs and prompt redraws. With
this we move back to using pypi `pdbpp` release.

View File

@ -0,0 +1,7 @@
Drop `trio.Process.aclose()` usage, copy into our spawning code.
The details are laid out in https://github.com/goodboy/tractor/issues/330.
`trio` changed its process running API quite some time ago; this just copies
out the small bit we needed (from the old `.aclose()`) for hard kills
where a soft runtime cancel request fails and our "zombie killer"
implementation kicks in.
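The copied-out bit is roughly the classic terminate-then-kill sequence
(a sketch; the timeout value is assumed)::

    import trio

    async def do_hard_kill(
        proc: trio.Process,
        terminate_after: float = 3,
    ) -> None:
        # try a graceful terminate first..
        proc.terminate()
        with trio.move_on_after(terminate_after):
            await proc.wait()
        if proc.returncode is None:
            # ..then engage the "zombie killer"
            proc.kill()
            await proc.wait()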

View File

@ -0,0 +1,15 @@
Switch to using the fork & fix of `pdb++`, `pdbp`:
https://github.com/mdmintz/pdbp
Allows us to sidestep a variety of issues that aren't being maintained
in the upstream project thanks to the hard work of @mdmintz!
We also include some default settings adjustments as per recent
development on the fork:
- sticky mode is still turned on by default but now activates when
  using the `ll` repl command.
- turn off line truncation by default to avoid inter-line gaps when
  resizing the terminal during use.
- when using the backtrace cmd either by `w` or `bt`, the config
automatically switches to non-sticky mode.
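In config terms that maps to settings along these lines (attribute and
class names assumed from the `pdb++` lineage which `pdbp` forks)::

    import pdbp

    class Config(pdbp.DefaultConfig):
        # sticky on by default; `w`/`bt` flip to non-sticky
        sticky_by_default: bool = True
        # avoid inter-line gaps when resizing the terminal
        truncate_long_lines: bool = False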

8
nooz/HOWTO.rst 100644
View File

@ -0,0 +1,8 @@
See both the `towncrier docs`_ and the `pluggy release readme`_ for hot
tips. We basically have the most minimal setup and release process right
now and use the default `fragment set`_.
.. _towncrier docs: https://github.com/twisted/towncrier#quick-start
.. _pluggy release readme: https://github.com/pytest-dev/pluggy/blob/main/changelog/README.rst
.. _fragment set: https://github.com/twisted/towncrier#news-fragments

37
nooz/_template.rst 100644
View File

@ -0,0 +1,37 @@
{% for section in sections %}
{% set underline = "-" %}
{% if section %}
{{section}}
{{ underline * section|length }}{% set underline = "~" %}
{% endif %}
{% if sections[section] %}
{% for category, val in definitions.items() if category in sections[section] %}
{{ definitions[category]['name'] }}
{{ underline * definitions[category]['name']|length }}
{% if definitions[category]['showcontent'] %}
{% for text, values in sections[section][category]|dictsort(by='value') %}
{% set issue_joiner = joiner(', ') %}
- {% for value in values|sort %}{{ issue_joiner() }}`{{ value }} <https://github.com/goodboy/tractor/issues/{{ value[1:] }}>`_{% endfor %}: {{ text }}
{% endfor %}
{% else %}
- {{ sections[section][category]['']|sort|join(', ') }}
{% endif %}
{% if sections[section][category]|length == 0 %}
No significant changes.
{% else %}
{% endif %}
{% endfor %}
{% else %}
No significant changes.
{% endif %}
{% endfor %}

28
pyproject.toml 100644
View File

@ -0,0 +1,28 @@
[tool.towncrier]
package = "tractor"
filename = "NEWS.rst"
directory = "nooz/"
version = "0.1.0a6"
title_format = "tractor {version} ({project_date})"
template = "nooz/_template.rst"
all_bullets = true
[[tool.towncrier.type]]
directory = "feature"
name = "Features"
showcontent = true
[[tool.towncrier.type]]
directory = "bugfix"
name = "Bug Fixes"
showcontent = true
[[tool.towncrier.type]]
directory = "doc"
name = "Improved Documentation"
showcontent = true
[[tool.towncrier.type]]
directory = "trivial"
name = "Trivial/Internal Changes"
showcontent = true

View File

@ -0,0 +1,2 @@
sphinx
sphinx_book_theme

View File

@ -1,4 +1,8 @@
pytest pytest
pytest-trio pytest-trio
pdbpp pytest-timeout
pdbp
mypy mypy
trio_typing
pexpect
towncrier

View File

@ -1,59 +1,97 @@
#!/usr/bin/env python #!/usr/bin/env python
# #
# tractor: a trionic actor model built on `multiprocessing` and `trio` # tractor: structured concurrent "actors".
# #
# Copyright (C) 2018 Tyler Goodlet # Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by # it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or # the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version. # (at your option) any later version.
# This program is distributed in the hope that it will be useful, # This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of # but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details. # GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from setuptools import setup from setuptools import setup
with open('README.rst', encoding='utf-8') as f: with open('docs/README.rst', encoding='utf-8') as f:
readme = f.read() readme = f.read()
setup( setup(
name="tractor", name="tractor",
version='0.1.0.alpha0', version='0.1.0a6dev0', # alpha zone
description='A trionic actor model built on `multiprocessing` and `trio`', description='structured concurrent `trio`-"actors"',
long_description=readme, long_description=readme,
license='GPLv3', license='AGPLv3',
author='Tyler Goodlet', author='Tyler Goodlet',
maintainer='Tyler Goodlet', maintainer='Tyler Goodlet',
maintainer_email='tgoodlet@gmail.com', maintainer_email='goodboy_foss@protonmail.com',
url='https://github.com/tgoodlet/tractor', url='https://github.com/goodboy/tractor',
platforms=['linux'], platforms=['linux', 'windows'],
packages=[ packages=[
'tractor', 'tractor',
'tractor.testing', 'tractor.experimental',
'tractor.trionics',
], ],
install_requires=[ install_requires=[
'msgpack', 'trio>0.8', 'async_generator', 'colorlog', 'wrapt'],
# trio related
# proper range spec:
# https://packaging.python.org/en/latest/discussions/install-requires-vs-requirements/#id5
'trio >= 0.22',
'async_generator',
'trio_typing',
'exceptiongroup',
# tooling
'tricycle',
'trio_typing',
'colorlog',
'wrapt',
# IPC serialization
'msgspec',
# debug mode REPL
'pdbp',
# pip ref docs on these specs:
# https://pip.pypa.io/en/stable/reference/requirement-specifiers/#examples
# and pep:
# https://peps.python.org/pep-0440/#version-specifiers
# windows deps workaround for ``pdbpp``
# https://github.com/pdbpp/pdbpp/issues/498
# https://github.com/pdbpp/fancycompleter/issues/37
'pyreadline3 ; platform_system == "Windows"',
],
tests_require=['pytest'], tests_require=['pytest'],
python_requires=">=3.7", python_requires=">=3.10",
keywords=[ keywords=[
"async", "concurrency", "actor model", "distributed", 'trio',
'trio', 'multiprocessing' 'async',
'concurrency',
'structured concurrency',
'actor model',
'distributed',
'multiprocessing'
], ],
classifiers=[ classifiers=[
'Development Status :: 3 - Alpha', "Development Status :: 3 - Alpha",
'License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)' "Operating System :: POSIX :: Linux",
'Operating System :: POSIX :: Linux', "Operating System :: Microsoft :: Windows",
"Framework :: Trio", "Framework :: Trio",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Programming Language :: Python :: Implementation :: CPython", "Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.6", "Programming Language :: Python :: 3.10",
"Intended Audience :: Science/Research", "Intended Audience :: Science/Research",
"Intended Audience :: Developers", "Intended Audience :: Developers",
"Topic :: System :: Distributed Computing", "Topic :: System :: Distributed Computing",

View File

@ -1,30 +1,250 @@
""" """
``tractor`` testing!! ``tractor`` testing!!
""" """
import sys
import subprocess
import os
import random import random
import signal
import platform
import pathlib
import time
import inspect
from functools import partial, wraps
import pytest import pytest
import trio
import tractor import tractor
from tractor.testing import tractor_test
pytest_plugins = ['pytester'] pytest_plugins = ['pytester']
def tractor_test(fn):
"""
Use:
@tractor_test
async def test_whatever():
await ...
If fixtures:
- ``arb_addr`` (a socket addr tuple where arbiter is listening)
- ``loglevel`` (logging level passed to tractor internals)
- ``start_method`` (subprocess spawning backend)
are defined in the `pytest` fixture space they will be automatically
injected to tests declaring these funcargs.
"""
@wraps(fn)
def wrapper(
*args,
loglevel=None,
arb_addr=None,
start_method=None,
**kwargs
):
# __tracebackhide__ = True
if 'arb_addr' in inspect.signature(fn).parameters:
# injects test suite fixture value to test as well
# as `run()`
kwargs['arb_addr'] = arb_addr
if 'loglevel' in inspect.signature(fn).parameters:
# allows test suites to define a 'loglevel' fixture
# that activates the internal logging
kwargs['loglevel'] = loglevel
if start_method is None:
if platform.system() == "Windows":
start_method = 'trio'
if 'start_method' in inspect.signature(fn).parameters:
# set of subprocess spawning backends
kwargs['start_method'] = start_method
if kwargs:
# use explicit root actor start
async def _main():
async with tractor.open_root_actor(
# **kwargs,
arbiter_addr=arb_addr,
loglevel=loglevel,
start_method=start_method,
# TODO: only enable when pytest is passed --pdb
# debug_mode=True,
):
await fn(*args, **kwargs)
main = _main
else:
# use implicit root actor start
main = partial(fn, *args, **kwargs)
return trio.run(main)
return wrapper
_arb_addr = '127.0.0.1', random.randint(1000, 9999) _arb_addr = '127.0.0.1', random.randint(1000, 9999)
# Sending signal.SIGINT on subprocess fails on windows. Use CTRL_* alternatives
if platform.system() == 'Windows':
_KILL_SIGNAL = signal.CTRL_BREAK_EVENT
_INT_SIGNAL = signal.CTRL_C_EVENT
_INT_RETURN_CODE = 3221225786
_PROC_SPAWN_WAIT = 2
else:
_KILL_SIGNAL = signal.SIGKILL
_INT_SIGNAL = signal.SIGINT
_INT_RETURN_CODE = 1 if sys.version_info < (3, 8) else -signal.SIGINT.value
_PROC_SPAWN_WAIT = 0.6 if sys.version_info < (3, 7) else 0.4
no_windows = pytest.mark.skipif(
platform.system() == "Windows",
reason="Test is unsupported on windows",
)
def repodir() -> pathlib.Path:
'''
Return the abspath to the repo directory.
'''
# 2 parents up to step up through tests/<repo_dir>
return pathlib.Path(__file__).parent.parent.absolute()
def examples_dir() -> pathlib.Path:
'''
Return the abspath to the examples directory as `pathlib.Path`.
'''
return repodir() / 'examples'
def pytest_addoption(parser): def pytest_addoption(parser):
parser.addoption("--ll", action="store", dest='loglevel', parser.addoption(
default=None, help="logging level to set when testing") "--ll", action="store", dest='loglevel',
default='ERROR', help="logging level to set when testing"
)
parser.addoption(
"--spawn-backend", action="store", dest='spawn_backend',
default='trio',
help="Processing spawning backend to use for test run",
)
def pytest_configure(config):
backend = config.option.spawn_backend
tractor._spawn.try_set_start_method(backend)
@pytest.fixture(scope='session', autouse=True) @pytest.fixture(scope='session', autouse=True)
def loglevel(request): def loglevel(request):
orig = tractor.log._default_loglevel orig = tractor.log._default_loglevel
level = tractor.log._default_loglevel = request.config.option.loglevel level = tractor.log._default_loglevel = request.config.option.loglevel
tractor.log.get_console_log(level)
yield level yield level
tractor.log._default_loglevel = orig tractor.log._default_loglevel = orig
@pytest.fixture(scope='session')
def spawn_backend(request) -> str:
return request.config.option.spawn_backend
_ci_env: bool = os.environ.get('CI', False)
@pytest.fixture(scope='session')
def ci_env() -> bool:
"""Detect CI envoirment.
"""
return _ci_env
@pytest.fixture(scope='session') @pytest.fixture(scope='session')
def arb_addr(): def arb_addr():
return _arb_addr return _arb_addr
def pytest_generate_tests(metafunc):
spawn_backend = metafunc.config.option.spawn_backend
if not spawn_backend:
# XXX some weird windows bug with `pytest`?
spawn_backend = 'trio'
# TODO: maybe just use the literal `._spawn.SpawnMethodKey`?
assert spawn_backend in (
'mp_spawn',
'mp_forkserver',
'trio',
)
# NOTE: used to be used to dynamically parametrize tests for when
# you just passed --spawn-backend=`mp` on the cli, but now we expect
# that cli input to be manually specified, BUT, maybe we'll do
# something like this again in the future?
if 'start_method' in metafunc.fixturenames:
metafunc.parametrize("start_method", [spawn_backend], scope='module')
def sig_prog(proc, sig):
"Kill the actor-process with ``sig``."
proc.send_signal(sig)
time.sleep(0.1)
if not proc.poll():
# TODO: why sometimes does SIGINT not work on teardown?
# seems to happen only when trace logging enabled?
proc.send_signal(_KILL_SIGNAL)
ret = proc.wait()
assert ret
@pytest.fixture
def daemon(
loglevel: str,
testdir,
arb_addr: tuple[str, int],
):
'''
Run a daemon actor as a "remote arbiter".
'''
if loglevel in ('trace', 'debug'):
# too much logging will lock up the subproc (smh)
loglevel = 'info'
cmdargs = [
sys.executable, '-c',
"import tractor; tractor.run_daemon([], registry_addr={}, loglevel={})"
.format(
arb_addr,
"'{}'".format(loglevel) if loglevel else None)
]
kwargs = dict()
if platform.system() == 'Windows':
# without this, tests hang on windows forever
kwargs['creationflags'] = subprocess.CREATE_NEW_PROCESS_GROUP
proc = testdir.popen(
cmdargs,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
**kwargs,
)
assert not proc.returncode
time.sleep(_PROC_SPAWN_WAIT)
yield proc
sig_prog(proc, _INT_SIGNAL)

129
tests/test_2way.py 100644
View File

@ -0,0 +1,129 @@
"""
Bidirectional streaming.
"""
import pytest
import trio
import tractor
@tractor.context
async def simple_rpc(
ctx: tractor.Context,
data: int,
) -> None:
'''
Test a small ping-pong server.
'''
# signal to parent that we're up
await ctx.started(data + 1)
print('opening stream in callee')
async with ctx.open_stream() as stream:
count = 0
while True:
try:
assert await stream.receive() == 'ping'
except trio.EndOfChannel:
assert count == 10
break
else:
print('pong')
await stream.send('pong')
count += 1
@tractor.context
async def simple_rpc_with_forloop(
ctx: tractor.Context,
data: int,
) -> None:
"""Same as previous test but using ``async for`` syntax/api.
"""
# signal to parent that we're up
await ctx.started(data + 1)
print('opening stream in callee')
async with ctx.open_stream() as stream:
count = 0
async for msg in stream:
assert msg == 'ping'
print('pong')
await stream.send('pong')
count += 1
else:
assert count == 10
@pytest.mark.parametrize(
'use_async_for',
[True, False],
)
@pytest.mark.parametrize(
'server_func',
[simple_rpc, simple_rpc_with_forloop],
)
def test_simple_rpc(server_func, use_async_for):
'''
The simplest request response pattern.
'''
async def main():
async with tractor.open_nursery() as n:
portal = await n.start_actor(
'rpc_server',
enable_modules=[__name__],
)
async with portal.open_context(
server_func, # taken from pytest parameterization
data=10,
) as (ctx, sent):
assert sent == 11
async with ctx.open_stream() as stream:
if use_async_for:
count = 0
# receive msgs using async for style
print('ping')
await stream.send('ping')
async for msg in stream:
assert msg == 'pong'
print('ping')
await stream.send('ping')
count += 1
if count >= 9:
break
else:
# classic send/receive style
for _ in range(10):
print('ping')
await stream.send('ping')
assert await stream.receive() == 'pong'
# stream should terminate here
# final context result(s) should be consumed here in __aexit__()
await portal.cancel_actor()
trio.run(main)

View File

@ -0,0 +1,193 @@
'''
Sketchy network blackoutz, ugly byzantine gens, can you hear the
cancellation?..
'''
from functools import partial
import pytest
from _pytest.pathlib import import_path
import trio
import tractor
from conftest import (
examples_dir,
)
@pytest.mark.parametrize(
'debug_mode',
[False, True],
ids=['no_debug_mode', 'debug_mode'],
)
@pytest.mark.parametrize(
'ipc_break',
[
# no breaks
{
'break_parent_ipc_after': False,
'break_child_ipc_after': False,
},
# only parent breaks
{
'break_parent_ipc_after': 500,
'break_child_ipc_after': False,
},
# only child breaks
{
'break_parent_ipc_after': False,
'break_child_ipc_after': 500,
},
# both: break parent first
{
'break_parent_ipc_after': 500,
'break_child_ipc_after': 800,
},
# both: break child first
{
'break_parent_ipc_after': 800,
'break_child_ipc_after': 500,
},
],
ids=[
'no_break',
'break_parent',
'break_child',
'break_both_parent_first',
'break_both_child_first',
],
)
def test_ipc_channel_break_during_stream(
debug_mode: bool,
spawn_backend: str,
ipc_break: dict | None,
):
'''
Ensure we can have an IPC channel break its connection during
streaming and it's still possible for the (simulated) user to kill
the actor tree using SIGINT.
We also verify the type of connection error expected in the parent
depending on which side of the IPC breaks first.
'''
if spawn_backend != 'trio':
if debug_mode:
pytest.skip('`debug_mode` only supported on `trio` spawner')
# non-`trio` spawners should never hit the hang condition that
# requires the user to do ctrl-c to cancel the actor tree.
expect_final_exc = trio.ClosedResourceError
mod = import_path(
examples_dir() / 'advanced_faults' / 'ipc_failure_during_stream.py',
root=examples_dir(),
)
expect_final_exc = KeyboardInterrupt
# when ONLY the child breaks we expect the parent to get a closed
# resource error on the next `MsgStream.receive()` and then fail out
# and cancel the child from there.
if (
# only child breaks
(
ipc_break['break_child_ipc_after']
and ipc_break['break_parent_ipc_after'] is False
)
# both break but, parent breaks first
or (
ipc_break['break_child_ipc_after'] is not False
and (
ipc_break['break_parent_ipc_after']
> ipc_break['break_child_ipc_after']
)
)
):
expect_final_exc = trio.ClosedResourceError
# when the parent IPC side dies (even if the child's does as well
# but the child fails BEFORE the parent) we expect the channel to be
# sent a stop msg from the child at some point which will signal the
# parent that the stream has been terminated.
# NOTE: when the parent breaks "after" the child you get this same
# case as well, the child breaks the IPC channel with a stop msg
# before any closure takes place.
elif (
# only parent breaks
(
ipc_break['break_parent_ipc_after']
and ipc_break['break_child_ipc_after'] is False
)
# both break but, child breaks first
or (
ipc_break['break_parent_ipc_after'] is not False
and (
ipc_break['break_child_ipc_after']
> ipc_break['break_parent_ipc_after']
)
)
):
expect_final_exc = trio.EndOfChannel
with pytest.raises(expect_final_exc):
trio.run(
partial(
mod.main,
debug_mode=debug_mode,
start_method=spawn_backend,
**ipc_break,
)
)
@tractor.context
async def break_ipc_after_started(
ctx: tractor.Context,
) -> None:
await ctx.started()
async with ctx.open_stream() as stream:
await stream.aclose()
await trio.sleep(0.2)
await ctx.chan.send(None)
print('child broke IPC and terminating')
def test_stream_closed_right_after_ipc_break_and_zombie_lord_engages():
'''
Verify that if a subactor's IPC goes down just after bringing up a stream
the parent can trigger a SIGINT and the child will be reaped out-of-IPC by
the localhost process supervision machinery: aka "zombie lord".
'''
async def main():
async with tractor.open_nursery() as n:
portal = await n.start_actor(
'ipc_breaker',
enable_modules=[__name__],
)
with trio.move_on_after(1):
async with (
portal.open_context(
break_ipc_after_started
) as (ctx, sent),
):
async with ctx.open_stream():
await trio.sleep(0.5)
print('parent waiting on context')
print('parent exited context')
raise KeyboardInterrupt
with pytest.raises(KeyboardInterrupt):
trio.run(main)

View File

@ -0,0 +1,380 @@
'''
Advanced streaming patterns using bidirectional streams and contexts.
'''
from collections import Counter
import itertools
import platform
import trio
import tractor
def is_win():
return platform.system() == 'Windows'
_registry: dict[str, set[tractor.MsgStream]] = {
'even': set(),
'odd': set(),
}
async def publisher(
seed: int = 0,
) -> None:
global _registry
def is_even(i):
return i % 2 == 0
for val in itertools.count(seed):
sub = 'even' if is_even(val) else 'odd'
for sub_stream in _registry[sub].copy():
await sub_stream.send(val)
# throttle send rate to ~1kHz
# making it readable to a human user
await trio.sleep(1/1000)
@tractor.context
async def subscribe(
ctx: tractor.Context,
) -> None:
global _registry
# sync caller
await ctx.started(None)
async with ctx.open_stream() as stream:
# update subs list as consumer requests
async for new_subs in stream:
new_subs = set(new_subs)
remove = new_subs - _registry.keys()
print(f'setting sub to {new_subs} for {ctx.chan.uid}')
# remove old subs
for sub in remove:
_registry[sub].remove(stream)
# add new subs for consumer
for sub in new_subs:
_registry[sub].add(stream)
async def consumer(
subs: list[str],
) -> None:
uid = tractor.current_actor().uid
async with tractor.wait_for_actor('publisher') as portal:
async with portal.open_context(subscribe) as (ctx, first):
async with ctx.open_stream() as stream:
# flip between the provided subs dynamically
if len(subs) > 1:
for sub in itertools.cycle(subs):
print(f'setting dynamic sub to {sub}')
await stream.send([sub])
count = 0
async for value in stream:
print(f'{uid} got: {value}')
if count > 5:
break
count += 1
else: # static sub
await stream.send(subs)
async for value in stream:
print(f'{uid} got: {value}')
def test_dynamic_pub_sub():
global _registry
from multiprocessing import cpu_count
cpus = cpu_count()
async def main():
async with tractor.open_nursery() as n:
# name of this actor will be same as target func
await n.run_in_actor(publisher)
for i, sub in zip(
range(cpus - 2),
itertools.cycle(_registry.keys())
):
await n.run_in_actor(
consumer,
name=f'consumer_{sub}',
subs=[sub],
)
# make one dynamic subscriber
await n.run_in_actor(
consumer,
name='consumer_dynamic',
subs=list(_registry.keys()),
)
# block until cancelled by user
with trio.fail_after(3):
await trio.sleep_forever()
try:
trio.run(main)
except trio.TooSlowError:
pass
@tractor.context
async def one_task_streams_and_one_handles_reqresp(
ctx: tractor.Context,
) -> None:
await ctx.started()
async with ctx.open_stream() as stream:
async def pingpong():
'''Run a simple req/response service.
'''
async for msg in stream:
print('rpc server ping')
assert msg == 'ping'
print('rpc server pong')
await stream.send('pong')
async with trio.open_nursery() as n:
n.start_soon(pingpong)
for _ in itertools.count():
await stream.send('yo')
await trio.sleep(0.01)
def test_reqresp_ontopof_streaming():
'''
Test a subactor that both streams with one task and
spawns another which handles a small requests-response
dialogue over the same bidir-stream.
'''
async def main():
# flag to make sure we get at least one pong
got_pong: bool = False
timeout: int = 2
if is_win(): # smh
timeout = 4
with trio.move_on_after(timeout):
async with tractor.open_nursery() as n:
# name of this actor will be same as target func
portal = await n.start_actor(
'dual_tasks',
enable_modules=[__name__]
)
async with portal.open_context(
one_task_streams_and_one_handles_reqresp,
) as (ctx, first):
assert first is None
async with ctx.open_stream() as stream:
await stream.send('ping')
async for msg in stream:
print(f'client received: {msg}')
assert msg in {'pong', 'yo'}
if msg == 'pong':
got_pong = True
await stream.send('ping')
print('client sent ping')
assert got_pong
try:
trio.run(main)
except trio.TooSlowError:
pass
async def async_gen_stream(sequence):
for i in sequence:
yield i
await trio.sleep(0.1)
@tractor.context
async def echo_ctx_stream(
ctx: tractor.Context,
) -> None:
await ctx.started()
async with ctx.open_stream() as stream:
async for msg in stream:
await stream.send(msg)
def test_sigint_both_stream_types():
'''Verify that running a bi-directional and recv only stream
side-by-side will cancel correctly from SIGINT.
'''
timeout: float = 2
if is_win(): # smh
timeout += 1
async def main():
with trio.fail_after(timeout):
async with tractor.open_nursery() as n:
# name of this actor will be same as target func
portal = await n.start_actor(
'2_way',
enable_modules=[__name__]
)
async with portal.open_context(echo_ctx_stream) as (ctx, _):
async with ctx.open_stream() as stream:
async with portal.open_stream_from(
async_gen_stream,
sequence=list(range(1)),
) as gen_stream:
msg = await gen_stream.receive()
await stream.send(msg)
resp = await stream.receive()
assert resp == msg
raise KeyboardInterrupt
try:
trio.run(main)
assert 0, "Didn't receive KBI!?"
except KeyboardInterrupt:
pass
@tractor.context
async def inf_streamer(
ctx: tractor.Context,
) -> None:
'''
Stream increasing ints until terminated with a 'done' msg.
'''
await ctx.started()
async with (
ctx.open_stream() as stream,
trio.open_nursery() as n,
):
async def bail_on_sentinel():
async for msg in stream:
if msg == 'done':
await stream.aclose()
else:
print(f'streamer received {msg}')
# start termination detector
n.start_soon(bail_on_sentinel)
for val in itertools.count():
try:
await stream.send(val)
except trio.ClosedResourceError:
# close out the stream gracefully
break
print('terminating streamer')
def test_local_task_fanout_from_stream():
'''
Single stream with multiple local consumer tasks using the
``MsgStream.subscribe()`` api.
Ensure all tasks receive all values after stream completes sending.
'''
consumers = 22
async def main():
counts = Counter()
async with tractor.open_nursery() as tn:
p = await tn.start_actor(
'inf_streamer',
enable_modules=[__name__],
)
async with (
p.open_context(inf_streamer) as (ctx, _),
ctx.open_stream() as stream,
):
async def pull_and_count(name: str):
# name = trio.lowlevel.current_task().name
async with stream.subscribe() as recver:
assert isinstance(
recver,
tractor.trionics.BroadcastReceiver
)
async for val in recver:
# print(f'{name}: {val}')
counts[name] += 1
print(f'{name} bcaster ended')
print(f'{name} completed')
with trio.fail_after(3):
async with trio.open_nursery() as nurse:
for i in range(consumers):
nurse.start_soon(pull_and_count, i)
await trio.sleep(0.5)
print('\nterminating')
await stream.send('done')
print('closed stream connection')
assert len(counts) == consumers
mx = max(counts.values())
# make sure each task received all stream values
assert all(val == mx for val in counts.values())
await p.cancel_actor()
trio.run(main)

View File

@ -1,19 +1,42 @@
""" """
Cancellation and error propagation Cancellation and error propagation
""" """
import os
import signal
import platform
import time
from itertools import repeat from itertools import repeat
from exceptiongroup import (
BaseExceptionGroup,
ExceptionGroup,
)
import pytest import pytest
import trio import trio
import tractor import tractor
from conftest import tractor_test from conftest import tractor_test, no_windows
async def assert_err(): def is_win():
return platform.system() == 'Windows'
async def assert_err(delay=0):
await trio.sleep(delay)
assert 0 assert 0
async def sleep_forever():
await trio.sleep_forever()
async def do_nuthin():
# just nick the scheduler
await trio.sleep(0)
@pytest.mark.parametrize( @pytest.mark.parametrize(
'args_err', 'args_err',
[ [
@ -33,34 +56,60 @@ def test_remote_error(arb_addr, args_err):
args, errtype = args_err args, errtype = args_err
async def main(): async def main():
async with tractor.open_nursery() as nursery: async with tractor.open_nursery(
arbiter_addr=arb_addr,
) as nursery:
portal = await nursery.run_in_actor('errorer', assert_err, **args) # on a remote type error caused by bad input args
# this should raise directly which means we **don't** get
# an exception group outside the nursery since the error
# here and the far end task error are one in the same?
portal = await nursery.run_in_actor(
assert_err, name='errorer', **args
)
# get result(s) from main task # get result(s) from main task
try: try:
# this means the root actor will also raise a local
# parent task error and thus an eg will propagate out
# of this actor nursery.
await portal.result() await portal.result()
except tractor.RemoteActorError as err: except tractor.RemoteActorError as err:
assert err.type == errtype assert err.type == errtype
print("Look Maa that actor failed hard, hehh") print("Look Maa that actor failed hard, hehh")
raise raise
with pytest.raises(tractor.RemoteActorError) as excinfo: # ensure boxed errors
tractor.run(main, arbiter_addr=arb_addr) if args:
with pytest.raises(tractor.RemoteActorError) as excinfo:
trio.run(main)
# ensure boxed error is correct assert excinfo.value.type == errtype
assert excinfo.value.type == errtype
else:
# the root task will also error on the `.result()` call
# so we expect an error from there AND the child.
with pytest.raises(BaseExceptionGroup) as excinfo:
trio.run(main)
# ensure boxed errors
for exc in excinfo.value.exceptions:
assert exc.type == errtype
def test_multierror(arb_addr): def test_multierror(arb_addr):
"""Verify we raise a ``trio.MultiError`` out of a nursery where '''
Verify we raise a ``BaseExceptionGroup`` out of a nursery where
more than one actor errors. more than one actor errors.
"""
async def main():
async with tractor.open_nursery() as nursery:
await nursery.run_in_actor('errorer1', assert_err) '''
portal2 = await nursery.run_in_actor('errorer2', assert_err) async def main():
async with tractor.open_nursery(
arbiter_addr=arb_addr,
) as nursery:
await nursery.run_in_actor(assert_err, name='errorer1')
portal2 = await nursery.run_in_actor(assert_err, name='errorer2')
# get result(s) from main task # get result(s) from main task
try: try:
@ -70,35 +119,89 @@ def test_multierror(arb_addr):
print("Look Maa that first actor failed hard, hehh") print("Look Maa that first actor failed hard, hehh")
raise raise
# here we should get a `trio.MultiError` containing exceptions # here we should get a ``BaseExceptionGroup`` containing exceptions
# from both subactors # from both subactors
with pytest.raises(trio.MultiError): with pytest.raises(BaseExceptionGroup):
tractor.run(main, arbiter_addr=arb_addr) trio.run(main)
def do_nothing(): @pytest.mark.parametrize('delay', (0, 0.5))
@pytest.mark.parametrize(
'num_subactors', range(25, 26),
)
def test_multierror_fast_nursery(arb_addr, start_method, num_subactors, delay):
"""Verify we raise a ``BaseExceptionGroup`` out of a nursery where
more than one actor errors and also with a delay before failure
to test failure during an ongoing spawning.
"""
async def main():
async with tractor.open_nursery(
arbiter_addr=arb_addr,
) as nursery:
for i in range(num_subactors):
await nursery.run_in_actor(
assert_err,
name=f'errorer{i}',
delay=delay
)
# with pytest.raises(trio.MultiError) as exc_info:
with pytest.raises(BaseExceptionGroup) as exc_info:
trio.run(main)
assert exc_info.type == ExceptionGroup
err = exc_info.value
exceptions = err.exceptions
if len(exceptions) == 2:
# sometimes oddly now there's an embedded BrokenResourceError ?
for exc in exceptions:
excs = getattr(exc, 'exceptions', None)
if excs:
exceptions = excs
break
assert len(exceptions) == num_subactors
for exc in exceptions:
assert isinstance(exc, tractor.RemoteActorError)
assert exc.type == AssertionError
async def do_nothing():
pass pass
def test_cancel_single_subactor(arb_addr): @pytest.mark.parametrize('mechanism', ['nursery_cancel', KeyboardInterrupt])
def test_cancel_single_subactor(arb_addr, mechanism):
"""Ensure a ``ActorNursery.start_actor()`` spawned subactor """Ensure a ``ActorNursery.start_actor()`` spawned subactor
cancels when the nursery is cancelled. cancels when the nursery is cancelled.
""" """
async def spawn_actor(): async def spawn_actor():
"""Spawn an actor that blocks indefinitely. """Spawn an actor that blocks indefinitely.
""" """
async with tractor.open_nursery() as nursery: async with tractor.open_nursery(
arbiter_addr=arb_addr,
) as nursery:
portal = await nursery.start_actor( portal = await nursery.start_actor(
'nothin', rpc_module_paths=[__name__], 'nothin', enable_modules=[__name__],
) )
assert (await portal.run(__name__, 'do_nothing')) is None assert (await portal.run(do_nothing)) is None
# would hang otherwise if mechanism == 'nursery_cancel':
await nursery.cancel() # would hang otherwise
await nursery.cancel()
else:
raise mechanism
tractor.run(spawn_actor, arbiter_addr=arb_addr) if mechanism == 'nursery_cancel':
trio.run(spawn_actor)
else:
with pytest.raises(mechanism):
trio.run(spawn_actor)
async def stream_forever(): async def stream_forever():
@ -110,20 +213,21 @@ async def stream_forever():
@tractor_test @tractor_test
async def test_cancel_infinite_streamer(): async def test_cancel_infinite_streamer(start_method):
# stream for at most 1 second # stream for at most 1 second
with trio.move_on_after(1) as cancel_scope: with trio.move_on_after(1) as cancel_scope:
async with tractor.open_nursery() as n: async with tractor.open_nursery() as n:
portal = await n.start_actor( portal = await n.start_actor(
f'donny', 'donny',
rpc_module_paths=[__name__], enable_modules=[__name__],
) )
# this async for loop streams values from the above # this async for loop streams values from the above
# async generator running in a separate process # async generator running in a separate process
async for letter in await portal.run(__name__, 'stream_forever'): async with portal.open_stream_from(stream_forever) as stream:
print(letter) async for letter in stream:
print(letter)
# we support trio's cancellation system # we support trio's cancellation system
assert cancel_scope.cancelled_caught assert cancel_scope.cancelled_caught
@ -133,37 +237,89 @@ async def test_cancel_infinite_streamer():
@pytest.mark.parametrize( @pytest.mark.parametrize(
'num_actors_and_errs', 'num_actors_and_errs',
[ [
(1, tractor.RemoteActorError, AssertionError), # daemon actors sit idle while single task actors error out
(2, tractor.MultiError, AssertionError) (1, tractor.RemoteActorError, AssertionError, (assert_err, {}), None),
(2, BaseExceptionGroup, AssertionError, (assert_err, {}), None),
(3, BaseExceptionGroup, AssertionError, (assert_err, {}), None),
# 1 daemon actor errors out while single task actors sleep forever
(3, tractor.RemoteActorError, AssertionError, (sleep_forever, {}),
(assert_err, {}, True)),
# daemon actors error out after brief delay while single task
# actors complete quickly
(3, tractor.RemoteActorError, AssertionError,
(do_nuthin, {}), (assert_err, {'delay': 1}, True)),
# daemon complete quickly delay while single task
# actors error after brief delay
(3, BaseExceptionGroup, AssertionError,
(assert_err, {'delay': 1}), (do_nuthin, {}, False)),
],
ids=[
'1_run_in_actor_fails',
'2_run_in_actors_fail',
'3_run_in_actors_fail',
'1_daemon_actors_fail',
'1_daemon_actors_fail_all_run_in_actors_dun_quick',
'no_daemon_actors_fail_all_run_in_actors_sleep_then_fail',
], ],
ids=['one_actor', 'two_actors'],
) )
@tractor_test @tractor_test
async def test_some_cancels_all(num_actors_and_errs): async def test_some_cancels_all(num_actors_and_errs, start_method, loglevel):
"""Verify a subset of failed subactors causes all others in """Verify a subset of failed subactors causes all others in
the nursery to be cancelled just like the strategy in trio. the nursery to be cancelled just like the strategy in trio.
This is the first and only supervisory strategy at the moment. This is the first and only supervisory strategy at the moment.
""" """
num, first_err, err_type = num_actors_and_errs num_actors, first_err, err_type, ria_func, da_func = num_actors_and_errs
try: try:
async with tractor.open_nursery() as n: async with tractor.open_nursery() as n:
real_actors = []
for i in range(3): # spawn the same number of daemon actors which should be cancelled
real_actors.append(await n.start_actor( dactor_portals = []
f'actor_{i}', for i in range(num_actors):
rpc_module_paths=[__name__], dactor_portals.append(await n.start_actor(
f'deamon_{i}',
enable_modules=[__name__],
)) ))
for i in range(num): func, kwargs = ria_func
riactor_portals = []
for i in range(num_actors):
# start actor(s) that will fail immediately # start actor(s) that will fail immediately
await n.run_in_actor(f'extra_{i}', assert_err) riactor_portals.append(
await n.run_in_actor(
func,
name=f'actor_{i}',
**kwargs
)
)
if da_func:
func, kwargs, expect_error = da_func
for portal in dactor_portals:
# if this function fails then we should error here
# and the nursery should teardown all other actors
try:
await portal.run(func, **kwargs)
except tractor.RemoteActorError as err:
assert err.type == err_type
# we only expect this first error to propagate
# (all other daemons are cancelled before they
# can be scheduled)
num_actors = 1
# reraise so nursery teardown is triggered
raise
else:
if expect_error:
pytest.fail(
"Deamon call should fail at checkpoint?")
# should error here with a ``RemoteActorError`` or ``MultiError`` # should error here with a ``RemoteActorError`` or ``MultiError``
except first_err as err: except first_err as err:
if isinstance(err, tractor.MultiError): if isinstance(err, BaseExceptionGroup):
assert len(err.exceptions) == num assert len(err.exceptions) == num_actors
for exc in err.exceptions: for exc in err.exceptions:
if isinstance(exc, tractor.RemoteActorError): if isinstance(exc, tractor.RemoteActorError):
assert exc.type == err_type assert exc.type == err_type
@ -176,3 +332,270 @@ async def test_some_cancels_all(num_actors_and_errs):
assert not n._children assert not n._children
else: else:
pytest.fail("Should have gotten a remote assertion error?") pytest.fail("Should have gotten a remote assertion error?")
async def spawn_and_error(breadth, depth) -> None:
name = tractor.current_actor().name
async with tractor.open_nursery() as nursery:
for i in range(breadth):
if depth > 0:
args = (
spawn_and_error,
)
kwargs = {
'name': f'spawner_{i}_depth_{depth}',
'breadth': breadth,
'depth': depth - 1,
}
else:
args = (
assert_err,
)
kwargs = {
'name': f'{name}_errorer_{i}',
}
await nursery.run_in_actor(*args, **kwargs)
@tractor_test
async def test_nested_multierrors(loglevel, start_method):
'''
Test that failed actor sets are wrapped in `BaseExceptionGroup`s. This
test goes only 2 nurseries deep but we should eventually have tests
for arbitrary n-depth actor trees.
'''
if start_method == 'trio':
depth = 3
subactor_breadth = 2
else:
# XXX: multiprocessing can't seem to handle any more than 2-deep
# process trees for whatever reason.
# Any more process levels than this and we see bugs that cause
# hangs and broken pipes all over the place...
if start_method == 'forkserver':
pytest.skip("Forksever sux hard at nested spawning...")
depth = 1 # means an additional actor tree of spawning (2 levels deep)
subactor_breadth = 2
with trio.fail_after(120):
try:
async with tractor.open_nursery() as nursery:
for i in range(subactor_breadth):
await nursery.run_in_actor(
spawn_and_error,
name=f'spawner_{i}',
breadth=subactor_breadth,
depth=depth,
)
except BaseExceptionGroup as err:
assert len(err.exceptions) == subactor_breadth
for subexc in err.exceptions:
# verify first level actor errors are wrapped as remote
if is_win():
# windows is often too slow and cancellation seems
# to happen before an actor is spawned
if isinstance(subexc, trio.Cancelled):
continue
elif isinstance(subexc, tractor.RemoteActorError):
# on windows it seems we can't exactly be sure wtf
# will happen..
assert subexc.type in (
tractor.RemoteActorError,
trio.Cancelled,
BaseExceptionGroup,
)
elif isinstance(subexc, BaseExceptionGroup):
for subsub in subexc.exceptions:
if subsub in (tractor.RemoteActorError,):
subsub = subsub.type
assert type(subsub) in (
trio.Cancelled,
BaseExceptionGroup,
)
else:
assert isinstance(subexc, tractor.RemoteActorError)
if depth > 0 and subactor_breadth > 1:
# XXX not sure what's up with this..
# on windows sometimes spawning is just too slow and
# we get back the (sent) cancel signal instead
if is_win():
if isinstance(subexc, tractor.RemoteActorError):
assert subexc.type in (
BaseExceptionGroup,
tractor.RemoteActorError
)
else:
assert isinstance(subexc, BaseExceptionGroup)
else:
assert subexc.type is ExceptionGroup
else:
assert subexc.type in (
tractor.RemoteActorError,
trio.Cancelled
)
@no_windows
def test_cancel_via_SIGINT(
loglevel,
start_method,
spawn_backend,
):
"""Ensure that a control-C (SIGINT) signal cancels both the parent and
child processes in trionic fashion
"""
pid = os.getpid()
async def main():
with trio.fail_after(2):
async with tractor.open_nursery() as tn:
await tn.start_actor('sucka')
if 'mp' in spawn_backend:
time.sleep(0.1)
os.kill(pid, signal.SIGINT)
await trio.sleep_forever()
with pytest.raises(KeyboardInterrupt):
trio.run(main)
@no_windows
def test_cancel_via_SIGINT_other_task(
loglevel,
start_method,
spawn_backend,
):
"""Ensure that a control-C (SIGINT) signal cancels both the parent
and child processes in trionic fashion even when a subprocess is started
from a separate ``trio`` child task.
"""
pid = os.getpid()
timeout: float = 2
if is_win(): # smh
timeout += 1
async def spawn_and_sleep_forever(task_status=trio.TASK_STATUS_IGNORED):
async with tractor.open_nursery() as tn:
for i in range(3):
await tn.run_in_actor(
sleep_forever,
name='namesucka',
)
task_status.started()
await trio.sleep_forever()
async def main():
# should never timeout since SIGINT should cancel the current program
with trio.fail_after(timeout):
async with trio.open_nursery() as n:
await n.start(spawn_and_sleep_forever)
if 'mp' in spawn_backend:
time.sleep(0.1)
os.kill(pid, signal.SIGINT)
with pytest.raises(KeyboardInterrupt):
trio.run(main)
async def spin_for(period=3):
"Sync sleep."
time.sleep(period)
async def spawn():
async with tractor.open_nursery() as tn:
await tn.run_in_actor(
spin_for,
name='sleeper',
)
@no_windows
def test_cancel_while_childs_child_in_sync_sleep(
loglevel,
start_method,
spawn_backend,
):
"""Verify that a child cancelled while executing sync code is torn
down even when that cancellation is triggered by the parent
2 nurseries "up".
"""
if start_method == 'forkserver':
pytest.skip("Forksever sux hard at resuming from sync sleep...")
async def main():
with trio.fail_after(2):
async with tractor.open_nursery() as tn:
await tn.run_in_actor(
spawn,
name='spawn',
)
await trio.sleep(1)
assert 0
with pytest.raises(AssertionError):
trio.run(main)
def test_fast_graceful_cancel_when_spawn_task_in_soft_proc_wait_for_daemon(
start_method,
):
'''
This is a very subtle test which demonstrates how cancellation
during process collection can result in non-optimal teardown
performance on daemon actors. The fix for this test was to handle
``trio.Cancelled`` specially in the spawn task waiting in
`proc.wait()` such that ``Portal.cancel_actor()`` is called before
executing the "hard reap" sequence (which has an up to 3 second
delay currently).
In other words, if we can cancel the actor using a graceful remote
cancellation, and it's faster, we might as well do it.
'''
kbi_delay = 0.5
timeout: float = 2.9
if is_win(): # smh
timeout += 1
async def main():
start = time.time()
try:
async with trio.open_nursery() as nurse:
async with tractor.open_nursery() as tn:
p = await tn.start_actor(
'fast_boi',
enable_modules=[__name__],
)
async def delayed_kbi():
await trio.sleep(kbi_delay)
print(f'RAISING KBI after {kbi_delay} s')
raise KeyboardInterrupt
# start task which raises a kbi **after**
# the actor nursery ``__aexit__()`` has
# been run.
nurse.start_soon(delayed_kbi)
await p.run(do_nuthin)
finally:
duration = time.time() - start
if duration > timeout:
raise trio.TooSlowError(
'daemon cancel was slower than necessary..'
)
with pytest.raises(KeyboardInterrupt):
trio.run(main)

View File

@ -0,0 +1,173 @@
'''
Test a service style daemon that maintains a nursery for spawning
"remote async tasks" including both spawning other long living
sub-sub-actor daemons.
'''
from typing import Optional
import asyncio
from contextlib import asynccontextmanager as acm
import pytest
import trio
from trio_typing import TaskStatus
import tractor
from tractor import RemoteActorError
from async_generator import aclosing
async def aio_streamer(
from_trio: asyncio.Queue,
to_trio: trio.abc.SendChannel,
) -> trio.abc.ReceiveChannel:
# required first msg to sync caller
to_trio.send_nowait(None)
from itertools import cycle
for i in cycle(range(10)):
to_trio.send_nowait(i)
await asyncio.sleep(0.01)
async def trio_streamer():
from itertools import cycle
for i in cycle(range(10)):
yield i
await trio.sleep(0.01)
async def trio_sleep_and_err(delay: float = 0.5):
await trio.sleep(delay)
# name error
doggy() # noqa
_cached_stream: Optional[
trio.abc.ReceiveChannel
] = None
@acm
async def wrapper_mngr(
):
from tractor.trionics import broadcast_receiver
global _cached_stream
in_aio = tractor.current_actor().is_infected_aio()
if in_aio:
if _cached_stream:
from_aio = _cached_stream
# if we already have a cached feed deliver a rx side clone
# to consumer
async with broadcast_receiver(from_aio, 6) as from_aio:
yield from_aio
return
else:
async with tractor.to_asyncio.open_channel_from(
aio_streamer,
) as (first, from_aio):
assert not first
# cache it so next task uses broadcast receiver
_cached_stream = from_aio
yield from_aio
else:
async with aclosing(trio_streamer()) as stream:
# cache it so next task uses broadcast receiver
_cached_stream = stream
yield stream
_nursery: trio.Nursery = None
@tractor.context
async def trio_main(
ctx: tractor.Context,
):
# sync
await ctx.started()
# stash a "service nursery" as "actor local" (aka a Python global)
global _nursery
n = _nursery
assert n
async def consume_stream():
async with wrapper_mngr() as stream:
async for msg in stream:
print(msg)
# run 2 tasks to ensure broadcaster chan use
n.start_soon(consume_stream)
n.start_soon(consume_stream)
n.start_soon(trio_sleep_and_err)
await trio.sleep_forever()
@tractor.context
async def open_actor_local_nursery(
ctx: tractor.Context,
):
global _nursery
async with trio.open_nursery() as n:
_nursery = n
await ctx.started()
await trio.sleep(10)
# await trio.sleep(1)
# XXX: this causes the hang since
# the caller does not unblock from its own
# ``trio.sleep_forever()``.
# TODO: we need to test a simple ctx task starting remote tasks
# that error and then blocking on a ``Nursery.start()`` which
# never yields back.. aka a scenario where the
# ``tractor.context`` task IS NOT in the service n's cancel
# scope.
n.cancel_scope.cancel()
@pytest.mark.parametrize(
'asyncio_mode',
[True, False],
ids='asyncio_mode={}'.format,
)
def test_actor_managed_trio_nursery_task_error_cancels_aio(
asyncio_mode: bool,
arb_addr
):
'''
Verify that a ``trio`` nursery created and managed in a child actor
correctly relays errors to the parent actor when one of its spawned
tasks errors even when running in infected asyncio mode and using
broadcast receivers for multi-task-per-actor subscription.
'''
async def main():
# cancel the nursery shortly after boot
async with tractor.open_nursery() as n:
p = await n.start_actor(
'nursery_mngr',
infect_asyncio=asyncio_mode,
enable_modules=[__name__],
)
async with (
p.open_context(open_actor_local_nursery) as (ctx, first),
p.open_context(trio_main) as (ctx, first),
):
await trio.sleep_forever()
with pytest.raises(RemoteActorError) as excinfo:
trio.run(main)
# verify boxed error
err = excinfo.value
assert isinstance(err.type(), NameError)

View File

@ -0,0 +1,84 @@
import itertools
import pytest
import trio
import tractor
from tractor import open_actor_cluster
from tractor.trionics import gather_contexts
from conftest import tractor_test
MESSAGE = 'tractoring at full speed'
def test_empty_mngrs_input_raises() -> None:
async def main():
with trio.fail_after(1):
async with (
open_actor_cluster(
modules=[__name__],
# NOTE: ensure we can passthrough runtime opts
loglevel='info',
# debug_mode=True,
) as portals,
gather_contexts(
# NOTE: it's the use of inline-generator syntax
# here that causes the empty input.
mngrs=(
p.open_context(worker) for p in portals.values()
),
),
):
assert 0
with pytest.raises(ValueError):
trio.run(main)
@tractor.context
async def worker(
ctx: tractor.Context,
) -> None:
await ctx.started()
async with ctx.open_stream(
backpressure=True,
) as stream:
# TODO: this with the below assert causes a hang bug?
# with trio.move_on_after(1):
async for msg in stream:
# do something with msg
print(msg)
assert msg == MESSAGE
# TODO: does this ever cause a hang
# assert 0
@tractor_test
async def test_streaming_to_actor_cluster() -> None:
async with (
open_actor_cluster(modules=[__name__]) as portals,
gather_contexts(
mngrs=[p.open_context(worker) for p in portals.values()],
) as contexts,
gather_contexts(
mngrs=[ctx[0].open_stream() for ctx in contexts],
) as streams,
):
with trio.move_on_after(1):
for stream in itertools.cycle(streams):
await stream.send(MESSAGE)

View File

@ -0,0 +1,798 @@
'''
``async with ():`` inlined context-stream cancellation testing.
Verify that we raise errors when streams are opened prior to
sync-opening a ``tractor.Context``.
'''
from contextlib import asynccontextmanager as acm
from itertools import count
import platform
from typing import Optional
import pytest
import trio
import tractor
from tractor._exceptions import StreamOverrun
from conftest import tractor_test
# ``Context`` semantics are as follows,
# ------------------------------------
# - standard setup/teardown:
# ``Portal.open_context()`` starts a new
# remote task context in another actor. The target actor's task must
# call ``Context.started()`` to unblock this entry on the caller side.
# the callee task executes until complete and returns a final value
# which is delivered to the caller side and retrieved via
# ``Context.result()``.
# - cancel termination:
# context can be cancelled on either side where either end's task can
# call ``Context.cancel()`` which raises a local ``trio.Cancelled``
# and sends a task cancel request to the remote task which in turn
# raises a ``trio.Cancelled`` in that scope, catches it, and re-raises
# as ``ContextCancelled``. This is then caught by
# ``Portal.open_context()``'s exit and we get a graceful termination
# of the linked tasks.
# - error termination:
# error is caught after all context-cancel-scope tasks are cancelled
# via regular ``trio`` cancel scope semantics, error is sent to other
# side and unpacked as a `RemoteActorError`.
# ``Context.open_stream() as stream: MsgStream:`` msg semantics are:
# -----------------------------------------------------------------
# - either side can ``.send()`` which emits a 'yield' msg and delivers
# a value to a ``MsgStream.receive()`` call.
# - stream closure: one end relays a 'stop' message which terminates an
# ongoing ``MsgStream`` iteration.
# - cancel/error termination: as per the context semantics above but
# with implicit stream closure on the cancelling end.
_state: bool = False
@tractor.context
async def too_many_starteds(
ctx: tractor.Context,
) -> None:
'''
Call ``Context.started()`` more than once (an error).
'''
await ctx.started()
try:
await ctx.started()
except RuntimeError:
raise
@tractor.context
async def not_started_but_stream_opened(
ctx: tractor.Context,
) -> None:
'''
Enter ``Context.open_stream()`` without calling ``.started()``.
'''
try:
async with ctx.open_stream():
assert 0
except RuntimeError:
raise
@pytest.mark.parametrize(
'target',
[too_many_starteds, not_started_but_stream_opened],
ids='misuse_type={}'.format,
)
def test_started_misuse(target):
async def main():
async with tractor.open_nursery() as n:
portal = await n.start_actor(
target.__name__,
enable_modules=[__name__],
)
async with portal.open_context(target) as (ctx, sent):
await trio.sleep(1)
with pytest.raises(tractor.RemoteActorError):
trio.run(main)
@tractor.context
async def simple_setup_teardown(
ctx: tractor.Context,
data: int,
block_forever: bool = False,
) -> None:
# startup phase
global _state
_state = True
# signal to parent that we're up
await ctx.started(data + 1)
try:
if block_forever:
# block until cancelled
await trio.sleep_forever()
else:
return 'yo'
finally:
_state = False
async def assert_state(value: bool):
global _state
assert _state == value
@pytest.mark.parametrize(
'error_parent',
[False, ValueError, KeyboardInterrupt],
)
@pytest.mark.parametrize(
'callee_blocks_forever',
[False, True],
ids=lambda item: f'callee_blocks_forever={item}'
)
@pytest.mark.parametrize(
'pointlessly_open_stream',
[False, True],
ids=lambda item: f'open_stream={item}'
)
def test_simple_context(
error_parent,
callee_blocks_forever,
pointlessly_open_stream,
):
timeout = 1.5 if not platform.system() == 'Windows' else 4
async def main():
with trio.fail_after(timeout):
async with tractor.open_nursery() as nursery:
portal = await nursery.start_actor(
'simple_context',
enable_modules=[__name__],
)
try:
async with portal.open_context(
simple_setup_teardown,
data=10,
block_forever=callee_blocks_forever,
) as (ctx, sent):
assert sent == 11
if callee_blocks_forever:
await portal.run(assert_state, value=True)
else:
assert await ctx.result() == 'yo'
if not error_parent:
await ctx.cancel()
if pointlessly_open_stream:
async with ctx.open_stream():
if error_parent:
raise error_parent
if callee_blocks_forever:
await ctx.cancel()
else:
# in this case the stream will send a
# 'stop' msg to the far end which needs
# to be ignored
pass
else:
if error_parent:
raise error_parent
finally:
# after cancellation
if not error_parent:
await portal.run(assert_state, value=False)
# shut down daemon
await portal.cancel_actor()
if error_parent:
try:
trio.run(main)
except error_parent:
pass
except trio.MultiError as me:
# XXX: on windows it seems we may have to expect the group error
from tractor._exceptions import is_multi_cancelled
assert is_multi_cancelled(me)
else:
trio.run(main)
# basic stream terminations:
# - callee context closes without using stream
# - caller context closes without using stream
# - caller context calls `Context.cancel()` while streaming
# is ongoing resulting in callee being cancelled
# - callee calls `Context.cancel()` while streaming and caller
# sees stream terminated in `RemoteActorError`
# TODO: future possible features
# - restart request: far end raises `ContextRestart`
@tractor.context
async def close_ctx_immediately(
ctx: tractor.Context,
) -> None:
await ctx.started()
global _state
async with ctx.open_stream():
pass
@tractor_test
async def test_callee_closes_ctx_after_stream_open():
'callee context closes without using stream'
async with tractor.open_nursery() as n:
portal = await n.start_actor(
'fast_stream_closer',
enable_modules=[__name__],
)
with trio.fail_after(2):
async with portal.open_context(
close_ctx_immediately,
# flag to avoid waiting the final result
# cancel_on_exit=True,
) as (ctx, sent):
assert sent is None
with trio.fail_after(0.5):
async with ctx.open_stream() as stream:
# should fall through since ``StopAsyncIteration``
# should be raised through translation of
# a ``trio.EndOfChannel`` by
# ``trio.abc.ReceiveChannel.__anext__()``
async for _ in stream:
assert 0
else:
# verify stream is now closed
try:
await stream.receive()
except trio.EndOfChannel:
pass
            # TODO: should we just raise the closed resource err
            # directly here to enforce not allowing a re-open
            # of a stream to the context (at least until if/when
            # we decide that's a good idea)?
try:
with trio.fail_after(0.5):
async with ctx.open_stream() as stream:
pass
except trio.ClosedResourceError:
pass
await portal.cancel_actor()
@tractor.context
async def expect_cancelled(
ctx: tractor.Context,
) -> None:
global _state
_state = True
await ctx.started()
try:
async with ctx.open_stream() as stream:
async for msg in stream:
await stream.send(msg) # echo server
except trio.Cancelled:
# expected case
_state = False
raise
else:
assert 0, "Wasn't cancelled!?"
@pytest.mark.parametrize(
'use_ctx_cancel_method',
[False, True],
)
@tractor_test
async def test_caller_closes_ctx_after_callee_opens_stream(
use_ctx_cancel_method: bool,
):
'caller context closes without using stream'
async with tractor.open_nursery() as n:
portal = await n.start_actor(
'ctx_cancelled',
enable_modules=[__name__],
)
async with portal.open_context(
expect_cancelled,
) as (ctx, sent):
await portal.run(assert_state, value=True)
assert sent is None
# call cancel explicitly
if use_ctx_cancel_method:
await ctx.cancel()
try:
async with ctx.open_stream() as stream:
async for msg in stream:
pass
except tractor.ContextCancelled:
raise # XXX: must be propagated to __aexit__
else:
assert 0, "Should have context cancelled?"
# channel should still be up
assert portal.channel.connected()
# ctx is closed here
await portal.run(assert_state, value=False)
else:
try:
with trio.fail_after(0.2):
await ctx.result()
assert 0, "Callee should have blocked!?"
except trio.TooSlowError:
await ctx.cancel()
try:
async with ctx.open_stream() as stream:
async for msg in stream:
pass
except tractor.ContextCancelled:
pass
else:
assert 0, "Should have received closed resource error?"
# ctx is closed here
await portal.run(assert_state, value=False)
# channel should not have been destroyed yet, only the
# inter-actor-task context
assert portal.channel.connected()
# teardown the actor
await portal.cancel_actor()
@tractor_test
async def test_multitask_caller_cancels_from_nonroot_task():
async with tractor.open_nursery() as n:
portal = await n.start_actor(
'ctx_cancelled',
enable_modules=[__name__],
)
async with portal.open_context(
expect_cancelled,
) as (ctx, sent):
await portal.run(assert_state, value=True)
assert sent is None
async with ctx.open_stream() as stream:
async def send_msg_then_cancel():
await stream.send('yo')
await portal.run(assert_state, value=True)
await ctx.cancel()
await portal.run(assert_state, value=False)
async with trio.open_nursery() as n:
n.start_soon(send_msg_then_cancel)
try:
async for msg in stream:
assert msg == 'yo'
except tractor.ContextCancelled:
raise # XXX: must be propagated to __aexit__
# channel should still be up
assert portal.channel.connected()
# ctx is closed here
await portal.run(assert_state, value=False)
# channel should not have been destroyed yet, only the
# inter-actor-task context
assert portal.channel.connected()
# teardown the actor
await portal.cancel_actor()
@tractor.context
async def cancel_self(
ctx: tractor.Context,
) -> None:
global _state
_state = True
await ctx.cancel()
# should inline raise immediately
try:
async with ctx.open_stream():
pass
except tractor.ContextCancelled:
# suppress for now so we can do checkpoint tests below
pass
else:
raise RuntimeError('Context didnt cancel itself?!')
# check a real ``trio.Cancelled`` is raised on a checkpoint
try:
with trio.fail_after(0.1):
await trio.sleep_forever()
except trio.Cancelled:
raise
except trio.TooSlowError:
# should never get here
assert 0
@tractor_test
async def test_callee_cancels_before_started():
'''
    Callee calls `Context.cancel()` before even calling `.started()`
    and the caller sees the context terminate with `ContextCancelled`.
'''
async with tractor.open_nursery() as n:
portal = await n.start_actor(
'cancels_self',
enable_modules=[__name__],
)
try:
async with portal.open_context(
cancel_self,
) as (ctx, sent):
async with ctx.open_stream():
await trio.sleep_forever()
# raises a special cancel signal
except tractor.ContextCancelled as ce:
        assert ce.type == trio.Cancelled
# the traceback should be informative
assert 'cancelled itself' in ce.msgdata['tb_str']
# teardown the actor
await portal.cancel_actor()
@tractor.context
async def never_open_stream(
ctx: tractor.Context,
) -> None:
'''
Context which never opens a stream and blocks.
'''
await ctx.started()
await trio.sleep_forever()
@tractor.context
async def keep_sending_from_callee(
ctx: tractor.Context,
msg_buffer_size: Optional[int] = None,
) -> None:
'''
    Send endlessly on the callee stream.
'''
await ctx.started()
async with ctx.open_stream(
msg_buffer_size=msg_buffer_size,
) as stream:
for msg in count():
print(f'callee sending {msg}')
await stream.send(msg)
await trio.sleep(0.01)
@pytest.mark.parametrize(
'overrun_by',
[
('caller', 1, never_open_stream),
('cancel_caller_during_overrun', 1, never_open_stream),
('callee', 0, keep_sending_from_callee),
],
ids='overrun_condition={}'.format,
)
def test_one_end_stream_not_opened(overrun_by):
'''
This should exemplify the bug from:
https://github.com/goodboy/tractor/issues/265
'''
overrunner, buf_size_increase, entrypoint = overrun_by
from tractor._runtime import Actor
buf_size = buf_size_increase + Actor.msg_buffer_size
async def main():
async with tractor.open_nursery() as n:
portal = await n.start_actor(
entrypoint.__name__,
enable_modules=[__name__],
)
async with portal.open_context(
entrypoint,
) as (ctx, sent):
assert sent is None
if 'caller' in overrunner:
async with ctx.open_stream() as stream:
for i in range(buf_size):
print(f'sending {i}')
await stream.send(i)
if 'cancel' in overrunner:
# without this we block waiting on the child side
await ctx.cancel()
else:
# expect overrun error to be relayed back
# and this sleep interrupted
await trio.sleep_forever()
else:
# callee overruns caller case so we do nothing here
await trio.sleep_forever()
await portal.cancel_actor()
# 2 overrun cases and the no overrun case (which pushes right up to
# the msg limit)
    if overrunner == 'caller' or 'cancel' in overrunner:
with pytest.raises(tractor.RemoteActorError) as excinfo:
trio.run(main)
assert excinfo.value.type == StreamOverrun
elif overrunner == 'callee':
with pytest.raises(tractor.RemoteActorError) as excinfo:
trio.run(main)
# TODO: embedded remote errors so that we can verify the source
# error? the callee delivers an error which is an overrun
# wrapped in a remote actor error.
assert excinfo.value.type == tractor.RemoteActorError
else:
trio.run(main)
@tractor.context
async def echo_back_sequence(
ctx: tractor.Context,
seq: list[int],
msg_buffer_size: Optional[int] = None,
) -> None:
'''
    Echo back each batch of msgs received on the callee stream.
'''
await ctx.started()
async with ctx.open_stream(
msg_buffer_size=msg_buffer_size,
) as stream:
seq = list(seq) # bleh, `msgpack`...
count = 0
while count < 3:
batch = []
async for msg in stream:
batch.append(msg)
if batch == seq:
break
for msg in batch:
print(f'callee sending {msg}')
await stream.send(msg)
count += 1
return 'yo'
def test_stream_backpressure():
'''
Demonstrate small overruns of each task back and forth
on a stream not raising any errors by default.
'''
async def main():
async with tractor.open_nursery() as n:
portal = await n.start_actor(
'callee_sends_forever',
enable_modules=[__name__],
)
seq = list(range(3))
async with portal.open_context(
echo_back_sequence,
seq=seq,
msg_buffer_size=1,
) as (ctx, sent):
assert sent is None
async with ctx.open_stream(msg_buffer_size=1) as stream:
count = 0
while count < 3:
for msg in seq:
print(f'caller sending {msg}')
await stream.send(msg)
await trio.sleep(0.1)
batch = []
async for msg in stream:
batch.append(msg)
if batch == seq:
break
count += 1
# here the context should return
assert await ctx.result() == 'yo'
# cancel the daemon
await portal.cancel_actor()
trio.run(main)
@tractor.context
async def sleep_forever(
ctx: tractor.Context,
) -> None:
await ctx.started()
async with ctx.open_stream():
await trio.sleep_forever()
@acm
async def attach_to_sleep_forever():
'''
Cancel a context **before** any underlying error is raised in order
to trigger a local reception of a ``ContextCancelled`` which **should not**
be re-raised in the local surrounding ``Context`` *iff* the cancel was
requested by **this** side of the context.
'''
async with tractor.wait_for_actor('sleeper') as p2:
async with (
p2.open_context(sleep_forever) as (peer_ctx, first),
peer_ctx.open_stream(),
):
try:
yield
finally:
# XXX: previously this would trigger local
# ``ContextCancelled`` to be received and raised in the
# local context overriding any local error due to
# logic inside ``_invoke()`` which checked for
            # an error set on ``Context._error`` and raised it
            # under a cancellation scenario.
# The problem is you can have a remote cancellation
# that is part of a local error and we shouldn't raise
# ``ContextCancelled`` **iff** we weren't the side of
# the context to initiate it, i.e.
# ``Context._cancel_called`` should **NOT** have been
# set. The special logic to handle this case is now
# inside ``Context._may_raise_from_remote_msg()`` XD
await peer_ctx.cancel()
@tractor.context
async def error_before_started(
ctx: tractor.Context,
) -> None:
'''
This simulates exactly an original bug discovered in:
https://github.com/pikers/piker/issues/244
'''
async with attach_to_sleep_forever():
# send an unserializable type which should raise a type error
# here and **NOT BE SWALLOWED** by the surrounding acm!!?!
await ctx.started(object())
def test_do_not_swallow_error_before_started_by_remote_contextcancelled():
'''
    Verify that an error raised in a remote context which itself opens
    another remote context, which it cancels, does not override the
    original error that caused the cancellation of the secondary context.
'''
async def main():
async with tractor.open_nursery() as n:
portal = await n.start_actor(
'errorer',
enable_modules=[__name__],
)
await n.start_actor(
'sleeper',
enable_modules=[__name__],
)
async with (
portal.open_context(
error_before_started
) as (ctx, sent),
):
await trio.sleep_forever()
with pytest.raises(tractor.RemoteActorError) as excinfo:
trio.run(main)
assert excinfo.value.type == TypeError

View File

@ -0,0 +1,933 @@
"""
That "native" debug mode better work!
All these tests can be understood (somewhat) by running the equivalent
`examples/debugging/` scripts manually.
TODO:
- none of these tests have been run successfully on windows yet but
there's been manual testing that verified it works.
- wonder if any of it'll work on OS X?
"""
import itertools
from os import path
from typing import Optional
import platform
import pathlib
import sys
import time
import pytest
import pexpect
from pexpect.exceptions import (
TIMEOUT,
EOF,
)
from conftest import (
examples_dir,
_ci_env,
)
# TODO: The next great debugger audit could be done by you!
# - recurrent entry to breakpoint() from a single actor *after* an
#   error in another task?
# - root error before child errors
# - root error after child errors
# - root error before child breakpoint
# - root error after child breakpoint
# - recurrent root errors
if platform.system() == 'Windows':
pytest.skip(
'Debugger tests have no windows support (yet)',
allow_module_level=True,
)
def mk_cmd(ex_name: str) -> str:
'''
Generate a command suitable to pass to ``pexpect.spawn()``.
'''
script_path: pathlib.Path = examples_dir() / 'debugging' / f'{ex_name}.py'
return ' '.join(['python', str(script_path)])
# TODO: was trying to use this xfail style but there's some weird bug
# i see in CI that's happening at collect time.. pretty soon gonna
# dump actions i'm thinkin...
# in CI we skip tests which spawn >= depth-1 actor trees due to there
# still being an outstanding issue with relaying the debug-mode-state
# through intermediary parents.
has_nested_actors = pytest.mark.has_nested_actors
# .xfail(
# os.environ.get('CI', False),
# reason=(
# 'This test uses nested actors and fails in CI\n'
# 'The test seems to run fine locally but until we solve the '
# 'following issue this CI test will be xfail:\n'
# 'https://github.com/goodboy/tractor/issues/320'
# )
# )
@pytest.fixture
def spawn(
start_method,
testdir,
arb_addr,
) -> 'pexpect.spawn':
if start_method != 'trio':
pytest.skip(
"Debugger tests are only supported on the trio backend"
)
def _spawn(cmd):
return testdir.spawn(
cmd=mk_cmd(cmd),
expect_timeout=3,
)
return _spawn
PROMPT = r"\(Pdb\+\)"
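# NOTE: `pdbp` renders its prompt as `(Pdb+)`, hence the escaped `+`.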
def expect(
child,
# prompt by default
patt: str = PROMPT,
**kwargs,
) -> None:
'''
Expect wrapper that prints last seen console
data before failing.
'''
try:
child.expect(
patt,
**kwargs,
)
except TIMEOUT:
before = str(child.before.decode())
print(before)
raise
def assert_before(
child,
patts: list[str],
) -> None:
before = str(child.before.decode())
for patt in patts:
try:
assert patt in before
except AssertionError:
print(before)
raise
@pytest.fixture(
params=[False, True],
ids='ctl-c={}'.format,
)
def ctlc(
request,
ci_env: bool,
) -> bool:
use_ctlc = request.param
node = request.node
markers = node.own_markers
for mark in markers:
if mark.name == 'has_nested_actors':
pytest.skip(
            f'Test {node} has nested actors and fails with Ctrl-C.\n'
            'The test can sometimes run fine locally but until we '
            'solve this issue this CI test will be xfail:\n'
            'https://github.com/goodboy/tractor/issues/320'
)
if use_ctlc:
        # XXX: disable pygments highlighting for auto-tests
        # since some envs (like actions CI) will struggle
        # with the added color-char encoding..
from tractor._debug import TractorConfig
TractorConfig.use_pygements = False
yield use_ctlc
@pytest.mark.parametrize(
'user_in_out',
[
('c', 'AssertionError'),
('q', 'AssertionError'),
],
ids=lambda item: f'{item[0]} -> {item[1]}',
)
def test_root_actor_error(spawn, user_in_out):
'''
Demonstrate crash handler entering pdb from basic error in root actor.
'''
user_input, expect_err_str = user_in_out
child = spawn('root_actor_error')
# scan for the prompt
expect(child, PROMPT)
before = str(child.before.decode())
# make sure expected logging and error arrives
assert "Attaching to pdb in crashed actor: ('root'" in before
assert 'AssertionError' in before
# send user command
child.sendline(user_input)
# process should exit
expect(child, EOF)
assert expect_err_str in str(child.before)
@pytest.mark.parametrize(
'user_in_out',
[
('c', None),
('q', 'bdb.BdbQuit'),
],
ids=lambda item: f'{item[0]} -> {item[1]}',
)
def test_root_actor_bp(spawn, user_in_out):
"""Demonstrate breakpoint from in root actor.
"""
user_input, expect_err_str = user_in_out
child = spawn('root_actor_breakpoint')
# scan for the prompt
child.expect(PROMPT)
assert 'Error' not in str(child.before)
# send user command
child.sendline(user_input)
child.expect('\r\n')
# process should exit
child.expect(pexpect.EOF)
if expect_err_str is None:
assert 'Error' not in str(child.before)
else:
assert expect_err_str in str(child.before)
def do_ctlc(
child,
count: int = 3,
delay: float = 0.1,
patt: Optional[str] = None,
# expect repl UX to reprint the prompt after every
# ctrl-c send.
# XXX: no idea but, in CI this never seems to work even on 3.10 so
# needs some further investigation potentially...
expect_prompt: bool = not _ci_env,
) -> None:
# make sure ctl-c sends don't do anything but repeat output
for _ in range(count):
time.sleep(delay)
child.sendcontrol('c')
# TODO: figure out why this makes CI fail..
# if you run this test manually it works just fine..
if expect_prompt:
before = str(child.before.decode())
time.sleep(delay)
child.expect(PROMPT)
time.sleep(delay)
if patt:
# should see the last line on console
assert patt in before
def test_root_actor_bp_forever(
spawn,
ctlc: bool,
):
"Re-enter a breakpoint from the root actor-task."
child = spawn('root_actor_breakpoint_forever')
# do some "next" commands to demonstrate recurrent breakpoint
# entries
for _ in range(10):
child.expect(PROMPT)
if ctlc:
do_ctlc(child)
child.sendline('next')
# do one continue which should trigger a
# new task to lock the tty
child.sendline('continue')
child.expect(PROMPT)
# seems that if we hit ctrl-c too fast the
# sigint guard machinery might not kick in..
time.sleep(0.001)
if ctlc:
do_ctlc(child)
# XXX: this previously caused a bug!
child.sendline('n')
child.expect(PROMPT)
child.sendline('n')
child.expect(PROMPT)
# quit out of the loop
child.sendline('q')
child.expect(pexpect.EOF)
@pytest.mark.parametrize(
'do_next',
(True, False),
ids='do_next={}'.format,
)
def test_subactor_error(
spawn,
ctlc: bool,
do_next: bool,
):
'''
Single subactor raising an error
'''
child = spawn('subactor_error')
# scan for the prompt
child.expect(PROMPT)
before = str(child.before.decode())
assert "Attaching to pdb in crashed actor: ('name_error'" in before
if do_next:
child.sendline('n')
else:
# make sure ctl-c sends don't do anything but repeat output
if ctlc:
do_ctlc(
child,
)
    # send user command and (in this case it's the same for 'continue'
    # vs. 'quit') the debugger should enter a second time in the
    # nursery-creating actor
child.sendline('continue')
child.expect(PROMPT)
before = str(child.before.decode())
# root actor gets debugger engaged
assert "Attaching to pdb in crashed actor: ('root'" in before
# error is a remote error propagated from the subactor
assert "RemoteActorError: ('name_error'" in before
# another round
if ctlc:
do_ctlc(child)
child.sendline('c')
child.expect('\r\n')
# process should exit
child.expect(pexpect.EOF)
def test_subactor_breakpoint(
spawn,
ctlc: bool,
):
"Single subactor with an infinite breakpoint loop"
child = spawn('subactor_breakpoint')
# scan for the prompt
child.expect(PROMPT)
before = str(child.before.decode())
assert "Attaching pdb to actor: ('breakpoint_forever'" in before
# do some "next" commands to demonstrate recurrent breakpoint
# entries
for _ in range(10):
child.sendline('next')
child.expect(PROMPT)
if ctlc:
do_ctlc(child)
# now run some "continues" to show re-entries
for _ in range(5):
child.sendline('continue')
child.expect(PROMPT)
before = str(child.before.decode())
assert "Attaching pdb to actor: ('breakpoint_forever'" in before
if ctlc:
do_ctlc(child)
# finally quit the loop
child.sendline('q')
# child process should exit but parent will capture pdb.BdbQuit
child.expect(PROMPT)
before = str(child.before.decode())
assert "RemoteActorError: ('breakpoint_forever'" in before
assert 'bdb.BdbQuit' in before
if ctlc:
do_ctlc(child)
# quit the parent
child.sendline('c')
# process should exit
child.expect(pexpect.EOF)
before = str(child.before.decode())
assert "RemoteActorError: ('breakpoint_forever'" in before
assert 'bdb.BdbQuit' in before
@has_nested_actors
def test_multi_subactors(
spawn,
ctlc: bool,
):
'''
Multiple subactors, both erroring and
breakpointing as well as a nested subactor erroring.
'''
child = spawn(r'multi_subactors')
# scan for the prompt
child.expect(PROMPT)
before = str(child.before.decode())
assert "Attaching pdb to actor: ('breakpoint_forever'" in before
if ctlc:
do_ctlc(child)
# do some "next" commands to demonstrate recurrent breakpoint
# entries
for _ in range(10):
child.sendline('next')
child.expect(PROMPT)
if ctlc:
do_ctlc(child)
# continue to next error
child.sendline('c')
# first name_error failure
child.expect(PROMPT)
before = str(child.before.decode())
assert "Attaching to pdb in crashed actor: ('name_error'" in before
assert "NameError" in before
if ctlc:
do_ctlc(child)
# continue again
child.sendline('c')
# 2nd name_error failure
child.expect(PROMPT)
# TODO: will we ever get the race where this crash will show up?
# blocklist strat now prevents this crash
# assert_before(child, [
# "Attaching to pdb in crashed actor: ('name_error_1'",
# "NameError",
# ])
if ctlc:
do_ctlc(child)
# breakpoint loop should re-engage
child.sendline('c')
child.expect(PROMPT)
before = str(child.before.decode())
assert "Attaching pdb to actor: ('breakpoint_forever'" in before
if ctlc:
do_ctlc(child)
# wait for spawn error to show up
spawn_err = "Attaching to pdb in crashed actor: ('spawn_error'"
start = time.time()
while (
spawn_err not in before
and (time.time() - start) < 3 # timeout eventually
):
child.sendline('c')
time.sleep(0.1)
child.expect(PROMPT)
before = str(child.before.decode())
if ctlc:
do_ctlc(child)
# 2nd depth nursery should trigger
# (XXX: this below if guard is technically a hack that makes the
# nested case seem to work locally on linux but ideally in the long
# run this can be dropped.)
if not ctlc:
assert_before(child, [
spawn_err,
"RemoteActorError: ('name_error_1'",
])
# now run some "continues" to show re-entries
for _ in range(5):
child.sendline('c')
child.expect(PROMPT)
# quit the loop and expect parent to attach
child.sendline('q')
child.expect(PROMPT)
before = str(child.before.decode())
assert_before(child, [
# debugger attaches to root
"Attaching to pdb in crashed actor: ('root'",
# expect a multierror with exceptions for each sub-actor
"RemoteActorError: ('breakpoint_forever'",
"RemoteActorError: ('name_error'",
"RemoteActorError: ('spawn_error'",
"RemoteActorError: ('name_error_1'",
'bdb.BdbQuit',
])
if ctlc:
do_ctlc(child)
# process should exit
child.sendline('c')
child.expect(pexpect.EOF)
# repeat of previous multierror for final output
assert_before(child, [
"RemoteActorError: ('breakpoint_forever'",
"RemoteActorError: ('name_error'",
"RemoteActorError: ('spawn_error'",
"RemoteActorError: ('name_error_1'",
'bdb.BdbQuit',
])
def test_multi_daemon_subactors(
spawn,
loglevel: str,
ctlc: bool
):
'''
Multiple daemon subactors, both erroring and breakpointing within a
stream.
'''
child = spawn('multi_daemon_subactors')
child.expect(PROMPT)
# there can be a race for which subactor will acquire
# the root's tty lock first so anticipate either crash
# message on the first entry.
bp_forever_msg = "Attaching pdb to actor: ('bp_forever'"
name_error_msg = "NameError: name 'doggypants' is not defined"
before = str(child.before.decode())
if bp_forever_msg in before:
next_msg = name_error_msg
elif name_error_msg in before:
next_msg = bp_forever_msg
else:
raise ValueError("Neither log msg was found !?")
if ctlc:
do_ctlc(child)
# NOTE: previously since we did not have clobber prevention
# in the root actor this final resume could result in the debugger
# tearing down since both child actors would be cancelled and it was
# unlikely that `bp_forever` would re-acquire the tty lock again.
# Now, we should have a final resumption in the root plus a possible
# second entry by `bp_forever`.
child.sendline('c')
child.expect(PROMPT)
assert_before(child, [next_msg])
# XXX: hooray the root clobbering the child here was fixed!
# IMO, this demonstrates the true power of SC system design.
# now the root actor won't clobber the bp_forever child
# during it's first access to the debug lock, but will instead
# wait for the lock to release, by the edge triggered
# ``_debug.Lock.no_remote_has_tty`` event before sending cancel messages
# (via portals) to its underlings B)
# at some point here there should have been some warning msg from
# the root announcing it avoided a clobber of the child's lock, but
    # it seems unreliable in testing here to grab it:
# assert "in use by child ('bp_forever'," in before
if ctlc:
do_ctlc(child)
# expect another breakpoint actor entry
child.sendline('c')
child.expect(PROMPT)
try:
assert_before(child, [bp_forever_msg])
except AssertionError:
assert_before(child, [name_error_msg])
else:
if ctlc:
do_ctlc(child)
# should crash with the 2nd name error (simulates
# a retry) and then the root eventually (boxed) errors
# after 1 or more further bp actor entries.
child.sendline('c')
child.expect(PROMPT)
assert_before(child, [name_error_msg])
# wait for final error in root
    # where it crashes with a boxed error
while True:
try:
child.sendline('c')
child.expect(PROMPT)
assert_before(
child,
[bp_forever_msg]
)
except AssertionError:
break
assert_before(
child,
[
# boxed error raised in root task
"Attaching to pdb in crashed actor: ('root'",
"_exceptions.RemoteActorError: ('name_error'",
]
)
child.sendline('c')
child.expect(pexpect.EOF)
@has_nested_actors
def test_multi_subactors_root_errors(
spawn,
ctlc: bool
):
'''
Multiple subactors, both erroring and breakpointing as well as
a nested subactor erroring.
'''
child = spawn('multi_subactor_root_errors')
# scan for the prompt
child.expect(PROMPT)
# at most one subactor should attach before the root is cancelled
before = str(child.before.decode())
assert "NameError: name 'doggypants' is not defined" in before
if ctlc:
do_ctlc(child)
# continue again to catch 2nd name error from
# actor 'name_error_1' (which is 2nd depth).
child.sendline('c')
# due to block list strat from #337, this will no longer
# propagate before the root errors and cancels the spawner sub-tree.
child.expect(PROMPT)
# only if the blocking condition doesn't kick in fast enough
before = str(child.before.decode())
if "Debug lock blocked for ['name_error_1'" not in before:
assert_before(child, [
"Attaching to pdb in crashed actor: ('name_error_1'",
"NameError",
])
if ctlc:
do_ctlc(child)
child.sendline('c')
child.expect(PROMPT)
# check if the spawner crashed or was blocked from debug
# and if this intermediary attached check the boxed error
before = str(child.before.decode())
if "Attaching to pdb in crashed actor: ('spawn_error'" in before:
assert_before(child, [
# boxed error from spawner's child
"RemoteActorError: ('name_error_1'",
"NameError",
])
if ctlc:
do_ctlc(child)
child.sendline('c')
child.expect(PROMPT)
# expect a root actor crash
assert_before(child, [
"RemoteActorError: ('name_error'",
"NameError",
# error from root actor and root task that created top level nursery
"Attaching to pdb in crashed actor: ('root'",
"AssertionError",
])
child.sendline('c')
child.expect(pexpect.EOF)
assert_before(child, [
# "Attaching to pdb in crashed actor: ('root'",
# boxed error from previous step
"RemoteActorError: ('name_error'",
"NameError",
"AssertionError",
'assert 0',
])
@has_nested_actors
def test_multi_nested_subactors_error_through_nurseries(
spawn,
# TODO: address debugger issue for nested tree:
# https://github.com/goodboy/tractor/issues/320
# ctlc: bool,
):
"""Verify deeply nested actors that error trigger debugger entries
at each actor nurserly (level) all the way up the tree.
"""
# NOTE: previously, inside this script was a bug where if the
# parent errors before a 2-levels-lower actor has released the lock,
# the parent tries to cancel it but it's stuck in the debugger?
# A test (below) has now been added to explicitly verify this is
# fixed.
child = spawn('multi_nested_subactors_error_up_through_nurseries')
timed_out_early: bool = False
for send_char in itertools.cycle(['c', 'q']):
try:
child.expect(PROMPT)
child.sendline(send_char)
time.sleep(0.01)
except EOF:
break
assert_before(child, [
# boxed source errors
"NameError: name 'doggypants' is not defined",
"tractor._exceptions.RemoteActorError: ('name_error'",
"bdb.BdbQuit",
# first level subtrees
"tractor._exceptions.RemoteActorError: ('spawner0'",
# "tractor._exceptions.RemoteActorError: ('spawner1'",
# propagation of errors up through nested subtrees
"tractor._exceptions.RemoteActorError: ('spawn_until_0'",
"tractor._exceptions.RemoteActorError: ('spawn_until_1'",
"tractor._exceptions.RemoteActorError: ('spawn_until_2'",
])
@pytest.mark.timeout(15)
@has_nested_actors
def test_root_nursery_cancels_before_child_releases_tty_lock(
spawn,
start_method,
ctlc: bool,
):
'''
Test that when the root sends a cancel message before a nested child
has unblocked (which can happen when it has the tty lock and is
engaged in pdb) it is indeed cancelled after exiting the debugger.
'''
timed_out_early = False
child = spawn('root_cancelled_but_child_is_in_tty_lock')
child.expect(PROMPT)
before = str(child.before.decode())
assert "NameError: name 'doggypants' is not defined" in before
assert "tractor._exceptions.RemoteActorError: ('name_error'" not in before
time.sleep(0.5)
if ctlc:
do_ctlc(child)
child.sendline('c')
for i in range(4):
time.sleep(0.5)
try:
child.expect(PROMPT)
except (
EOF,
TIMEOUT,
):
# races all over..
print(f"Failed early on {i}?")
before = str(child.before.decode())
timed_out_early = True
# race conditions on how fast the continue is sent?
break
before = str(child.before.decode())
assert "NameError: name 'doggypants' is not defined" in before
if ctlc:
do_ctlc(child)
child.sendline('c')
time.sleep(0.1)
for i in range(3):
try:
child.expect(pexpect.EOF, timeout=0.5)
break
except TIMEOUT:
child.sendline('c')
time.sleep(0.1)
print('child was able to grab tty lock again?')
else:
print('giving up on child releasing, sending `quit` cmd')
child.sendline('q')
expect(child, EOF)
if not timed_out_early:
before = str(child.before.decode())
assert_before(child, [
"tractor._exceptions.RemoteActorError: ('spawner0'",
"tractor._exceptions.RemoteActorError: ('name_error'",
"NameError: name 'doggypants' is not defined",
])
def test_root_cancels_child_context_during_startup(
spawn,
ctlc: bool,
):
    '''Verify a fast fail in the root doesn't lock up child reaping,
    all while using the new context api.
'''
child = spawn('fast_error_in_root_after_spawn')
child.expect(PROMPT)
before = str(child.before.decode())
assert "AssertionError" in before
if ctlc:
do_ctlc(child)
child.sendline('c')
child.expect(pexpect.EOF)
def test_different_debug_mode_per_actor(
spawn,
ctlc: bool,
):
child = spawn('per_actor_debug')
child.expect(PROMPT)
# only one actor should enter the debugger
before = str(child.before.decode())
assert "Attaching to pdb in crashed actor: ('debugged_boi'" in before
assert "RuntimeError" in before
if ctlc:
do_ctlc(child)
child.sendline('c')
child.expect(pexpect.EOF)
before = str(child.before.decode())
# NOTE: this debugged actor error currently WON'T show up since the
# root will actually cancel and terminate the nursery before the error
# msg reported back from the debug mode actor is processed.
# assert "tractor._exceptions.RemoteActorError: ('debugged_boi'" in before
assert "tractor._exceptions.RemoteActorError: ('crash_boi'" in before
# the crash boi should not have made a debugger request but
# instead crashed completely
assert "tractor._exceptions.RemoteActorError: ('crash_boi'" in before
assert "RuntimeError" in before

View File

@ -1,6 +1,12 @@
""" """
Actor "discovery" testing Actor "discovery" testing
""" """
import os
import signal
import platform
from functools import partial
import itertools
import pytest import pytest
import tractor import tractor
import trio import trio
@ -14,26 +20,29 @@ async def test_reg_then_unreg(arb_addr):
assert actor.is_arbiter assert actor.is_arbiter
assert len(actor._registry) == 1 # only self is registered assert len(actor._registry) == 1 # only self is registered
async with tractor.open_nursery() as n: async with tractor.open_nursery(
portal = await n.start_actor('actor', rpc_module_paths=[__name__]) arbiter_addr=arb_addr,
) as n:
portal = await n.start_actor('actor', enable_modules=[__name__])
uid = portal.channel.uid uid = portal.channel.uid
async with tractor.get_arbiter(*arb_addr) as aportal: async with tractor.get_arbiter(*arb_addr) as aportal:
# local actor should be the arbiter # this local actor should be the arbiter
assert actor is aportal.actor assert actor is aportal.actor
# sub-actor uid should be in the registry async with tractor.wait_for_actor('actor'):
await trio.sleep(0.1) # registering is async, so.. # sub-actor uid should be in the registry
assert uid in aportal.actor._registry assert uid in aportal.actor._registry
sockaddrs = actor._registry[uid] sockaddrs = actor._registry[uid]
# XXX: can we figure out what the listen addr will be? # XXX: can we figure out what the listen addr will be?
assert sockaddrs assert sockaddrs
await n.cancel() # tear down nursery await n.cancel() # tear down nursery
await trio.sleep(0.1) await trio.sleep(0.1)
assert uid not in aportal.actor._registry assert uid not in aportal.actor._registry
sockaddrs = actor._registry[uid] sockaddrs = actor._registry.get(uid)
assert not sockaddrs assert not sockaddrs
@ -45,20 +54,22 @@ async def hi():
async def say_hello(other_actor): async def say_hello(other_actor):
await trio.sleep(0.4) # wait for other actor to spawn await trio.sleep(1) # wait for other actor to spawn
async with tractor.find_actor(other_actor) as portal: async with tractor.find_actor(other_actor) as portal:
assert portal is not None
return await portal.run(__name__, 'hi') return await portal.run(__name__, 'hi')
async def say_hello_use_wait(other_actor): async def say_hello_use_wait(other_actor):
async with tractor.wait_for_actor(other_actor) as portal: async with tractor.wait_for_actor(other_actor) as portal:
assert portal is not None
result = await portal.run(__name__, 'hi') result = await portal.run(__name__, 'hi')
return result return result
@tractor_test @tractor_test
@pytest.mark.parametrize('func', [say_hello, say_hello_use_wait]) @pytest.mark.parametrize('func', [say_hello, say_hello_use_wait])
async def test_trynamic_trio(func): async def test_trynamic_trio(func, start_method, arb_addr):
"""Main tractor entry point, the "master" process (for now """Main tractor entry point, the "master" process (for now
acts as the "director"). acts as the "director").
""" """
@ -66,15 +77,292 @@ async def test_trynamic_trio(func):
print("Alright... Action!") print("Alright... Action!")
donny = await n.run_in_actor( donny = await n.run_in_actor(
'donny',
func, func,
other_actor='gretchen', other_actor='gretchen',
name='donny',
) )
gretchen = await n.run_in_actor( gretchen = await n.run_in_actor(
'gretchen',
func, func,
other_actor='donny', other_actor='donny',
name='gretchen',
) )
print(await gretchen.result()) print(await gretchen.result())
print(await donny.result()) print(await donny.result())
print("CUTTTT CUUTT CUT!!?! Donny!! You're supposed to say...") print("CUTTTT CUUTT CUT!!?! Donny!! You're supposed to say...")
async def stream_forever():
for i in itertools.count():
yield i
await trio.sleep(0.01)
async def cancel(use_signal, delay=0):
# hold on there sally
await trio.sleep(delay)
# trigger cancel
if use_signal:
if platform.system() == 'Windows':
pytest.skip("SIGINT not supported on windows")
os.kill(os.getpid(), signal.SIGINT)
else:
raise KeyboardInterrupt
async def stream_from(portal):
async with portal.open_stream_from(stream_forever) as stream:
async for value in stream:
print(value)
async def unpack_reg(actor_or_portal):
'''
Get and unpack a "registry" RPC request from the "arbiter" registry
system.
'''
if getattr(actor_or_portal, 'get_registry', None):
msg = await actor_or_portal.get_registry()
else:
msg = await actor_or_portal.run_from_ns('self', 'get_registry')
return {tuple(key.split('.')): val for key, val in msg.items()}
async def spawn_and_check_registry(
arb_addr: tuple,
use_signal: bool,
remote_arbiter: bool = False,
with_streaming: bool = False,
) -> None:
async with tractor.open_root_actor(
arbiter_addr=arb_addr,
):
async with tractor.get_arbiter(*arb_addr) as portal:
# runtime needs to be up to call this
actor = tractor.current_actor()
if remote_arbiter:
assert not actor.is_arbiter
if actor.is_arbiter:
extra = 1 # arbiter is local root actor
get_reg = partial(unpack_reg, actor)
else:
get_reg = partial(unpack_reg, portal)
extra = 2 # local root actor + remote arbiter
# ensure current actor is registered
registry = await get_reg()
assert actor.uid in registry
try:
async with tractor.open_nursery() as n:
async with trio.open_nursery() as trion:
portals = {}
for i in range(3):
name = f'a{i}'
if with_streaming:
portals[name] = await n.start_actor(
name=name, enable_modules=[__name__])
else: # no streaming
portals[name] = await n.run_in_actor(
trio.sleep_forever, name=name)
# wait on last actor to come up
async with tractor.wait_for_actor(name):
registry = await get_reg()
for uid in n._children:
assert uid in registry
assert len(portals) + extra == len(registry)
if with_streaming:
await trio.sleep(0.1)
pts = list(portals.values())
for p in pts[:-1]:
trion.start_soon(stream_from, p)
# stream for 1 sec
trion.start_soon(cancel, use_signal, 1)
last_p = pts[-1]
await stream_from(last_p)
else:
await cancel(use_signal)
finally:
await trio.sleep(0.5)
# all subactors should have de-registered
registry = await get_reg()
assert len(registry) == extra
assert actor.uid in registry
@pytest.mark.parametrize('use_signal', [False, True])
@pytest.mark.parametrize('with_streaming', [False, True])
def test_subactors_unregister_on_cancel(
start_method,
use_signal,
arb_addr,
with_streaming,
):
"""Verify that cancelling a nursery results in all subactors
deregistering themselves with the arbiter.
"""
with pytest.raises(KeyboardInterrupt):
trio.run(
partial(
spawn_and_check_registry,
arb_addr,
use_signal,
remote_arbiter=False,
with_streaming=with_streaming,
),
)
@pytest.mark.parametrize('use_signal', [False, True])
@pytest.mark.parametrize('with_streaming', [False, True])
def test_subactors_unregister_on_cancel_remote_daemon(
daemon,
start_method,
use_signal,
arb_addr,
with_streaming,
):
"""Verify that cancelling a nursery results in all subactors
deregistering themselves with a **remote** (not in the local process
tree) arbiter.
"""
with pytest.raises(KeyboardInterrupt):
trio.run(
partial(
spawn_and_check_registry,
arb_addr,
use_signal,
remote_arbiter=True,
with_streaming=with_streaming,
),
)
async def streamer(agen):
async for item in agen:
print(item)
async def close_chans_before_nursery(
arb_addr: tuple,
use_signal: bool,
remote_arbiter: bool = False,
) -> None:
# logic for how many actors should still be
# in the registry at teardown.
if remote_arbiter:
entries_at_end = 2
else:
entries_at_end = 1
async with tractor.open_root_actor(
arbiter_addr=arb_addr,
):
async with tractor.get_arbiter(*arb_addr) as aportal:
try:
get_reg = partial(unpack_reg, aportal)
async with tractor.open_nursery() as tn:
portal1 = await tn.start_actor(
name='consumer1', enable_modules=[__name__])
portal2 = await tn.start_actor(
'consumer2', enable_modules=[__name__])
# TODO: compact this back as was in last commit once
# 3.9+, see https://github.com/goodboy/tractor/issues/207
async with portal1.open_stream_from(
stream_forever
) as agen1:
async with portal2.open_stream_from(
stream_forever
) as agen2:
async with trio.open_nursery() as n:
n.start_soon(streamer, agen1)
n.start_soon(cancel, use_signal, .5)
try:
await streamer(agen2)
finally:
# Kill the root nursery thus resulting in
# normal arbiter channel ops to fail during
# teardown. It doesn't seem like this is
# reliably triggered by an external SIGINT.
# tractor.current_actor()._root_nursery.cancel_scope.cancel()
# XXX: THIS IS THE KEY THING that
# happens **before** exiting the
# actor nursery block
# also kill off channels cuz why not
await agen1.aclose()
await agen2.aclose()
finally:
with trio.CancelScope(shield=True):
await trio.sleep(1)
# all subactors should have de-registered
registry = await get_reg()
assert portal1.channel.uid not in registry
assert portal2.channel.uid not in registry
assert len(registry) == entries_at_end
@pytest.mark.parametrize('use_signal', [False, True])
def test_close_channel_explicit(
start_method,
use_signal,
arb_addr,
):
"""Verify that closing a stream explicitly and killing the actor's
"root nursery" **before** the containing nursery tears down also
results in subactor(s) deregistering from the arbiter.
"""
with pytest.raises(KeyboardInterrupt):
trio.run(
partial(
close_chans_before_nursery,
arb_addr,
use_signal,
remote_arbiter=False,
),
)
@pytest.mark.parametrize('use_signal', [False, True])
def test_close_channel_explicit_remote_arbiter(
daemon,
start_method,
use_signal,
arb_addr,
):
"""Verify that closing a stream explicitly and killing the actor's
"root nursery" **before** the containing nursery tears down also
results in subactor(s) deregistering from the arbiter.
"""
with pytest.raises(KeyboardInterrupt):
trio.run(
partial(
close_chans_before_nursery,
arb_addr,
use_signal,
remote_arbiter=True,
),
)

View File

@ -0,0 +1,135 @@
'''
Let's make sure them docs work yah?
'''
from contextlib import contextmanager
import itertools
import os
import sys
import subprocess
import platform
import shutil
import pytest
from conftest import (
examples_dir,
)
@pytest.fixture
def run_example_in_subproc(
loglevel: str,
testdir,
arb_addr: tuple[str, int],
):
@contextmanager
def run(script_code):
kwargs = dict()
if platform.system() == 'Windows':
# on windows we need to create a special __main__.py which will
# be executed with ``python -m <modulename>`` on windows..
shutil.copyfile(
examples_dir() / '__main__.py',
str(testdir / '__main__.py'),
)
# drop the ``if __name__ == '__main__'`` guard onwards from
# the *NIX version of each script
windows_script_lines = itertools.takewhile(
lambda line: "if __name__ ==" not in line,
script_code.splitlines()
)
script_code = '\n'.join(windows_script_lines)
script_file = testdir.makefile('.py', script_code)
# without this, tests hang on windows forever
kwargs['creationflags'] = subprocess.CREATE_NEW_PROCESS_GROUP
            # run the testdir "library module" as a script
cmdargs = [
sys.executable,
'-m',
# use the "module name" of this "package"
'test_example'
]
else:
script_file = testdir.makefile('.py', script_code)
cmdargs = [
sys.executable,
str(script_file),
]
# XXX: BE FOREVER WARNED: if you enable lots of tractor logging
# in the subprocess it may cause infinite blocking on the pipes
# due to backpressure!!!
proc = testdir.popen(
cmdargs,
**kwargs,
)
assert not proc.returncode
yield proc
proc.wait()
assert proc.returncode == 0
yield run
@pytest.mark.parametrize(
'example_script',
# walk yields: (dirpath, dirnames, filenames)
[
(p[0], f) for p in os.walk(examples_dir()) for f in p[2]
if '__' not in f
and f[0] != '_'
and 'debugging' not in p[0]
and 'integration' not in p[0]
and 'advanced_faults' not in p[0]
],
ids=lambda t: t[1],
)
def test_example(run_example_in_subproc, example_script):
"""Load and run scripts from this repo's ``examples/`` dir as a user
would copy and pasing them into their editor.
On windows a little more "finessing" is done to make
``multiprocessing`` play nice: we copy the ``__main__.py`` into the
test directory and invoke the script as a module with ``python -m
test_example``.
"""
ex_file = os.path.join(*example_script)
if 'rpc_bidir_streaming' in ex_file and sys.version_info < (3, 9):
pytest.skip("2-way streaming example requires py3.9 async with syntax")
with open(ex_file, 'r') as ex:
code = ex.read()
with run_example_in_subproc(code) as proc:
proc.wait()
err, _ = proc.stderr.read(), proc.stdout.read()
# print(f'STDERR: {err}')
# print(f'STDOUT: {out}')
# if we get some gnarly output let's aggregate and raise
if err:
errmsg = err.decode()
errlines = errmsg.splitlines()
last_error = errlines[-1]
if (
'Error' in last_error
# XXX: currently we print this to console, but maybe
# shouldn't eventually once we figure out what's
# a better way to be explicit about aio side
# cancels?
and 'asyncio.exceptions.CancelledError' not in last_error
):
raise Exception(errmsg)
assert proc.returncode == 0

View File

@ -0,0 +1,564 @@
'''
The hipster way to force SC onto the stdlib's "async": 'infection mode'.
'''
from typing import Optional, Iterable, Union
import asyncio
import builtins
import itertools
import importlib
from exceptiongroup import BaseExceptionGroup
import pytest
import trio
import tractor
from tractor import (
to_asyncio,
RemoteActorError,
)
from tractor.trionics import BroadcastReceiver
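# A minimal sketch of the pattern exercised below (illustrative only;
# the `sketch_*` names are hypothetical): an actor spawned with
# `infect_asyncio=True` runs `trio` as a guest on the `asyncio` loop
# and can schedule plain `asyncio` coroutines from its trio tasks.
async def sketch_aio_sleeper() -> None:
    await asyncio.sleep(0.1)  # plain asyncio code


async def sketch_infected_trio_task() -> None:
    # bridge from the trio side into the embedded asyncio loop
    await to_asyncio.run_task(sketch_aio_sleeper)
# spawn via: `await n.run_in_actor(sketch_infected_trio_task, infect_asyncio=True)`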
async def sleep_and_err(
sleep_for: float = 0.1,
# just signature placeholders for compat with
# ``to_asyncio.open_channel_from()``
to_trio: Optional[trio.MemorySendChannel] = None,
from_trio: Optional[asyncio.Queue] = None,
):
if to_trio:
to_trio.send_nowait('start')
await asyncio.sleep(sleep_for)
assert 0
async def sleep_forever():
await asyncio.sleep(float('inf'))
async def trio_cancels_single_aio_task():
# spawn an ``asyncio`` task to run a func and return result
with trio.move_on_after(.2):
await tractor.to_asyncio.run_task(sleep_forever)
def test_trio_cancels_aio_on_actor_side(arb_addr):
'''
Spawn an infected actor that is cancelled by the ``trio`` side
task using std cancel scope apis.
'''
async def main():
async with tractor.open_nursery(
arbiter_addr=arb_addr
) as n:
await n.run_in_actor(
trio_cancels_single_aio_task,
infect_asyncio=True,
)
trio.run(main)
async def asyncio_actor(
target: str,
    expect_err: Optional[str] = None
) -> None:
assert tractor.current_actor().is_infected_aio()
target = globals()[target]
    if expect_err and '.' in expect_err:
modpath, _, name = expect_err.rpartition('.')
mod = importlib.import_module(modpath)
error_type = getattr(mod, name)
else: # toplevel builtin error type
error_type = builtins.__dict__.get(expect_err)
try:
# spawn an ``asyncio`` task to run a func and return result
await tractor.to_asyncio.run_task(target)
except BaseException as err:
if expect_err:
assert isinstance(err, error_type)
raise
def test_aio_simple_error(arb_addr):
'''
Verify a simple remote asyncio error propagates back through trio
to the parent actor.
'''
async def main():
async with tractor.open_nursery(
arbiter_addr=arb_addr
) as n:
await n.run_in_actor(
asyncio_actor,
target='sleep_and_err',
expect_err='AssertionError',
infect_asyncio=True,
)
with pytest.raises(RemoteActorError) as excinfo:
trio.run(main)
err = excinfo.value
assert isinstance(err, RemoteActorError)
assert err.type == AssertionError
def test_tractor_cancels_aio(arb_addr):
'''
Verify we can cancel a spawned asyncio task gracefully.
'''
async def main():
async with tractor.open_nursery() as n:
portal = await n.run_in_actor(
asyncio_actor,
target='sleep_forever',
expect_err='trio.Cancelled',
infect_asyncio=True,
)
# cancel the entire remote runtime
await portal.cancel_actor()
trio.run(main)
def test_trio_cancels_aio(arb_addr):
'''
Much like the above test with ``tractor.Portal.cancel_actor()``
except we just use a standard ``trio`` cancellation api.
'''
async def main():
with trio.move_on_after(1):
# cancel the nursery shortly after boot
async with tractor.open_nursery() as n:
await n.run_in_actor(
asyncio_actor,
target='sleep_forever',
expect_err='trio.Cancelled',
infect_asyncio=True,
)
trio.run(main)
@tractor.context
async def trio_ctx(
ctx: tractor.Context,
):
await ctx.started('start')
# this will block until the ``asyncio`` task sends a "first"
# message.
with trio.fail_after(2):
async with (
trio.open_nursery() as n,
tractor.to_asyncio.open_channel_from(
sleep_and_err,
) as (first, chan),
):
assert first == 'start'
        # spawn another asyncio task for the heck of it.
n.start_soon(
tractor.to_asyncio.run_task,
sleep_forever,
)
await trio.sleep_forever()
@pytest.mark.parametrize(
'parent_cancels', [False, True],
ids='parent_actor_cancels_child={}'.format
)
def test_context_spawns_aio_task_that_errors(
arb_addr,
parent_cancels: bool,
):
'''
Verify that spawning a task via an intertask channel ctx mngr that
errors correctly propagates the error back from the `asyncio`-side
task.
'''
async def main():
with trio.fail_after(2):
async with tractor.open_nursery() as n:
p = await n.start_actor(
'aio_daemon',
enable_modules=[__name__],
infect_asyncio=True,
# debug_mode=True,
loglevel='cancel',
)
async with p.open_context(
trio_ctx,
) as (ctx, first):
assert first == 'start'
if parent_cancels:
await p.cancel_actor()
await trio.sleep_forever()
with pytest.raises(RemoteActorError) as excinfo:
trio.run(main)
err = excinfo.value
assert isinstance(err, RemoteActorError)
if parent_cancels:
assert err.type == trio.Cancelled
else:
assert err.type == AssertionError
async def aio_cancel():
    '''
Cancel urself boi.
'''
await asyncio.sleep(0.5)
task = asyncio.current_task()
# cancel and enter sleep
task.cancel()
await sleep_forever()
def test_aio_cancelled_from_aio_causes_trio_cancelled(arb_addr):
async def main():
async with tractor.open_nursery() as n:
await n.run_in_actor(
asyncio_actor,
target='aio_cancel',
expect_err='tractor.to_asyncio.AsyncioCancelled',
infect_asyncio=True,
)
with pytest.raises(RemoteActorError) as excinfo:
trio.run(main)
# ensure boxed error is correct
assert excinfo.value.type == to_asyncio.AsyncioCancelled
# TODO: verify open_channel_from will fail on this..
async def no_to_trio_in_args():
pass
async def push_from_aio_task(
sequence: Iterable,
to_trio: trio.abc.SendChannel,
    expect_cancel: bool,
fail_early: bool,
) -> None:
try:
# sync caller ctx manager
to_trio.send_nowait(True)
for i in sequence:
print(f'asyncio sending {i}')
to_trio.send_nowait(i)
await asyncio.sleep(0.001)
if i == 50 and fail_early:
raise Exception
print('asyncio streamer complete!')
except asyncio.CancelledError:
if not expect_cancel:
pytest.fail("aio task was cancelled unexpectedly")
raise
else:
if expect_cancel:
pytest.fail("aio task wasn't cancelled as expected!?")
async def stream_from_aio(
exit_early: bool = False,
raise_err: bool = False,
aio_raise_err: bool = False,
fan_out: bool = False,
) -> None:
seq = range(100)
expect = list(seq)
try:
pulled = []
async with to_asyncio.open_channel_from(
push_from_aio_task,
sequence=seq,
expect_cancel=raise_err or exit_early,
fail_early=aio_raise_err,
) as (first, chan):
assert first is True
async def consume(
chan: Union[
to_asyncio.LinkedTaskChannel,
BroadcastReceiver,
],
):
async for value in chan:
print(f'trio received {value}')
pulled.append(value)
if value == 50:
if raise_err:
raise Exception
elif exit_early:
break
if fan_out:
        # start a second task that gets the same stream value set.
async with (
# NOTE: this has to come first to avoid
# the channel being closed before the nursery
# tasks are joined..
chan.subscribe() as br,
trio.open_nursery() as n,
):
n.start_soon(consume, br)
await consume(chan)
else:
await consume(chan)
finally:
if (
not raise_err and
not exit_early and
not aio_raise_err
):
if fan_out:
# we get double the pulled values in the
# ``.subscribe()`` fan out case.
doubled = list(itertools.chain(*zip(expect, expect)))
expect = doubled[:len(pulled)]
assert list(sorted(pulled)) == expect
else:
assert pulled == expect
else:
assert not fan_out
assert pulled == expect[:51]
print('trio guest mode task completed!')
@pytest.mark.parametrize(
'fan_out', [False, True],
ids='fan_out_w_chan_subscribe={}'.format
)
def test_basic_interloop_channel_stream(arb_addr, fan_out):
async def main():
async with tractor.open_nursery() as n:
portal = await n.run_in_actor(
stream_from_aio,
infect_asyncio=True,
fan_out=fan_out,
)
await portal.result()
trio.run(main)
# TODO: parametrize the above test and avoid the duplication here?
def test_trio_error_cancels_intertask_chan(arb_addr):
async def main():
async with tractor.open_nursery() as n:
portal = await n.run_in_actor(
stream_from_aio,
raise_err=True,
infect_asyncio=True,
)
# should trigger remote actor error
await portal.result()
with pytest.raises(BaseExceptionGroup) as excinfo:
trio.run(main)
# ensure boxed errors
for exc in excinfo.value.exceptions:
assert exc.type == Exception
def test_trio_closes_early_and_channel_exits(arb_addr):
async def main():
async with tractor.open_nursery() as n:
portal = await n.run_in_actor(
stream_from_aio,
exit_early=True,
infect_asyncio=True,
)
# should trigger remote actor error
await portal.result()
# should be a quiet exit on a simple channel exit
trio.run(main)
def test_aio_errors_and_channel_propagates_and_closes(arb_addr):
async def main():
async with tractor.open_nursery() as n:
portal = await n.run_in_actor(
stream_from_aio,
aio_raise_err=True,
infect_asyncio=True,
)
# should trigger remote actor error
await portal.result()
with pytest.raises(BaseExceptionGroup) as excinfo:
trio.run(main)
# ensure boxed errors
for exc in excinfo.value.exceptions:
assert exc.type == Exception
@tractor.context
async def trio_to_aio_echo_server(
ctx: tractor.Context,
):
async def aio_echo_server(
to_trio: trio.MemorySendChannel,
from_trio: asyncio.Queue,
) -> None:
to_trio.send_nowait('start')
while True:
msg = await from_trio.get()
# echo the msg back
to_trio.send_nowait(msg)
# if we get the terminate sentinel
# break the echo loop
if msg is None:
print('breaking aio echo loop')
break
print('exiting asyncio task')
async with to_asyncio.open_channel_from(
aio_echo_server,
) as (first, chan):
assert first == 'start'
await ctx.started(first)
async with ctx.open_stream() as stream:
async for msg in stream:
print(f'asyncio echoing {msg}')
await chan.send(msg)
out = await chan.receive()
# echo back to parent actor-task
await stream.send(out)
if out is None:
try:
out = await chan.receive()
except trio.EndOfChannel:
break
else:
raise RuntimeError('aio channel never stopped?')
@pytest.mark.parametrize(
'raise_error_mid_stream',
[False, Exception, KeyboardInterrupt],
ids='raise_error={}'.format,
)
def test_echoserver_detailed_mechanics(
arb_addr,
raise_error_mid_stream,
):
async def main():
async with tractor.open_nursery() as n:
p = await n.start_actor(
'aio_server',
enable_modules=[__name__],
infect_asyncio=True,
)
async with p.open_context(
trio_to_aio_echo_server,
) as (ctx, first):
assert first == 'start'
async with ctx.open_stream() as stream:
for i in range(100):
await stream.send(i)
out = await stream.receive()
assert i == out
if raise_error_mid_stream and i == 50:
raise raise_error_mid_stream
# send terminate msg
await stream.send(None)
out = await stream.receive()
assert out is None
if out is None:
# ensure the stream is stopped
# with trio.fail_after(0.1):
try:
await stream.receive()
except trio.EndOfChannel:
pass
else:
pytest.fail(
"stream wasn't stopped after sentinel?!")
# TODO: the case where this blocks and
# is cancelled by kbi or out of task cancellation
await p.cancel_actor()
if raise_error_mid_stream:
with pytest.raises(raise_error_mid_stream):
trio.run(main)
else:
trio.run(main)

View File

@ -0,0 +1,352 @@
"""
Streaming via async gen api
"""
import time
from functools import partial
import platform
import trio
import tractor
import pytest
from conftest import tractor_test
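# A quick sketch of the one-way streaming api this module exercises
# (illustrative only; the `sketch_*` names are hypothetical): a plain
# async generator run in a subactor, consumed via
# `Portal.open_stream_from()`.
async def sketch_counter(to: int = 3):
    for i in range(to):
        yield i


async def sketch_consume(portal) -> None:
    async with portal.open_stream_from(sketch_counter, to=3) as stream:
        async for val in stream:
            print(val)  # 0, 1, 2 then the far-end gen completes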
def test_must_define_ctx():
with pytest.raises(TypeError) as err:
@tractor.stream
async def no_ctx():
pass
assert "no_ctx must be `ctx: tractor.Context" in str(err.value)
@tractor.stream
async def has_ctx(ctx):
pass
async def async_gen_stream(sequence):
for i in sequence:
yield i
await trio.sleep(0.1)
# block indefinitely waiting to be cancelled by ``aclose()`` call
with trio.CancelScope() as cs:
await trio.sleep_forever()
assert 0
assert cs.cancelled_caught
@tractor.stream
async def context_stream(
ctx: tractor.Context,
sequence
):
for i in sequence:
await ctx.send_yield(i)
await trio.sleep(0.1)
# block indefinitely waiting to be cancelled by ``aclose()`` call
with trio.CancelScope() as cs:
await trio.sleep(float('inf'))
assert 0
assert cs.cancelled_caught
async def stream_from_single_subactor(
arb_addr,
start_method,
stream_func,
):
"""Verify we can spawn a daemon actor and retrieve streamed data.
"""
# only one per host address, spawns an actor if None
async with tractor.open_nursery(
arbiter_addr=arb_addr,
start_method=start_method,
) as nursery:
async with tractor.find_actor('streamerd') as portals:
if not portals:
# no brokerd actor found
portal = await nursery.start_actor(
'streamerd',
enable_modules=[__name__],
)
seq = range(10)
with trio.fail_after(5):
async with portal.open_stream_from(
stream_func,
sequence=list(seq), # has to be msgpack serializable
) as stream:
# it'd sure be nice to have an asyncitertools here...
iseq = iter(seq)
ival = next(iseq)
async for val in stream:
assert val == ival
try:
ival = next(iseq)
except StopIteration:
# should cancel far end task which will be
# caught and no error is raised
await stream.aclose()
await trio.sleep(0.3)
# ensure EOC signalled-state translates
# XXX: not really sure this is correct,
# shouldn't it be a `ClosedResourceError`?
try:
await stream.__anext__()
except StopAsyncIteration:
# stop all spawned subactors
await portal.cancel_actor()
@pytest.mark.parametrize(
'stream_func', [async_gen_stream, context_stream]
)
def test_stream_from_single_subactor(arb_addr, start_method, stream_func):
"""Verify streaming from a spawned async generator.
"""
trio.run(
partial(
stream_from_single_subactor,
arb_addr,
start_method,
stream_func=stream_func,
),
)
# this is the first 2 actors, streamer_1 and streamer_2
async def stream_data(seed):
for i in range(seed):
yield i
# trigger scheduler to simulate practical usage
await trio.sleep(0.0001)
# this is the third actor; the aggregator
async def aggregate(seed):
"""Ensure that the two streams we receive match but only stream
a single set of values to the parent.
"""
async with tractor.open_nursery() as nursery:
portals = []
for i in range(1, 3):
# fork point
portal = await nursery.start_actor(
name=f'streamer_{i}',
enable_modules=[__name__],
)
portals.append(portal)
send_chan, recv_chan = trio.open_memory_channel(500)
async def push_to_chan(portal, send_chan):
async with send_chan:
async with portal.open_stream_from(
stream_data, seed=seed,
) as stream:
async for value in stream:
# leverage trio's built-in backpressure
await send_chan.send(value)
print(f"FINISHED ITERATING {portal.channel.uid}")
# spawn 2 trio tasks to collect streams and push to a local queue
async with trio.open_nursery() as n:
for portal in portals:
n.start_soon(push_to_chan, portal, send_chan.clone())
# close this local task's reference to send side
await send_chan.aclose()
unique_vals = set()
async with recv_chan:
async for value in recv_chan:
if value not in unique_vals:
unique_vals.add(value)
# yield upwards to the spawning parent actor
yield value
assert value in unique_vals
print("FINISHED ITERATING in aggregator")
await nursery.cancel()
print("WAITING on `ActorNursery` to finish")
print("AGGREGATOR COMPLETE!")
# this is the main actor and *arbiter*
async def a_quadruple_example():
# a nursery which spawns "actors"
async with tractor.open_nursery() as nursery:
seed = int(1e3)
pre_start = time.time()
portal = await nursery.start_actor(
name='aggregator',
enable_modules=[__name__],
)
start = time.time()
# the portal call returns exactly what you'd expect
# as if the remote "aggregate" function was called locally
result_stream = []
async with portal.open_stream_from(aggregate, seed=seed) as stream:
async for value in stream:
result_stream.append(value)
print(f"STREAM TIME = {time.time() - start}")
print(f"STREAM + SPAWN TIME = {time.time() - pre_start}")
assert result_stream == list(range(seed))
await portal.cancel_actor()
return result_stream
async def cancel_after(wait, arb_addr):
async with tractor.open_root_actor(arbiter_addr=arb_addr):
with trio.move_on_after(wait):
return await a_quadruple_example()
@pytest.fixture(scope='module')
def time_quad_ex(arb_addr, ci_env, spawn_backend):
if spawn_backend == 'mp':
"""no idea but the mp *nix runs are flaking out here often...
"""
pytest.skip("Test is too flaky on mp in CI")
timeout = 7 if platform.system() in ('Windows', 'Darwin') else 4
start = time.time()
results = trio.run(cancel_after, timeout, arb_addr)
diff = time.time() - start
assert results
return results, diff
def test_a_quadruple_example(time_quad_ex, ci_env, spawn_backend):
"""This also serves as a kind of "we'd like to be this fast test"."""
results, diff = time_quad_ex
assert results
this_fast = 6 if platform.system() in ('Windows', 'Darwin') else 3
assert diff < this_fast
@pytest.mark.parametrize(
'cancel_delay',
list(map(lambda i: i/10, range(3, 9)))
)
def test_not_fast_enough_quad(
arb_addr, time_quad_ex, cancel_delay, ci_env, spawn_backend
):
"""Verify we can cancel midway through the quad example and all actors
cancel gracefully.
"""
results, diff = time_quad_ex
delay = max(diff - cancel_delay, 0)
results = trio.run(cancel_after, delay, arb_addr)
system = platform.system()
if system in ('Windows', 'Darwin') and results is not None:
# In CI environments it seems later runs are quicker than the first
# so just ignore these
print(f"Woa there {system} caught your breath eh?")
else:
# should be cancelled mid-streaming
assert results is None
@tractor_test
async def test_respawn_consumer_task(
arb_addr,
spawn_backend,
loglevel,
):
"""Verify that ``._portal.ReceiveStream.shield()``
successfully protects the underlying IPC channel from being closed
when cancelling and respawning a consumer task.
This also serves to verify that all values from the stream can be
received despite the respawns.
"""
stream = None
async with tractor.open_nursery() as n:
portal = await n.start_actor(
name='streamer',
enable_modules=[__name__]
)
async with portal.open_stream_from(
stream_data,
seed=11,
) as stream:
expect = set(range(11))
received = []
# this is the re-spawn task routine
async def consume(task_status=trio.TASK_STATUS_IGNORED):
print('starting consume task..')
nonlocal stream
with trio.CancelScope() as cs:
task_status.started(cs)
# shield stream's underlying channel from cancellation
# with stream.shield():
async for v in stream:
print(f'from stream: {v}')
expect.remove(v)
received.append(v)
print('exited consume')
async with trio.open_nursery() as ln:
cs = await ln.start(consume)
while True:
await trio.sleep(0.1)
if received[-1] % 2 == 0:
print('cancelling consume task..')
cs.cancel()
# respawn
cs = await ln.start(consume)
if not expect:
print("all values streamed, BREAKING")
break
cs.cancel()
# TODO: this is justification for a
# ``ActorNursery.stream_from_actor()`` helper?
await portal.cancel_actor()
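
As a quick reference, the one-to-one streaming pattern driven throughout this file, reduced to a sketch; only the `open_stream_from()` API shown above is assumed (the `counter` function and actor name are illustrative):

import trio
import tractor


async def counter(limit: int):
    # a plain async gen served from a subactor
    for i in range(limit):
        yield i


async def main():
    async with tractor.open_nursery() as n:
        portal = await n.start_actor(
            'counter_actor',
            enable_modules=[__name__],
        )
        async with portal.open_stream_from(counter, limit=5) as stream:
            async for value in stream:
                print(value)

        await portal.cancel_actor()


trio.run(main)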


@ -11,32 +11,26 @@ from conftest import tractor_test
 @pytest.mark.trio
-async def test_no_arbitter():
+async def test_no_runtime():
     """An arbitter must be established before any nurseries
     can be created.

-    (In other words ``tractor.run`` must be used instead of ``trio.run`` as is
-    done by the ``pytest-trio`` plugin.)
+    (In other words ``tractor.open_root_actor()`` must be engaged at
+    some point?)
     """
-    with pytest.raises(RuntimeError):
-        with tractor.open_nursery():
+    with pytest.raises(RuntimeError) :
+        async with tractor.find_actor('doggy'):
             pass


-def test_no_main():
-    """An async function **must** be passed to ``tractor.run()``.
-    """
-    with pytest.raises(TypeError):
-        tractor.run(None)
-
-
 @tractor_test
-async def test_self_is_registered():
+async def test_self_is_registered(arb_addr):
     "Verify waiting on the arbiter to register itself using the standard api."
     actor = tractor.current_actor()
     assert actor.is_arbiter
-    async with tractor.wait_for_actor('arbiter') as portal:
-        assert portal.channel.uid[0] == 'arbiter'
+    with trio.fail_after(0.2):
+        async with tractor.wait_for_actor('root') as portal:
+            assert portal.channel.uid[0] == 'root'


 @tractor_test
@ -46,8 +40,11 @@ async def test_self_is_registered_localportal(arb_addr):
     assert actor.is_arbiter
     async with tractor.get_arbiter(*arb_addr) as portal:
         assert isinstance(portal, tractor._portal.LocalPortal)
-        sockaddr = await portal.run('self', 'wait_for_actor', name='arbiter')
-        assert sockaddr[0] == arb_addr
+
+        with trio.fail_after(0.2):
+            sockaddr = await portal.run_from_ns(
+                'self', 'wait_for_actor', name='root')
+            assert sockaddr[0] == arb_addr


 def test_local_actor_async_func(arb_addr):
@ -56,15 +53,19 @@ def test_local_actor_async_func(arb_addr):
     nums = []

     async def print_loop():
-        # arbiter is started in-proc if dne
-        assert tractor.current_actor().is_arbiter

-        for i in range(10):
-            nums.append(i)
-            await trio.sleep(0.1)
+        async with tractor.open_root_actor(
+            arbiter_addr=arb_addr,
+        ):
+            # arbiter is started in-proc if dne
+            assert tractor.current_actor().is_arbiter
+
+            for i in range(10):
+                nums.append(i)
+                await trio.sleep(0.1)

     start = time.time()
-    tractor.run(print_loop, arbiter_addr=arb_addr)
+    trio.run(print_loop)

     # ensure the sleeps were actually awaited
     assert time.time() - start >= 1
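
A hedged sketch of the registry behavior this updated test expects: the in-proc root actor now registers itself under the name 'root' (the default arbiter address is assumed):

import trio
import tractor


async def main():
    async with tractor.open_root_actor():
        with trio.fail_after(0.5):
            # the in-proc root registers itself as 'root'
            async with tractor.wait_for_actor('root') as portal:
                print(portal.channel.uid)


trio.run(main)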


@ -1,57 +1,30 @@
""" """
Multiple python programs invoking ``tractor.run()`` Multiple python programs invoking the runtime.
""" """
import sys import platform
import time import time
import signal
import subprocess
import pytest import pytest
import trio
import tractor import tractor
from conftest import tractor_test from conftest import (
tractor_test,
sig_prog,
def sig_prog(proc, sig): _INT_SIGNAL,
"Kill the actor-process with ``sig``." _INT_RETURN_CODE,
proc.send_signal(sig) )
time.sleep(0.1)
if not proc.poll():
# TODO: why sometimes does SIGINT not work on teardown?
# seems to happen only when trace logging enabled?
proc.send_signal(signal.SIGKILL)
ret = proc.wait()
assert ret
@pytest.fixture
def daemon(loglevel, testdir, arb_addr):
cmdargs = [
sys.executable, '-c',
"import tractor; tractor.run_daemon((), arbiter_addr={}, loglevel={})"
.format(
arb_addr,
"'{}'".format(loglevel) if loglevel else None)
]
proc = testdir.popen(
cmdargs,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
assert not proc.returncode
wait = 0.6 if sys.version_info < (3, 7) else 0.4
time.sleep(wait)
yield proc
sig_prog(proc, signal.SIGINT)
def test_abort_on_sigint(daemon): def test_abort_on_sigint(daemon):
assert daemon.returncode is None assert daemon.returncode is None
time.sleep(0.1) time.sleep(0.1)
sig_prog(daemon, signal.SIGINT) sig_prog(daemon, _INT_SIGNAL)
assert daemon.returncode == 1 assert daemon.returncode == _INT_RETURN_CODE
# XXX: oddly, couldn't get capfd.readouterr() to work here? # XXX: oddly, couldn't get capfd.readouterr() to work here?
assert "KeyboardInterrupt" in str(daemon.stderr.read()) if platform.system() != 'Windows':
# don't check stderr on windows as its empty when sending CTRL_C_EVENT
assert "KeyboardInterrupt" in str(daemon.stderr.read())
@tractor_test @tractor_test
@ -73,8 +46,13 @@ async def test_cancel_remote_arbiter(daemon, arb_addr):
def test_register_duplicate_name(daemon, arb_addr): def test_register_duplicate_name(daemon, arb_addr):
async def main(): async def main():
assert not tractor.current_actor().is_arbiter
async with tractor.open_nursery() as n: async with tractor.open_nursery(
arbiter_addr=arb_addr,
) as n:
assert not tractor.current_actor().is_arbiter
p1 = await n.start_actor('doggy') p1 = await n.start_actor('doggy')
p2 = await n.start_actor('doggy') p2 = await n.start_actor('doggy')
@ -85,4 +63,4 @@ def test_register_duplicate_name(daemon, arb_addr):
# run it manually since we want to start **after** # run it manually since we want to start **after**
# the other "daemon" program # the other "daemon" program
tractor.run(main, arbiter_addr=arb_addr) trio.run(main)


@ -4,20 +4,22 @@ from itertools import cycle
 import pytest
 import trio
 import tractor
-from tractor.testing import tractor_test
+from tractor.experimental import msgpub
+
+from conftest import tractor_test


 def test_type_checks():
     with pytest.raises(TypeError) as err:
-        @tractor.msg.pub
+        @msgpub
         async def no_get_topics(yo):
             yield

     assert "must define a `get_topics`" in str(err.value)

     with pytest.raises(TypeError) as err:
-        @tractor.msg.pub
+        @msgpub
         def not_async_gen(yo):
             pass

@ -28,22 +30,27 @@ def is_even(i):
     return i % 2 == 0


-@tractor.msg.pub
+# placeholder for topics getter
+_get_topics = None
+
+
+@msgpub
 async def pubber(get_topics, seed=10):
-    ss = tractor.current_actor().statespace
+
+    # ensure topic subscriptions are as expected
+    global _get_topics
+    _get_topics = get_topics

     for i in cycle(range(seed)):
-        # ensure topic subscriptions are as expected
-        ss['get_topics'] = get_topics
         yield {'even' if is_even(i) else 'odd': i}
         await trio.sleep(0.1)


 async def subs(
-    which, pub_actor_name, seed=10,
-    portal=None,
+    which,
+    pub_actor_name,
+    seed=10,
     task_status=trio.TASK_STATUS_IGNORED,
 ):
     if len(which) == 1:
@ -56,47 +63,49 @@ async def subs(
         def pred(i):
             return isinstance(i, int)

-    async with tractor.find_actor(pub_actor_name) as portal:
-        stream = await portal.run(
-            __name__, 'pubber',
-            topics=which,
-            seed=seed,
-        )
-        task_status.started(stream)
-        times = 10
-        count = 0
-        await stream.__anext__()
-        async for pkt in stream:
-            for topic, value in pkt.items():
-                assert pred(value)
-            count += 1
-            if count >= times:
-                break
-
-        await stream.aclose()
-
-        stream = await portal.run(
-            __name__, 'pubber',
-            topics=['odd'],
-            seed=seed,
-        )
-        await stream.__anext__()
-        count = 0
-        # async with aclosing(stream) as stream:
-        try:
-            async for pkt in stream:
-                for topic, value in pkt.items():
-                    pass
-                    # assert pred(value)
-                count += 1
-                if count >= times:
-                    break
-        finally:
-            await stream.aclose()
+    # TODO: https://github.com/goodboy/tractor/issues/207
+    async with tractor.wait_for_actor(pub_actor_name) as portal:
+        assert portal
+
+        async with portal.open_stream_from(
+            pubber,
+            topics=which,
+            seed=seed,
+        ) as stream:
+            task_status.started(stream)
+            times = 10
+            count = 0
+            await stream.__anext__()
+            async for pkt in stream:
+                for topic, value in pkt.items():
+                    assert pred(value)
+                count += 1
+                if count >= times:
+                    break
+
+            await stream.aclose()
+
+        async with portal.open_stream_from(
+            pubber,
+            topics=['odd'],
+            seed=seed,
+        ) as stream:
+            await stream.__anext__()
+            count = 0
+            # async with aclosing(stream) as stream:
+            try:
+                async for pkt in stream:
+                    for topic, value in pkt.items():
+                        pass
+                        # assert pred(value)
+                    count += 1
+                    if count >= times:
+                        break
+            finally:
+                await stream.aclose()


-@tractor.msg.pub(tasks=['one', 'two'])
+@msgpub(tasks=['one', 'two'])
 async def multilock_pubber(get_topics):
     yield {'doggy': 10}

@ -124,17 +133,25 @@ async def test_required_args(callwith_expecterror):
         await func(**kwargs)
     else:
         async with tractor.open_nursery() as n:
-            # await func(**kwargs)
-            portal = await n.run_in_actor(
-                'pubber', multilock_pubber, **kwargs)
+            portal = await n.start_actor(
+                name='pubber',
+                enable_modules=[__name__],
+            )

             async with tractor.wait_for_actor('pubber'):
                 pass

             await trio.sleep(0.5)

-            async for val in await portal.result():
-                assert val == {'doggy': 10}
+            async with portal.open_stream_from(
+                multilock_pubber,
+                **kwargs
+            ) as stream:
+                async for val in stream:
+                    assert val == {'doggy': 10}
+
+            await portal.cancel_actor()


 @pytest.mark.parametrize(
@ -148,35 +165,49 @@ def test_multi_actor_subs_arbiter_pub(
 ):
     """Try out the neato @pub decorator system.
     """
+    global _get_topics
+
     async def main():
-        ss = tractor.current_actor().statespace
-
-        async with tractor.open_nursery() as n:
-            name = 'arbiter'
+        async with tractor.open_nursery(
+            arbiter_addr=arb_addr,
+            enable_modules=[__name__],
+        ) as n:
+            name = 'root'

             if pub_actor == 'streamer':
                 # start the publisher as a daemon
                 master_portal = await n.start_actor(
                     'streamer',
-                    rpc_module_paths=[__name__],
+                    enable_modules=[__name__],
                 )
+                name = 'streamer'

             even_portal = await n.run_in_actor(
-                'evens', subs, which=['even'], pub_actor_name=name)
+                subs,
+                which=['even'],
+                name='evens',
+                pub_actor_name=name
+            )
             odd_portal = await n.run_in_actor(
-                'odds', subs, which=['odd'], pub_actor_name=name)
+                subs,
+                which=['odd'],
+                name='odds',
+                pub_actor_name=name
+            )

             async with tractor.wait_for_actor('evens'):
                 # block until 2nd actor is initialized
                 pass

             if pub_actor == 'arbiter':
                 # wait for publisher task to be spawned in a local RPC task
-                while not ss.get('get_topics'):
+                while _get_topics is None:
                     await trio.sleep(0.1)

-                get_topics = ss.get('get_topics')
+                get_topics = _get_topics

                 assert 'even' in get_topics()

@ -204,27 +235,22 @@ def test_multi_actor_subs_arbiter_pub(
                 await trio.sleep(0.5)

                 await even_portal.cancel_actor()
-                await trio.sleep(0.5)
+                await trio.sleep(1)

                 if pub_actor == 'arbiter':
                     assert 'even' not in get_topics()

                 await odd_portal.cancel_actor()
+                await trio.sleep(1)

                 if pub_actor == 'arbiter':
                     while get_topics():
                         await trio.sleep(0.1)
-                        if time.time() - start > 1:
+                        if time.time() - start > 2:
                             pytest.fail("odds subscription never dropped?")
             else:
                 await master_portal.cancel_actor()

-    tractor.run(
-        main,
-        arbiter_addr=arb_addr,
-        rpc_module_paths=[__name__],
-    )
+    trio.run(main)


 def test_single_subactor_pub_multitask_subs(
@ -233,11 +259,14 @@ def test_single_subactor_pub_multitask_subs(
 ):
     async def main():
-        async with tractor.open_nursery() as n:
+        async with tractor.open_nursery(
+            arbiter_addr=arb_addr,
+            enable_modules=[__name__],
+        ) as n:

             portal = await n.start_actor(
                 'streamer',
-                rpc_module_paths=[__name__],
+                enable_modules=[__name__],
             )

             async with tractor.wait_for_actor('streamer'):
                 # block until 2nd actor is initialized
@ -261,8 +290,4 @@ def test_single_subactor_pub_multitask_subs(

         await portal.cancel_actor()

-    tractor.run(
-        main,
-        arbiter_addr=arb_addr,
-        rpc_module_paths=[__name__],
-    )
+    trio.run(main)
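
A heavily hedged sketch of the experimental publisher API as this updated test drives it: a `@msgpub`-decorated async gen yields topic-keyed dicts and a consumer pulls a filtered topic set over a portal stream (the `ticker` function and actor name are illustrative; semantics beyond what the test shows are assumed):

import trio
import tractor
from tractor.experimental import msgpub


@msgpub
async def ticker(get_topics, seed=10):
    # `get_topics()` reports the topics currently subscribed to
    for i in range(seed):
        yield {'even' if i % 2 == 0 else 'odd': i}
        await trio.sleep(0.1)


async def main():
    async with tractor.open_nursery() as n:
        portal = await n.start_actor('pub', enable_modules=[__name__])
        async with portal.open_stream_from(
            ticker,
            topics=['even'],  # only receive 'even' keyed packets
        ) as stream:
            async for pkt in stream:
                assert 'even' in pkt
                break

        await portal.cancel_actor()


trio.run(main)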


@ -0,0 +1,182 @@
'''
Async context manager cache api testing: ``trionics.maybe_open_context():``
'''
from contextlib import asynccontextmanager as acm
import platform
from typing import Awaitable
import pytest
import trio
import tractor
_resource: int = 0
@acm
async def maybe_increment_counter(task_name: str):
global _resource
_resource += 1
await trio.lowlevel.checkpoint()
yield _resource
await trio.lowlevel.checkpoint()
_resource -= 1
@pytest.mark.parametrize(
'key_on',
['key_value', 'kwargs'],
ids="key_on={}".format,
)
def test_resource_only_entered_once(key_on):
global _resource
_resource = 0
kwargs = {}
key = None
if key_on == 'key_value':
key = 'some_common_key'
async def main():
cache_active: bool = False
async def enter_cached_mngr(name: str):
nonlocal cache_active
if key_on == 'kwargs':
# make a common kwargs input to key on it
kwargs = {'task_name': 'same_task_name'}
assert key is None
else:
# different task names per task will be used
kwargs = {'task_name': name}
async with tractor.trionics.maybe_open_context(
maybe_increment_counter,
kwargs=kwargs,
key=key,
) as (cache_hit, resource):
if cache_hit:
try:
cache_active = True
assert resource == 1
await trio.sleep_forever()
finally:
cache_active = False
else:
assert resource == 1
await trio.sleep_forever()
with trio.move_on_after(0.5):
async with (
tractor.open_root_actor(),
trio.open_nursery() as n,
):
for i in range(10):
n.start_soon(enter_cached_mngr, f'task_{i}')
await trio.sleep(0.001)
trio.run(main)
@tractor.context
async def streamer(
ctx: tractor.Context,
seq: list[int] = list(range(1000)),
) -> None:
await ctx.started()
async with ctx.open_stream() as stream:
for val in seq:
await stream.send(val)
await trio.sleep(0.001)
print('producer finished')
@acm
async def open_stream() -> Awaitable[tractor.MsgStream]:
async with tractor.open_nursery() as tn:
portal = await tn.start_actor('streamer', enable_modules=[__name__])
async with (
portal.open_context(streamer) as (ctx, first),
ctx.open_stream() as stream,
):
yield stream
await portal.cancel_actor()
print('CANCELLED STREAMER')
@acm
async def maybe_open_stream(taskname: str):
async with tractor.trionics.maybe_open_context(
# NOTE: all secondary tasks should cache hit on the same key
acm_func=open_stream,
) as (cache_hit, stream):
if cache_hit:
print(f'{taskname} loaded from cache')
# add a new broadcast subscription for the quote stream
# if this feed is already allocated by the first
# task that entered
async with stream.subscribe() as bstream:
yield bstream
else:
# yield the actual stream
yield stream
def test_open_local_sub_to_stream():
'''
Verify a single inter-actor stream can be fanned-out and shared by
N local tasks using ``trionics.maybe_open_context():``.
'''
timeout = 3 if platform.system() != "Windows" else 10
async def main():
full = list(range(1000))
async def get_sub_and_pull(taskname: str):
async with (
maybe_open_stream(taskname) as stream,
):
if '0' in taskname:
assert isinstance(stream, tractor.MsgStream)
else:
assert isinstance(
stream,
tractor.trionics.BroadcastReceiver
)
first = await stream.receive()
print(f'{taskname} started with value {first}')
seq = []
async for msg in stream:
seq.append(msg)
assert set(seq).issubset(set(full))
print(f'{taskname} finished')
with trio.fail_after(timeout):
# TODO: turns out this isn't multi-task entrant XD
# We probably need an idempotent entry semantic?
async with tractor.open_root_actor():
async with (
trio.open_nursery() as nurse,
):
for i in range(10):
nurse.start_soon(get_sub_and_pull, f'task_{i}')
await trio.sleep(0.001)
print('all consumer tasks finished')
trio.run(main)
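
The gist of the caching semantics under test, as a sketch: the first entrant actually enters the async context manager, every later task keyed the same way gets a cache hit and the already-allocated value (the `open_shared` manager and task names are illustrative; only the `maybe_open_context()` signature shown above is assumed):

from contextlib import asynccontextmanager as acm

import trio
import tractor


@acm
async def open_shared():
    # stand-in for an expensive resource (connection, feed, ...)
    yield object()


async def worker(name: str):
    async with tractor.trionics.maybe_open_context(
        acm_func=open_shared,
        kwargs={},
    ) as (cache_hit, res):
        # every task after the first sees `cache_hit=True` and the
        # same `res` instance allocated by the first entrant
        print(name, cache_hit, id(res))
        await trio.sleep(0.1)


async def main():
    async with (
        tractor.open_root_actor(),
        trio.open_nursery() as n,
    ):
        for i in range(3):
            n.start_soon(worker, f'task_{i}')


trio.run(main)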


@ -53,54 +53,58 @@ def test_rpc_errors(arb_addr, to_call, testdir):
     exposed_mods, funcname, inside_err = to_call
     subactor_exposed_mods = []
     func_defined = globals().get(funcname, False)
-    subactor_requests_to = 'arbiter'
+    subactor_requests_to = 'root'
     remote_err = tractor.RemoteActorError

     # remote module that fails at import time
     if exposed_mods == ['tmp_mod']:
         # create an importable module with a bad import
         testdir.syspathinsert()
-        # module should cause raise a ModuleNotFoundError at import
+        # module should raise a ModuleNotFoundError at import
         testdir.makefile('.py', tmp_mod=funcname)

-        # no need to exposed module to the subactor
+        # no need to expose module to the subactor
         subactor_exposed_mods = exposed_mods
         exposed_mods = []
         func_defined = False
         # subactor should not try to invoke anything
         subactor_requests_to = None
-        remote_err = trio.MultiError
+        # the module will be attempted to be imported locally but will
+        # fail in the initial local instance of the actor
+        remote_err = inside_err

     async def main():
-        actor = tractor.current_actor()
-        assert actor.is_arbiter

         # spawn a subactor which calls us back
-        async with tractor.open_nursery() as n:
+        async with tractor.open_nursery(
+            arbiter_addr=arb_addr,
+            enable_modules=exposed_mods.copy(),
+        ) as n:
+            actor = tractor.current_actor()
+            assert actor.is_arbiter

             await n.run_in_actor(
-                'subactor',
                 sleep_back_actor,
                 actor_name=subactor_requests_to,
+                name='subactor',

                 # function from the local exposed module space
                 # the subactor will invoke when it RPCs back to this actor
                 func_name=funcname,
                 exposed_mods=exposed_mods,
                 func_defined=True if func_defined else False,
-                rpc_module_paths=subactor_exposed_mods,
+                enable_modules=subactor_exposed_mods,
             )

     def run():
-        tractor.run(
-            main,
-            arbiter_addr=arb_addr,
-            rpc_module_paths=exposed_mods,
-        )
+        trio.run(main)

     # handle both parameterized cases
     if exposed_mods and func_defined:
         run()
     else:
-        # underlying errors are propagated upwards (yet)
+        # underlying errors aren't propagated upwards (yet)
         with pytest.raises(remote_err) as err:
             run()

@ -114,4 +118,5 @@ def test_rpc_errors(arb_addr, to_call, testdir):
             value.exceptions
         ))

-    assert value.type is inside_err
+    if getattr(value, 'type', None):
+        assert value.type is inside_err


@ -0,0 +1,73 @@
"""
Verifying internal runtime state and undocumented extras.
"""
import os
import pytest
import trio
import tractor
from conftest import tractor_test
_file_path: str = ''
def unlink_file():
print('Removing tmp file!')
os.remove(_file_path)
async def crash_and_clean_tmpdir(
tmp_file_path: str,
error: bool = True,
):
global _file_path
_file_path = tmp_file_path
actor = tractor.current_actor()
actor.lifetime_stack.callback(unlink_file)
assert os.path.isfile(tmp_file_path)
await trio.sleep(0.1)
if error:
assert 0
else:
actor.cancel_soon()
@pytest.mark.parametrize(
'error_in_child',
[True, False],
)
@tractor_test
async def test_lifetime_stack_wipes_tmpfile(
tmp_path,
error_in_child: bool,
):
child_tmp_file = tmp_path / "child.txt"
child_tmp_file.touch()
assert child_tmp_file.exists()
path = str(child_tmp_file)
try:
with trio.move_on_after(0.5):
async with tractor.open_nursery() as n:
await ( # inlined portal
await n.run_in_actor(
crash_and_clean_tmpdir,
tmp_file_path=path,
error=error_in_child,
)
).result()
except (
tractor.RemoteActorError,
tractor.BaseExceptionGroup,
):
pass
# tmp file should have been wiped by
# teardown stack.
assert not child_tmp_file.exists()
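
A sketch of the (undocumented) teardown-registration hook this test exercises; only the `lifetime_stack.callback()` usage shown above is assumed (the function names are illustrative):

import trio
import tractor


async def runs_in_subactor():
    actor = tractor.current_actor()
    # registered callbacks fire on actor-runtime teardown,
    # even if the main task errors afterwards
    actor.lifetime_stack.callback(lambda: print('cleaning up!'))


async def main():
    async with tractor.open_nursery() as n:
        portal = await n.run_in_actor(runs_in_subactor)
        await portal.result()


trio.run(main)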


@ -1,55 +1,71 @@
""" """
Spawning basics Spawning basics
""" """
from typing import Optional
import pytest
import trio import trio
import tractor import tractor
from conftest import tractor_test from conftest import tractor_test
statespace = {'doggy': 10, 'kitty': 4} data_to_pass_down = {'doggy': 10, 'kitty': 4}
async def spawn(is_arbiter): async def spawn(
is_arbiter: bool,
data: dict,
arb_addr: tuple[str, int],
):
namespaces = [__name__] namespaces = [__name__]
await trio.sleep(0.1) await trio.sleep(0.1)
actor = tractor.current_actor()
assert actor.is_arbiter == is_arbiter
assert actor.statespace == statespace
if actor.is_arbiter: async with tractor.open_root_actor(
async with tractor.open_nursery() as nursery: arbiter_addr=arb_addr,
# forks here ):
portal = await nursery.run_in_actor(
'sub-actor',
spawn,
is_arbiter=False,
statespace=statespace,
rpc_module_paths=namespaces,
)
assert len(nursery._children) == 1 actor = tractor.current_actor()
assert portal.channel.uid in tractor.current_actor()._peers assert actor.is_arbiter == is_arbiter
# be sure we can still get the result data = data_to_pass_down
result = await portal.result()
assert result == 10 if actor.is_arbiter:
return result
else: async with tractor.open_nursery(
return 10 ) as nursery:
# forks here
portal = await nursery.run_in_actor(
spawn,
is_arbiter=False,
name='sub-actor',
data=data,
arb_addr=arb_addr,
enable_modules=namespaces,
)
assert len(nursery._children) == 1
assert portal.channel.uid in tractor.current_actor()._peers
# be sure we can still get the result
result = await portal.result()
assert result == 10
return result
else:
return 10
def test_local_arbiter_subactor_global_state(arb_addr): def test_local_arbiter_subactor_global_state(arb_addr):
result = tractor.run( result = trio.run(
spawn, spawn,
True, True,
name='arbiter', data_to_pass_down,
statespace=statespace, arb_addr,
arbiter_addr=arb_addr,
) )
assert result == 10 assert result == 10
def movie_theatre_question(): async def movie_theatre_question():
"""A question asked in a dark theatre, in a tangent """A question asked in a dark theatre, in a tangent
(errr, I mean different) process. (errr, I mean different) process.
""" """
@ -57,7 +73,7 @@ def movie_theatre_question():
@tractor_test @tractor_test
async def test_movie_theatre_convo(): async def test_movie_theatre_convo(start_method):
"""The main ``tractor`` routine. """The main ``tractor`` routine.
""" """
async with tractor.open_nursery() as n: async with tractor.open_nursery() as n:
@ -65,12 +81,12 @@ async def test_movie_theatre_convo():
portal = await n.start_actor( portal = await n.start_actor(
'frank', 'frank',
# enable the actor to run funcs from this current module # enable the actor to run funcs from this current module
rpc_module_paths=[__name__], enable_modules=[__name__],
) )
print(await portal.run(__name__, 'movie_theatre_question')) print(await portal.run(movie_theatre_question))
# call the subactor a 2nd time # call the subactor a 2nd time
print(await portal.run(__name__, 'movie_theatre_question')) print(await portal.run(movie_theatre_question))
# the async with will block here indefinitely waiting # the async with will block here indefinitely waiting
# for our actor "frank" to complete, we cancel 'frank' # for our actor "frank" to complete, we cancel 'frank'
@ -78,19 +94,75 @@ async def test_movie_theatre_convo():
await portal.cancel_actor() await portal.cancel_actor()
def cellar_door(): async def cellar_door(return_value: Optional[str]):
return "Dang that's beautiful" return return_value
@pytest.mark.parametrize(
'return_value', ["Dang that's beautiful", None],
ids=['return_str', 'return_None'],
)
@tractor_test @tractor_test
async def test_most_beautiful_word(): async def test_most_beautiful_word(
"""The main ``tractor`` routine. start_method,
""" return_value
async with tractor.open_nursery() as n: ):
'''
The main ``tractor`` routine.
portal = await n.run_in_actor('some_linguist', cellar_door) '''
with trio.fail_after(1):
async with tractor.open_nursery() as n:
portal = await n.run_in_actor(
cellar_door,
return_value=return_value,
name='some_linguist',
)
print(await portal.result())
# The ``async with`` will unblock here since the 'some_linguist' # The ``async with`` will unblock here since the 'some_linguist'
# actor has completed its main task ``cellar_door``. # actor has completed its main task ``cellar_door``.
# this should pull the cached final result already captured during
# the nursery block exit.
print(await portal.result()) print(await portal.result())
async def check_loglevel(level):
assert tractor.current_actor().loglevel == level
log = tractor.log.get_logger()
# XXX using a level actually used inside tractor seems to trigger
# some kind of `logging` module bug FYI.
log.critical('yoyoyo')
def test_loglevel_propagated_to_subactor(
start_method,
capfd,
arb_addr,
):
if start_method == 'mp_forkserver':
pytest.skip(
"a bug with `capfd` seems to make forkserver capture not work?")
level = 'critical'
async def main():
async with tractor.open_nursery(
name='arbiter',
start_method=start_method,
arbiter_addr=arb_addr,
) as tn:
await tn.run_in_actor(
check_loglevel,
loglevel=level,
level=level,
)
trio.run(main)
# ensure subactor spits log message on stderr
captured = capfd.readouterr()
assert 'yoyoyo' in captured.err
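
The one-shot spawn-and-result flow from `test_most_beautiful_word` above, reduced to a sketch (function and actor names are illustrative):

import trio
import tractor


async def cellar_door(return_value):
    return return_value


async def main():
    async with tractor.open_nursery() as n:
        portal = await n.run_in_actor(
            cellar_door,
            return_value='beautiful',
            name='some_linguist',
        )

    # nursery exit blocks until the one-shot task completes;
    # the final result is cached and can still be pulled here.
    print(await portal.result())


trio.run(main)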


@ -1,189 +0,0 @@
"""
Streaming via async gen api
"""
import time
import trio
import tractor
import pytest
async def stream_seq(sequence):
for i in sequence:
yield i
await trio.sleep(0.1)
# block indefinitely waiting to be cancelled by ``aclose()`` call
with trio.CancelScope() as cs:
await trio.sleep(float('inf'))
assert 0
assert cs.cancelled_caught
async def stream_from_single_subactor():
"""Verify we can spawn a daemon actor and retrieve streamed data.
"""
async with tractor.find_actor('brokerd') as portals:
if not portals:
# only one per host address, spawns an actor if None
async with tractor.open_nursery() as nursery:
# no brokerd actor found
portal = await nursery.start_actor(
'streamerd',
rpc_module_paths=[__name__],
statespace={'global_dict': {}},
)
seq = range(10)
agen = await portal.run(
__name__,
'stream_seq', # the func above
sequence=list(seq), # has to be msgpack serializable
)
# it'd sure be nice to have an asyncitertools here...
iseq = iter(seq)
ival = next(iseq)
async for val in agen:
assert val == ival
try:
ival = next(iseq)
except StopIteration:
# should cancel far end task which will be
# caught and no error is raised
await agen.aclose()
await trio.sleep(0.3)
try:
await agen.__anext__()
except StopAsyncIteration:
# stop all spawned subactors
await portal.cancel_actor()
# await nursery.cancel()
def test_stream_from_single_subactor(arb_addr):
"""Verify streaming from a spawned async generator.
"""
tractor.run(stream_from_single_subactor, arbiter_addr=arb_addr)
# this is the first 2 actors, streamer_1 and streamer_2
async def stream_data(seed):
for i in range(seed):
yield i
await trio.sleep(0) # trigger scheduler
# this is the third actor; the aggregator
async def aggregate(seed):
"""Ensure that the two streams we receive match but only stream
a single set of values to the parent.
"""
async with tractor.open_nursery() as nursery:
portals = []
for i in range(1, 3):
# fork point
portal = await nursery.start_actor(
name=f'streamer_{i}',
rpc_module_paths=[__name__],
)
portals.append(portal)
send_chan, recv_chan = trio.open_memory_channel(500)
async def push_to_chan(portal):
async for value in await portal.run(
__name__, 'stream_data', seed=seed
):
# leverage trio's built-in backpressure
await send_chan.send(value)
await send_chan.send(None)
print(f"FINISHED ITERATING {portal.channel.uid}")
# spawn 2 trio tasks to collect streams and push to a local queue
async with trio.open_nursery() as n:
for portal in portals:
n.start_soon(push_to_chan, portal)
unique_vals = set()
async for value in recv_chan:
if value not in unique_vals:
unique_vals.add(value)
# yield upwards to the spawning parent actor
yield value
if value is None:
break
assert value in unique_vals
print("FINISHED ITERATING in aggregator")
await nursery.cancel()
print("WAITING on `ActorNursery` to finish")
print("AGGREGATOR COMPLETE!")
# this is the main actor and *arbiter*
async def a_quadruple_example():
# a nursery which spawns "actors"
async with tractor.open_nursery() as nursery:
seed = int(1e3)
pre_start = time.time()
portal = await nursery.run_in_actor(
'aggregator',
aggregate,
seed=seed,
)
start = time.time()
# the portal call returns exactly what you'd expect
# as if the remote "aggregate" function was called locally
result_stream = []
async for value in await portal.result():
result_stream.append(value)
print(f"STREAM TIME = {time.time() - start}")
print(f"STREAM + SPAWN TIME = {time.time() - pre_start}")
assert result_stream == list(range(seed)) + [None]
return result_stream
async def cancel_after(wait):
with trio.move_on_after(wait):
return await a_quadruple_example()
@pytest.fixture(scope='module')
def time_quad_ex(arb_addr):
start = time.time()
results = tractor.run(cancel_after, 3, arbiter_addr=arb_addr)
diff = time.time() - start
assert results
return results, diff
def test_a_quadruple_example(time_quad_ex):
"""This also serves as a kind of "we'd like to be this fast test"."""
results, diff = time_quad_ex
assert results
assert diff < 2.5
@pytest.mark.parametrize(
'cancel_delay',
list(map(lambda i: i/10, range(3, 9)))
)
def test_not_fast_enough_quad(arb_addr, time_quad_ex, cancel_delay):
"""Verify we can cancel midway through the quad example and all actors
cancel gracefully.
"""
results, diff = time_quad_ex
delay = max(diff - cancel_delay, 0)
results = tractor.run(cancel_after, delay, arbiter_addr=arb_addr)
assert results is None


@ -0,0 +1,514 @@
"""
Broadcast channels for fan-out to local tasks.
"""
from contextlib import asynccontextmanager
from functools import partial
from itertools import cycle
import time
from typing import Optional
import pytest
import trio
from trio.lowlevel import current_task
import tractor
from tractor.trionics import (
broadcast_receiver,
Lagged,
)
@tractor.context
async def echo_sequences(
ctx: tractor.Context,
) -> None:
'''Bidir streaming endpoint which will stream
back any sequence it is sent item-wise.
'''
await ctx.started()
async with ctx.open_stream() as stream:
async for sequence in stream:
seq = list(sequence)
for value in seq:
await stream.send(value)
print(f'producer sent {value}')
async def ensure_sequence(
stream: tractor.MsgStream,
sequence: list,
delay: Optional[float] = None,
) -> None:
name = current_task().name
async with stream.subscribe() as bcaster:
assert not isinstance(bcaster, type(stream))
async for value in bcaster:
print(f'{name} rx: {value}')
assert value == sequence[0]
sequence.remove(value)
if delay:
await trio.sleep(delay)
if not sequence:
# fully consumed
break
@asynccontextmanager
async def open_sequence_streamer(
sequence: list[int],
arb_addr: tuple[str, int],
start_method: str,
) -> tractor.MsgStream:
async with tractor.open_nursery(
arbiter_addr=arb_addr,
start_method=start_method,
) as tn:
portal = await tn.start_actor(
'sequence_echoer',
enable_modules=[__name__],
)
async with portal.open_context(
echo_sequences,
) as (ctx, first):
assert first is None
async with ctx.open_stream(backpressure=True) as stream:
yield stream
await portal.cancel_actor()
def test_stream_fan_out_to_local_subscriptions(
arb_addr,
start_method,
):
sequence = list(range(1000))
async def main():
async with open_sequence_streamer(
sequence,
arb_addr,
start_method,
) as stream:
async with trio.open_nursery() as n:
for i in range(10):
n.start_soon(
ensure_sequence,
stream,
sequence.copy(),
name=f'consumer_{i}',
)
await stream.send(tuple(sequence))
async for value in stream:
print(f'source stream rx: {value}')
assert value == sequence[0]
sequence.remove(value)
if not sequence:
# fully consumed
break
trio.run(main)
@pytest.mark.parametrize(
'task_delays',
[
(0.01, 0.001),
(0.001, 0.01),
]
)
def test_consumer_and_parent_maybe_lag(
arb_addr,
start_method,
task_delays,
):
async def main():
sequence = list(range(300))
parent_delay, sub_delay = task_delays
async with open_sequence_streamer(
sequence,
arb_addr,
start_method,
) as stream:
try:
async with trio.open_nursery() as n:
n.start_soon(
ensure_sequence,
stream,
sequence.copy(),
sub_delay,
name='consumer_task',
)
await stream.send(tuple(sequence))
# async for value in stream:
lagged = False
lag_count = 0
while True:
try:
value = await stream.receive()
print(f'source stream rx: {value}')
if lagged:
# re set the sequence starting at our last
# value
sequence = sequence[sequence.index(value) + 1:]
else:
assert value == sequence[0]
sequence.remove(value)
lagged = False
except Lagged:
lagged = True
print(f'source stream lagged after {value}')
lag_count += 1
continue
# lag the parent
await trio.sleep(parent_delay)
if not sequence:
# fully consumed
break
print(f'parent + source stream lagged: {lag_count}')
if parent_delay > sub_delay:
assert lag_count > 0
except Lagged:
# child was lagged
assert parent_delay < sub_delay
trio.run(main)
def test_faster_task_to_recv_is_cancelled_by_slower(
arb_addr,
start_method,
):
'''
Ensure that if a faster task consuming from a stream is cancelled
the slower task can continue to receive all expected values.
'''
async def main():
sequence = list(range(1000))
async with open_sequence_streamer(
sequence,
arb_addr,
start_method,
) as stream:
async with trio.open_nursery() as n:
n.start_soon(
ensure_sequence,
stream,
sequence.copy(),
0,
name='consumer_task',
)
await stream.send(tuple(sequence))
# pull 3 values, cancel the subtask, then
# expect to be able to pull all values still
for i in range(20):
try:
value = await stream.receive()
print(f'source stream rx: {value}')
await trio.sleep(0.01)
except Lagged:
print(f'parent overrun after {value}')
continue
print('cancelling faster subtask')
n.cancel_scope.cancel()
try:
value = await stream.receive()
print(f'source stream after cancel: {value}')
except Lagged:
print(f'parent overrun after {value}')
# expect to see all remaining values
with trio.fail_after(0.5):
async for value in stream:
assert stream._broadcaster._state.recv_ready is None
print(f'source stream rx: {value}')
if value == 999:
# fully consumed and we missed no values once
# the faster subtask was cancelled
break
# await tractor.breakpoint()
# await stream.receive()
print(f'final value: {value}')
trio.run(main)
def test_subscribe_errors_after_close():
async def main():
size = 1
tx, rx = trio.open_memory_channel(size)
async with broadcast_receiver(rx, size) as brx:
pass
try:
# open and close
async with brx.subscribe():
pass
except trio.ClosedResourceError:
assert brx.key not in brx._state.subs
else:
assert 0
trio.run(main)
def test_ensure_slow_consumers_lag_out(
arb_addr,
start_method,
):
'''This is a pure local task test; no tractor
machinery is really required.
'''
async def main():
# make sure it all works within the runtime
async with tractor.open_root_actor():
num_laggers = 4
laggers: dict[str, int] = {}
retries = 3
size = 100
tx, rx = trio.open_memory_channel(size)
brx = broadcast_receiver(rx, size)
async def sub_and_print(
delay: float,
) -> None:
task = current_task()
start = time.time()
async with brx.subscribe() as lbrx:
while True:
print(f'{task.name}: starting consume loop')
try:
async for value in lbrx:
print(f'{task.name}: {value}')
await trio.sleep(delay)
if task.name == 'sub_1':
# trigger checkpoint to clean out other subs
await trio.sleep(0.01)
# the non-lagger got
# a ``trio.EndOfChannel``
# because the ``tx`` below was closed
assert len(lbrx._state.subs) == 1
await lbrx.aclose()
assert len(lbrx._state.subs) == 0
except trio.ClosedResourceError:
# only the fast sub will try to re-enter
# iteration on the now closed bcaster
assert task.name == 'sub_1'
return
except Lagged:
lag_time = time.time() - start
lags = laggers[task.name]
print(
f'restarting slow task {task.name} '
f'that bailed out on {lags}:{value} '
f'after {lag_time:.3f}')
if lags <= retries:
laggers[task.name] += 1
continue
else:
print(
f'{task.name} was too slow and terminated '
f'on {lags}:{value}')
return
async with trio.open_nursery() as nursery:
for i in range(1, num_laggers):
task_name = f'sub_{i}'
laggers[task_name] = 0
nursery.start_soon(
partial(
sub_and_print,
delay=i*0.001,
),
name=task_name,
)
# allow subs to sched
await trio.sleep(0.1)
async with tx:
for i in cycle(range(size)):
await tx.send(i)
if len(brx._state.subs) == 2:
# only one, the non lagger, sub is left
break
# the non-lagger
assert laggers.pop('sub_1') == 0
for n, v in laggers.items():
assert v == 4
assert tx._closed
assert not tx._state.open_send_channels
# check that "first" bcaster that we created
# above, never was iterated and is thus overrun
try:
await brx.receive()
except Lagged:
# expect tokio style index truncation
seq = brx._state.subs[brx.key]
assert seq == len(brx._state.queue) - 1
# all backpressured entries in the underlying
# channel should have been copied into the caster
# queue trailing-window
async for i in rx:
print(f'bped: {i}')
assert i in brx._state.queue
# should be noop
await brx.aclose()
trio.run(main)
def test_first_recver_is_cancelled():
async def main():
# make sure it all works within the runtime
async with tractor.open_root_actor():
tx, rx = trio.open_memory_channel(1)
brx = broadcast_receiver(rx, 1)
cs = trio.CancelScope()
async def sub_and_recv():
with cs:
async with brx.subscribe() as bc:
async for value in bc:
print(value)
async def cancel_and_send():
await trio.sleep(0.2)
cs.cancel()
await tx.send(1)
async with trio.open_nursery() as n:
n.start_soon(sub_and_recv)
await trio.sleep(0.1)
assert brx._state.recv_ready
n.start_soon(cancel_and_send)
# ensure that we don't hang because no-task is now
# waiting on the underlying receive..
with trio.fail_after(0.5):
value = await brx.receive()
print(f'parent: {value}')
assert value == 1
trio.run(main)
def test_no_raise_on_lag():
'''
Run a simple 2-task broadcast where one task is slow but configured
so that it does not raise `Lagged` on overruns using
`raise_on_lag=False` and verify that the task does not raise.
'''
size = 100
tx, rx = trio.open_memory_channel(size)
brx = broadcast_receiver(rx, size)
async def slow():
async with brx.subscribe(
raise_on_lag=False,
) as br:
async for msg in br:
print(f'slow task got: {msg}')
await trio.sleep(0.1)
async def fast():
async with brx.subscribe() as br:
async for msg in br:
print(f'fast task got: {msg}')
async def main():
async with (
tractor.open_root_actor(
# NOTE: so we see the warning msg emitted by the bcaster
# internals when the no raise flag is set.
loglevel='warning',
),
trio.open_nursery() as n,
):
n.start_soon(slow)
n.start_soon(fast)
for i in range(1000):
await tx.send(i)
# simulate user nailing ctl-c after realizing
# there's a lag in the slow task.
await trio.sleep(1)
raise KeyboardInterrupt
with pytest.raises(KeyboardInterrupt):
trio.run(main)
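
A condensed sketch of the local fan-out mechanics tested above, assuming only the `broadcast_receiver()`/`subscribe()` API these tests use: each subscriber sees every value and too-slow subscribers may raise `Lagged` (consumer names are illustrative):

import trio
from tractor.trionics import broadcast_receiver, Lagged


async def main():
    tx, rx = trio.open_memory_channel(16)
    brx = broadcast_receiver(rx, 16)

    async def consumer(name: str):
        async with brx.subscribe() as sub:
            try:
                # iteration ends with the stream once `tx` is closed
                async for value in sub:
                    print(name, value)
            except Lagged:
                print(name, 'fell behind!')

    async with trio.open_nursery() as n:
        n.start_soon(consumer, 'a')
        n.start_soon(consumer, 'b')
        await trio.sleep(0.1)  # let subs subscribe

        async with tx:
            for i in range(5):
                await tx.send(i)


trio.run(main)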


@ -0,0 +1,82 @@
'''
Reminders for oddities in `trio` that we need to stay aware of and/or
want to see changed.
'''
import pytest
import trio
from trio_typing import TaskStatus
@pytest.mark.parametrize(
'use_start_soon', [
pytest.param(
True,
marks=pytest.mark.xfail(reason="see python-trio/trio#2258")
),
False,
]
)
def test_stashed_child_nursery(use_start_soon):
_child_nursery = None
async def waits_on_signal(
ev: trio.Event,
task_status: TaskStatus[trio.Nursery] = trio.TASK_STATUS_IGNORED,
):
'''
Do some stuf, then signal other tasks, then yield back to "starter".
'''
await ev.wait()
task_status.started()
async def mk_child_nursery(
task_status: TaskStatus = trio.TASK_STATUS_IGNORED,
):
'''
Allocate a child sub-nursery and stash it as a global.
'''
nonlocal _child_nursery
async with trio.open_nursery() as cn:
_child_nursery = cn
task_status.started(cn)
# block until cancelled by parent.
await trio.sleep_forever()
async def sleep_and_err(
ev: trio.Event,
task_status: TaskStatus = trio.TASK_STATUS_IGNORED,
):
await trio.sleep(0.5)
doggy() # noqa
ev.set()
task_status.started()
async def main():
async with (
trio.open_nursery() as pn,
):
cn = await pn.start(mk_child_nursery)
assert cn
ev = trio.Event()
if use_start_soon:
# this causes inf hang
cn.start_soon(sleep_and_err, ev)
else:
# this does not.
await cn.start(sleep_and_err, ev)
with trio.fail_after(1):
await cn.start(waits_on_signal, ev)
with pytest.raises(NameError):
trio.run(main)
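
The distinction the xfail above hinges on, illustrated with plain `trio` (no tractor machinery): `await nursery.start()` blocks until the child calls `task_status.started()`, so startup failures surface at the call site, while `start_soon()` returns immediately:

import trio


async def server(task_status=trio.TASK_STATUS_IGNORED):
    # pretend to bind a listener before signalling readiness
    await trio.sleep(0.1)
    task_status.started('listening')
    await trio.sleep_forever()


async def main():
    async with trio.open_nursery() as n:
        # blocks until `started()` is called; init errors propagate
        # here rather than only at nursery exit
        state = await n.start(server)
        assert state == 'listening'
        n.cancel_scope.cancel()


trio.run(main)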


@ -1,119 +1,86 @@
""" # tractor: structured concurrent "actors".
tractor: An actor model micro-framework built on # Copyright 2018-eternity Tyler Goodlet.
``trio`` and ``multiprocessing``.
"""
import importlib
from functools import partial
from typing import Tuple, Any
import typing
import trio # type: ignore # This program is free software: you can redistribute it and/or modify
from trio import MultiError # it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
from .log import get_console_log, get_logger, get_loglevel # This program is distributed in the hope that it will be useful,
from ._ipc import _connect_chan, Channel, Context # but WITHOUT ANY WARRANTY; without even the implied warranty of
from ._actor import ( # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
Actor, _start_actor, Arbiter, get_arbiter, find_actor, wait_for_actor # GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
tractor: structured concurrent "actors".
"""
from exceptiongroup import BaseExceptionGroup
from ._clustering import open_actor_cluster
from ._ipc import Channel
from ._streaming import (
Context,
MsgStream,
stream,
context,
)
from ._discovery import (
get_arbiter,
find_actor,
wait_for_actor,
query_actor,
)
from ._supervise import open_nursery
from ._state import (
current_actor,
is_root_process,
)
from ._exceptions import (
RemoteActorError,
ModuleNotExposed,
ContextCancelled,
)
from ._debug import (
breakpoint,
post_mortem,
) )
from ._trionics import open_nursery
from ._state import current_actor
from ._exceptions import RemoteActorError, ModuleNotExposed
from . import msg from . import msg
from ._root import (
run_daemon,
open_root_actor,
)
from ._portal import Portal
from ._runtime import Actor
__all__ = [ __all__ = [
'Actor',
'Channel',
'Context',
'ContextCancelled',
'ModuleNotExposed',
'MsgStream',
'BaseExceptionGroup',
'Portal',
'RemoteActorError',
'breakpoint',
'context',
'current_actor', 'current_actor',
'find_actor', 'find_actor',
'get_arbiter', 'get_arbiter',
'is_root_process',
'msg',
'open_actor_cluster',
'open_nursery', 'open_nursery',
'open_root_actor',
'post_mortem',
'query_actor',
'run_daemon',
'stream',
'to_asyncio',
'wait_for_actor', 'wait_for_actor',
'Channel',
'MultiError',
'RemoteActorError',
'ModuleNotExposed',
'msg'
] ]
# set at startup and after forks
_default_arbiter_host = '127.0.0.1'
_default_arbiter_port = 1616
async def _main(
async_fn: typing.Callable[..., typing.Awaitable],
args: Tuple,
kwargs: typing.Dict[str, typing.Any],
name: str,
arbiter_addr: Tuple[str, int]
) -> typing.Any:
"""Async entry point for ``tractor``.
"""
log = get_logger('tractor')
main = partial(async_fn, *args)
arbiter_addr = (host, port) = arbiter_addr or (
_default_arbiter_host, _default_arbiter_port)
get_console_log(kwargs.get('loglevel', get_loglevel()))
# make a temporary connection to see if an arbiter exists
arbiter_found = False
try:
async with _connect_chan(host, port):
arbiter_found = True
except OSError:
log.warning(f"No actor could be found @ {host}:{port}")
# create a local actor and start up its main routine/task
if arbiter_found: # we were able to connect to an arbiter
log.info(f"Arbiter seems to exist @ {host}:{port}")
actor = Actor(
name or 'anonymous',
arbiter_addr=arbiter_addr,
**kwargs
)
host, port = (host, 0)
else:
# start this local actor as the arbiter
actor = Arbiter(
name or 'arbiter', arbiter_addr=arbiter_addr, **kwargs)
# ``Actor._async_main()`` creates an internal nursery if one is not
# provided and thus blocks here until it's main task completes.
# Note that if the current actor is the arbiter it is desirable
# for it to stay up indefinitely until a re-election process has
# taken place - which is not implemented yet FYI).
return await _start_actor(
actor, main, host, port, arbiter_addr=arbiter_addr)
def run(
async_fn: typing.Callable[..., typing.Awaitable],
*args: Tuple,
name: str = None,
arbiter_addr: Tuple[str, int] = (
_default_arbiter_host, _default_arbiter_port),
**kwargs: typing.Dict[str, typing.Any],
) -> Any:
"""Run a trio-actor async function in process.
This is tractor's main entry and the start point for any async actor.
"""
return trio.run(_main, async_fn, args, kwargs, name, arbiter_addr)
def run_daemon(
rpc_module_paths: Tuple[str],
**kwargs
) -> None:
"""Spawn daemon actor which will respond to RPC.
This is a convenience wrapper around
``tractor.run(trio.sleep(float('inf')))`` such that the first actor spawned
is meant to run forever responding to RPC requests.
"""
kwargs['rpc_module_paths'] = rpc_module_paths
for path in rpc_module_paths:
importlib.import_module(path)
return run(partial(trio.sleep, float('inf')), **kwargs)
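
For contrast with the deleted `tractor.run()` above, a sketch of the replacement entrypoint now exported: engage the runtime explicitly with `open_root_actor()` under a plain `trio.run()` (the actor name is illustrative):

import trio
import tractor


async def main():
    # explicitly engage the runtime where `tractor.run()` used to
    async with tractor.open_root_actor():
        async with tractor.open_nursery() as n:
            portal = await n.start_actor(
                'worker',
                enable_modules=[__name__],
            )
            await portal.cancel_actor()


trio.run(main)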


@ -1,902 +0,0 @@
"""
Actor primitives and helpers
"""
from collections import defaultdict
from functools import partial
from itertools import chain
import importlib
import inspect
import uuid
import typing
from typing import Dict, List, Tuple, Any, Optional, Union
import trio # type: ignore
from async_generator import asynccontextmanager, aclosing
from ._ipc import Channel, _connect_chan, Context
from .log import get_console_log, get_logger
from ._exceptions import (
pack_error,
unpack_error,
ModuleNotExposed
)
from ._portal import (
Portal,
open_portal,
_do_handshake,
LocalPortal,
)
from . import _state
from ._state import current_actor
log = get_logger('tractor')
class ActorFailure(Exception):
"General actor failure"
async def _invoke(
actor: 'Actor',
cid: str,
chan: Channel,
func: typing.Callable,
kwargs: Dict[str, Any],
task_status=trio.TASK_STATUS_IGNORED
):
"""Invoke local func and return results over provided channel.
"""
sig = inspect.signature(func)
treat_as_gen = False
cs = None
ctx = Context(chan, cid)
if 'ctx' in sig.parameters:
kwargs['ctx'] = ctx
# TODO: eventually we want to be more stringent
# about what is considered a far-end async-generator.
# Right now both actual async gens and any async
# function which declares a `ctx` kwarg in its
# signature will be treated as one.
treat_as_gen = True
try:
is_async_partial = False
is_async_gen_partial = False
if isinstance(func, partial):
is_async_partial = inspect.iscoroutinefunction(func.func)
is_async_gen_partial = inspect.isasyncgenfunction(func.func)
if (
not inspect.iscoroutinefunction(func) and
not inspect.isasyncgenfunction(func) and
not is_async_partial and
not is_async_gen_partial
):
await chan.send({'functype': 'function', 'cid': cid})
with trio.CancelScope() as cs:
task_status.started(cs)
await chan.send({'return': func(**kwargs), 'cid': cid})
else:
coro = func(**kwargs)
if inspect.isasyncgen(coro):
await chan.send({'functype': 'asyncgen', 'cid': cid})
# XXX: massive gotcha! If the containing scope
# is cancelled and we execute the below line,
# any ``ActorNursery.__aexit__()`` WON'T be
# triggered in the underlying async gen! So we
# have to properly handle the closing (aclosing)
# of the async gen in order to be sure the cancel
# is propagated!
with trio.CancelScope() as cs:
task_status.started(cs)
async with aclosing(coro) as agen:
async for item in agen:
# TODO: can we send values back in here?
# it's gonna require a `while True:` and
# some non-blocking way to retrieve new `asend()`
# values from the channel:
# to_send = await chan.recv_nowait()
# if to_send is not None:
# to_yield = await coro.asend(to_send)
await chan.send({'yield': item, 'cid': cid})
log.debug(f"Finished iterating {coro}")
# TODO: we should really support a proper
# `StopAsyncIteration` system here for returning a final
# value if desired
await chan.send({'stop': True, 'cid': cid})
else:
if treat_as_gen:
await chan.send({'functype': 'asyncgen', 'cid': cid})
# XXX: the async-func may spawn further tasks which push
# back values like an async-generator would but must
# manualy construct the response dict-packet-responses as
# above
with trio.CancelScope() as cs:
task_status.started(cs)
await coro
if not cs.cancelled_caught:
# task was not cancelled so we can instruct the
# far end async gen to tear down
await chan.send({'stop': True, 'cid': cid})
else:
await chan.send({'functype': 'asyncfunction', 'cid': cid})
with trio.CancelScope() as cs:
task_status.started(cs)
await chan.send({'return': await coro, 'cid': cid})
except Exception as err:
# always ship errors back to caller
log.exception("Actor errored:")
err_msg = pack_error(err)
err_msg['cid'] = cid
try:
await chan.send(err_msg)
except trio.ClosedResourceError:
log.exception(
f"Failed to ship error to caller @ {chan.uid}")
if cs is None:
# error is from above code not from rpc invocation
task_status.started(err)
finally:
# RPC task bookeeping
try:
scope, func, is_complete = actor._rpc_tasks.pop((chan, cid))
is_complete.set()
except KeyError:
# If we're cancelled before the task returns then the
# cancel scope will not have been inserted yet
log.warning(
f"Task {func} was likely cancelled before it was started")
if not actor._rpc_tasks:
log.info("All RPC tasks have completed")
actor._no_more_rpc_tasks.set()
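# A minimal sketch of the ``aclosing()`` pattern relied on above --
# without it, a consumer cancelled mid-iteration may never finalize the
# async gen (illustrative names only; ``aclosing`` is assumed to come
# from the ``async_generator`` backport used in this era of the code):
#
#   async def consume(agen):
#       async with aclosing(agen) as gen:
#           async for item in gen:
#               ...  # even if cancelled here, ``gen.aclose()`` still runs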
class Actor:
"""The fundamental concurrency primitive.
An *actor* is the combination of a regular Python or
``multiprocessing.Process`` executing a ``trio`` task tree, communicating
with other actors through "portals" which provide a native async API
around "channels".
"""
is_arbiter = False
def __init__(
self,
name: str,
rpc_module_paths: List[str] = [],
statespace: Optional[Dict[str, Any]] = None,
uid: str = None,
loglevel: str = None,
arbiter_addr: Optional[Tuple[str, int]] = None,
) -> None:
self.name = name
self.uid = (name, uid or str(uuid.uuid1()))
self.rpc_module_paths = rpc_module_paths
self._mods: dict = {}
# TODO: consider making this a dynamically defined
# @dataclass once we get py3.7
self.statespace = statespace or {}
self.loglevel = loglevel
self._arb_addr = arbiter_addr
# filled in by `_async_main` after fork
self._root_nursery: trio._core._run.Nursery = None
self._server_nursery: trio._core._run.Nursery = None
self._peers: defaultdict = defaultdict(list)
self._peer_connected: dict = {}
self._no_more_peers = trio.Event()
self._no_more_peers.set()
self._no_more_rpc_tasks = trio.Event()
self._no_more_rpc_tasks.set()
# (chan, cid) -> (cancel_scope, func)
self._rpc_tasks: Dict[
Tuple[Channel, str],
Tuple[trio._core._run.CancelScope, typing.Callable, trio.Event]
] = {}
# map {uids -> {callids -> waiter queues}}
self._cids2qs: Dict[
Tuple[Tuple[str, str], str],
trio.abc.SendChannel[Any]] = {}
self._listeners: List[trio.abc.Listener] = []
self._parent_chan: Optional[Channel] = None
self._forkserver_info: Optional[Tuple[Any, Any, Any, Any, Any]] = None
async def wait_for_peer(
self, uid: Tuple[str, str]
) -> Tuple[trio.Event, Channel]:
"""Wait for a connection back from a spawned actor with a given
``uid``.
"""
log.debug(f"Waiting for peer {uid} to connect")
event = self._peer_connected.setdefault(uid, trio.Event())
await event.wait()
log.debug(f"{uid} successfully connected back to us")
return event, self._peers[uid][-1]
def load_modules(self) -> None:
"""Load allowed RPC modules locally (after fork).
Since this actor may be spawned on a different machine from
the original nursery we need to try and load the local module
code (if it exists).
"""
for path in self.rpc_module_paths:
log.debug(f"Attempting to import {path}")
self._mods[path] = importlib.import_module(path)
def _get_rpc_func(self, ns, funcname):
try:
return getattr(self._mods[ns], funcname)
except KeyError as err:
raise ModuleNotExposed(*err.args)
async def _stream_handler(
self,
stream: trio.SocketStream,
) -> None:
"""Entry point for new inbound connections to the channel server.
"""
self._no_more_peers.clear()
chan = Channel(stream=stream)
log.info(f"New connection to us {chan}")
# send/receive initial handshake response
try:
uid = await _do_handshake(self, chan)
except StopAsyncIteration:
log.warning(f"Channel {chan} failed to handshake")
return
# channel tracking
event = self._peer_connected.pop(uid, None)
if event:
# Instructing connection: this is likely a new channel to
# a recently spawned actor which we'd like to control via
# async-rpc calls.
log.debug(f"Waking channel waiters {event.statistics()}")
# Alert any task waiting on this connection to come up
event.set()
chans = self._peers[uid]
if chans:
log.warning(
f"already have channel(s) for {uid}:{chans}?"
)
log.trace(f"Registered {chan} for {uid}") # type: ignore
# append new channel
self._peers[uid].append(chan)
# Begin channel management - respond to remote requests and
# process received responses.
try:
await self._process_messages(chan)
finally:
# Drop ref to channel so it can be gc-ed and disconnected
log.debug(f"Releasing channel {chan} from {chan.uid}")
chans = self._peers.get(chan.uid)
chans.remove(chan)
if not chans:
log.debug(f"No more channels for {chan.uid}")
self._peers.pop(chan.uid, None)
log.debug(f"Peers is {self._peers}")
if not self._peers: # no more channels connected
self._no_more_peers.set()
log.debug(f"Signalling no more peer channels")
# # XXX: is this necessary (GC should do it?)
if chan.connected():
log.debug(f"Disconnecting channel {chan}")
try:
# send our msg loop terminate sentinel
await chan.send(None)
# await chan.aclose()
except trio.BrokenResourceError:
log.exception(
f"Channel for {chan.uid} was already zonked..")
async def _push_result(
self,
chan: Channel,
msg: Dict[str, Any],
) -> None:
"""Push an RPC result to the local consumer's queue.
"""
actorid = chan.uid
assert actorid, f"`actorid` can't be {actorid}"
cid = msg['cid']
send_chan = self._cids2qs[(actorid, cid)]
assert send_chan.cid == cid
if 'stop' in msg:
log.debug(f"{send_chan} was terminated at remote end")
return await send_chan.aclose()
try:
log.debug(f"Delivering {msg} from {actorid} to caller {cid}")
# maintain backpressure
await send_chan.send(msg)
except trio.BrokenResourceError:
# XXX: local consumer has closed their side
# so cancel the far end streaming task
log.warning(f"{send_chan} consumer is already closed")
def get_memchans(
self,
actorid: Tuple[str, str],
cid: str
) -> trio.abc.ReceiveChannel:
log.debug(f"Getting result queue for {actorid} cid {cid}")
try:
recv_chan = self._cids2qs[(actorid, cid)]
except KeyError:
send_chan, recv_chan = trio.open_memory_channel(1000)
send_chan.cid = cid
self._cids2qs[(actorid, cid)] = send_chan
return recv_chan
async def send_cmd(
self,
chan: Channel,
ns: str,
func: str,
kwargs: dict
) -> Tuple[str, trio.abc.ReceiveChannel]:
"""Send a ``'cmd'`` message to a remote actor and return a
caller id and a memory-channel receiver that can be used to wait for
responses delivered by the local message processing loop.
"""
cid = str(uuid.uuid1())
assert chan.uid
recv_chan = self.get_memchans(chan.uid, cid)
log.debug(f"Sending cmd to {chan.uid}: {ns}.{func}({kwargs})")
await chan.send({'cmd': (ns, func, kwargs, self.uid, cid)})
return cid, recv_chan
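# For reference, the dict-packet shapes multiplexed over a channel by
# ``send_cmd()`` and ``_invoke()`` above (a summary drawn from this
# file, not an exhaustive wire spec):
#
#   {'cmd': (ns, func, kwargs, caller_uid, cid)}             # rpc request
#   {'functype': 'function' | 'asyncfunction' | 'asyncgen', 'cid': cid}
#   {'yield': item, 'cid': cid}                              # stream item
#   {'return': value, 'cid': cid}                            # final result
#   {'stop': True, 'cid': cid}                               # stream end
#   {'error': {'tb_str': ..., 'type_str': ...}, 'cid': cid}  # remote error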
async def _process_messages(
self, chan: Channel,
treat_as_gen: bool = False,
shield: bool = False,
task_status=trio.TASK_STATUS_IGNORED,
) -> None:
"""Process messages for the channel async-RPC style.
Receive multiplexed RPC requests and deliver responses over ``chan``.
"""
# TODO: once https://github.com/python-trio/trio/issues/467 gets
# worked out we'll likely want to use that!
msg = None
log.debug(f"Entering msg loop for {chan} from {chan.uid}")
try:
# internal scope allows for keeping this message
# loop running despite the current task having been
# cancelled (eg. `open_portal()` may call this method from
# a locally spawned task)
with trio.CancelScope(shield=shield) as cs:
task_status.started(cs)
async for msg in chan:
if msg is None: # loop terminate sentinel
log.debug(
f"Cancelling all tasks for {chan} from {chan.uid}")
for (channel, cid) in self._rpc_tasks.copy():
if channel is chan:
await self.cancel_task(cid, Context(channel, cid))
log.debug(
f"Msg loop signalled to terminate for"
f" {chan} from {chan.uid}")
break
log.trace(f"Received msg {msg} from {chan.uid}") # type: ignore
if msg.get('cid'):
# deliver response to local caller/waiter
await self._push_result(chan, msg)
log.debug(
f"Waiting on next msg for {chan} from {chan.uid}")
continue
# process command request
try:
ns, funcname, kwargs, actorid, cid = msg['cmd']
except KeyError:
# This is the non-rpc error case, that is, an
# error **not** raised inside a call to ``_invoke()``
# (i.e. no cid was provided in the msg - see above).
# Push this error to all local channel consumers
# (normally portals) by marking the channel as errored
assert chan.uid
exc = unpack_error(msg, chan=chan)
chan._exc = exc
raise exc
log.debug(
f"Processing request from {actorid}\n"
f"{ns}.{funcname}({kwargs})")
if ns == 'self':
func = getattr(self, funcname)
else:
# complain to client about restricted modules
try:
func = self._get_rpc_func(ns, funcname)
except (ModuleNotExposed, AttributeError) as err:
err_msg = pack_error(err)
err_msg['cid'] = cid
await chan.send(err_msg)
continue
# spin up a task for the requested function
log.debug(f"Spawning task for {func}")
cs = await self._root_nursery.start(
_invoke, self, cid, chan, func, kwargs,
name=funcname
)
# never allow cancelling cancel requests (results in
# deadlock and other weird behaviour)
if func != self.cancel:
if isinstance(cs, Exception):
log.warning(f"Task for RPC func {func} failed with"
f" {cs}")
else:
# mark that we have ongoing rpc tasks
self._no_more_rpc_tasks.clear()
log.info(f"RPC func is {func}")
# store cancel scope such that the rpc task can be
# cancelled gracefully if requested
self._rpc_tasks[(chan, cid)] = (
cs, func, trio.Event())
log.debug(
f"Waiting on next msg for {chan} from {chan.uid}")
else:
# channel disconnect
log.debug(f"{chan} from {chan.uid} disconnected")
except trio.ClosedResourceError:
log.error(f"{chan} from {chan.uid} broke")
except Exception as err:
# ship any "internal" exception (i.e. one from internal machinery
# not from an rpc task) to parent
log.exception("Actor errored:")
if self._parent_chan:
await self._parent_chan.send(pack_error(err))
raise
# if this is the `MainProcess` we expect the error broadcasting
# above to trigger an error at consuming portal "checkpoints"
except trio.Cancelled:
# debugging only
log.debug("Msg loop was cancelled")
raise
finally:
log.debug(
f"Exiting msg loop for {chan} from {chan.uid} "
f"with last msg:\n{msg}")
def _fork_main(
self,
accept_addr: Tuple[str, int],
forkserver_info: Tuple[Any, Any, Any, Any, Any],
parent_addr: Tuple[str, int] = None
) -> None:
"""The routine called *after fork* which invokes a fresh ``trio.run``
"""
self._forkserver_info = forkserver_info
from ._trionics import ctx
if self.loglevel is not None:
log.info(
f"Setting loglevel for {self.uid} to {self.loglevel}")
get_console_log(self.loglevel)
log.info(
f"Started new {ctx.current_process()} for {self.uid}")
_state._current_actor = self
log.debug(f"parent_addr is {parent_addr}")
try:
trio.run(partial(
self._async_main, accept_addr, parent_addr=parent_addr))
except KeyboardInterrupt:
pass # handle it the same way trio does?
log.info(f"Actor {self.uid} terminated")
async def _async_main(
self,
accept_addr: Tuple[str, int],
arbiter_addr: Optional[Tuple[str, int]] = None,
parent_addr: Optional[Tuple[str, int]] = None,
task_status: trio._core._run._TaskStatus = trio.TASK_STATUS_IGNORED,
) -> None:
"""Start the channel server, maybe connect back to the parent, and
start the main task.
A "root-most" (or "top-level") nursery for this actor is opened here
and when cancelled effectively cancels the actor.
"""
arbiter_addr = arbiter_addr or self._arb_addr
registered_with_arbiter = False
try:
async with trio.open_nursery() as nursery:
self._root_nursery = nursery
# Start up the channel server
host, port = accept_addr
await nursery.start(partial(
self._serve_forever, accept_host=host, accept_port=port)
)
if parent_addr is not None:
try:
# Connect back to the parent actor and conduct initial
# handshake (From this point on if we error, ship the
# exception back to the parent actor)
chan = self._parent_chan = Channel(
destaddr=parent_addr,
)
await chan.connect()
# initial handshake, report who we are, who they are
await _do_handshake(self, chan)
except OSError: # failed to connect
log.warning(
f"Failed to connect to parent @ {parent_addr},"
" closing server")
await self.cancel()
self._parent_chan = None
# handle new connection back to parent
nursery.start_soon(
self._process_messages, self._parent_chan)
# load exposed/allowed RPC modules
# XXX: do this **after** establishing connection to parent
# so that import errors are properly propagated upwards
self.load_modules()
# register with the arbiter if we're told its addr
log.debug(f"Registering {self} for role `{self.name}`")
async with get_arbiter(*arbiter_addr) as arb_portal:
await arb_portal.run(
'self', 'register_actor',
uid=self.uid, sockaddr=self.accept_addr)
registered_with_arbiter = True
task_status.started()
log.debug("Waiting on root nursery to complete")
# blocks here as expected until the channel server is
# killed (i.e. this actor is cancelled or signalled by the parent)
except Exception as err:
if not registered_with_arbiter:
log.exception(
f"Actor errored and failed to register with arbiter "
f"@ {arbiter_addr}")
if self._parent_chan:
try:
# internal error so ship to parent without cid
await self._parent_chan.send(pack_error(err))
except trio.ClosedResourceError:
log.error(
f"Failed to ship error to parent "
f"{self._parent_chan.uid}, channel was closed")
log.exception("Actor errored:")
else:
# XXX wait, why?
# causes a hang if I always raise..
raise
finally:
if registered_with_arbiter:
await self._do_unreg(arbiter_addr)
# terminate actor once all its peers (actors that connected
# to it as clients) have disappeared
if not self._no_more_peers.is_set():
if any(
chan.connected() for chan in chain(*self._peers.values())
):
log.debug(
f"Waiting for remaining peers {self._peers} to clear")
await self._no_more_peers.wait()
log.debug(f"All peer channels are complete")
# tear down channel server no matter what since we errored
# or completed
self.cancel_server()
async def _serve_forever(
self,
*,
# (host, port) to bind for channel server
accept_host: Tuple[str, int] = None,
accept_port: int = 0,
task_status: trio._core._run._TaskStatus = trio.TASK_STATUS_IGNORED,
) -> None:
"""Start the channel server, begin listening for new connections.
This will cause an actor to continue living (blocking) until
``cancel_server()`` is called.
"""
async with trio.open_nursery() as nursery:
self._server_nursery = nursery
# TODO: might want to consider having a separate nursery
# for the stream handler such that the server can be cancelled
# whilst leaving existing channels up
listeners = await nursery.start(
partial(
trio.serve_tcp,
self._stream_handler,
# new connections will stay alive even if this server
# is cancelled
handler_nursery=self._root_nursery,
port=accept_port, host=accept_host,
)
)
log.debug(
f"Started tcp server(s) on {[l.socket for l in listeners]}")
self._listeners.extend(listeners)
task_status.started()
async def _do_unreg(self, arbiter_addr: Optional[Tuple[str, int]]) -> None:
# UNregister actor from the arbiter
try:
if arbiter_addr is not None:
async with get_arbiter(*arbiter_addr) as arb_portal:
await arb_portal.run(
'self', 'unregister_actor', uid=self.uid)
except OSError:
log.warning(f"Unable to unregister {self.name} from arbiter")
async def cancel(self) -> None:
"""Cancel this actor.
The sequence in order is:
- cancelling all rpc tasks
- cancelling the channel server
- cancel the "root" nursery
"""
# cancel all ongoing rpc tasks
await self.cancel_rpc_tasks()
self.cancel_server()
self._root_nursery.cancel_scope.cancel()
async def cancel_task(self, cid, ctx):
"""Cancel a local task.
Note this method will be treated as a streaming function
by remote actor-callers due to the declaration of ``ctx``
in the signature (for now).
"""
# right now this is only implicitly called by
# streaming IPC but it should be called
# to cancel any remotely spawned task
chan = ctx.chan
try:
# this ctx based lookup ensures the requested task to
# be cancelled was indeed spawned by a request from this channel
scope, func, is_complete = self._rpc_tasks[(ctx.chan, cid)]
except KeyError:
log.warning(f"{cid} has already completed/terminated?")
return
log.debug(
f"Cancelling task:\ncid: {cid}\nfunc: {func}\n"
f"peer: {chan.uid}\n")
# don't allow cancelling this function mid-execution
# (is this necessary?)
if func is self.cancel_task:
return
scope.cancel()
# wait for _invoke to mark the task complete
await is_complete.wait()
log.debug(
f"Successfully cancelled task:\ncid: {cid}\nfunc: {func}\n"
f"peer: {chan.uid}\n")
async def cancel_rpc_tasks(self) -> None:
"""Cancel all existing RPC responder tasks using the cancel scope
registered for each.
"""
tasks = self._rpc_tasks
log.info(f"Cancelling all {len(tasks)} rpc tasks:\n{tasks} ")
for (chan, cid) in tasks.copy():
# TODO: this should really be done in a nursery batch
await self.cancel_task(cid, Context(chan, cid))
# if tasks:
log.info(
f"Waiting for remaining rpc tasks to complete {tasks}")
await self._no_more_rpc_tasks.wait()
def cancel_server(self) -> None:
"""Cancel the internal channel server nursery thereby
preventing any new inbound connections from being established.
"""
log.debug("Shutting down channel server")
self._server_nursery.cancel_scope.cancel()
@property
def accept_addr(self) -> Optional[Tuple[str, int]]:
"""Primary address to which the channel server is bound.
"""
try:
return self._listeners[0].socket.getsockname()
except OSError:
return None
def get_parent(self) -> Portal:
"""Return a portal to our parent actor."""
assert self._parent_chan, "No parent channel for this actor?"
return Portal(self._parent_chan)
def get_chans(self, uid: Tuple[str, str]) -> List[Channel]:
"""Return all channels to the actor with provided uid."""
return self._peers[uid]
class Arbiter(Actor):
"""A special actor who knows all the other actors and always has
access to a top level nursery.
The arbiter is by default the first actor spawned on each host
and is responsible for keeping track of all other actors for
coordination purposes. If a new main process is launched and an
arbiter is already running that arbiter will be used.
"""
is_arbiter = True
def __init__(self, *args, **kwargs):
self._registry = defaultdict(list)
self._waiters = {}
super().__init__(*args, **kwargs)
def find_actor(self, name: str) -> Optional[Tuple[str, int]]:
for uid, sockaddr in self._registry.items():
if name in uid:
return sockaddr
return None
async def wait_for_actor(
self, name: str
) -> List[Tuple[str, int]]:
"""Wait for a particular actor to register.
This is a blocking call if no actor by the provided name is currently
registered.
"""
sockaddrs = []
for (aname, _), sockaddr in self._registry.items():
if name == aname:
sockaddrs.append(sockaddr)
if not sockaddrs:
waiter = trio.Event()
self._waiters.setdefault(name, []).append(waiter)
await waiter.wait()
for uid in self._waiters[name]:
sockaddrs.append(self._registry[uid])
return sockaddrs
def register_actor(
self, uid: Tuple[str, str], sockaddr: Tuple[str, int]
) -> None:
name, uuid = uid
self._registry[uid] = sockaddr
# pop and signal all waiter events
events = self._waiters.pop(name, ())
self._waiters.setdefault(name, []).append(uid)
for event in events:
if isinstance(event, trio.Event):
event.set()
def unregister_actor(self, uid: Tuple[str, str]) -> None:
self._registry.pop(uid, None)
async def _start_actor(
actor: Actor,
main: typing.Callable[..., typing.Awaitable],
host: str,
port: int,
arbiter_addr: Tuple[str, int],
nursery: trio._core._run.Nursery = None
):
"""Spawn a local actor by starting a task to execute its main async
function.
Blocks if no nursery is provided, in which case it is expected the nursery
provider is responsible for waiting on the task to complete.
"""
# assign process-local actor
_state._current_actor = actor
# start local channel-server and fake the portal API
# NOTE: this won't block since we provide the nursery
log.info(f"Starting local {actor} @ {host}:{port}")
async with trio.open_nursery() as nursery:
await nursery.start(
partial(
actor._async_main,
accept_addr=(host, port),
parent_addr=None,
arbiter_addr=arbiter_addr,
)
)
result = await main()
# XXX: the actor is cancelled when this context is complete
# given that there are no more active peer channels connected
actor.cancel_server()
# unset module state
_state._current_actor = None
log.info("Completed async main")
return result
@asynccontextmanager
async def get_arbiter(
host: str, port: int
) -> typing.AsyncGenerator[Union[Portal, LocalPortal], None]:
"""Return a portal instance connected to a local or remote
arbiter.
"""
actor = current_actor()
if not actor:
raise RuntimeError("No actor instance has been defined yet?")
if actor.is_arbiter:
# we're already the arbiter
# (likely a re-entrant call from the arbiter actor)
yield LocalPortal(actor)
else:
async with _connect_chan(host, port) as chan:
async with open_portal(chan) as arb_portal:
yield arb_portal
@asynccontextmanager
async def find_actor(
name: str, arbiter_sockaddr: Tuple[str, int] = None
) -> typing.AsyncGenerator[Optional[Portal], None]:
"""Ask the arbiter to find actor(s) by name.
Returns a connected portal to the last registered matching actor
known to the arbiter.
"""
actor = current_actor()
async with get_arbiter(*arbiter_sockaddr or actor._arb_addr) as arb_portal:
sockaddr = await arb_portal.run('self', 'find_actor', name=name)
# TODO: return portals to all available actors - for now just
# the last one that registered
if name == 'arbiter' and actor.is_arbiter:
raise RuntimeError("The current actor is the arbiter")
elif sockaddr:
async with _connect_chan(*sockaddr) as chan:
async with open_portal(chan) as portal:
yield portal
else:
yield None
@asynccontextmanager
async def wait_for_actor(
name: str,
arbiter_sockaddr: Tuple[str, int] = None
) -> typing.AsyncGenerator[Portal, None]:
"""Wait on an actor to register with the arbiter.
A portal to the first registered actor is returned.
"""
actor = current_actor()
async with get_arbiter(*arbiter_sockaddr or actor._arb_addr) as arb_portal:
sockaddrs = await arb_portal.run('self', 'wait_for_actor', name=name)
sockaddr = sockaddrs[-1]
async with _connect_chan(*sockaddr) as chan:
async with open_portal(chan) as portal:
yield portal
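# Hedged usage sketch for the discovery helpers above (the actor name
# 'logger' and module path 'mypkg.logging' are hypothetical):
#
#   async with wait_for_actor('logger') as portal:
#       await portal.run('mypkg.logging', 'log_event', msg='hello')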

tractor/_child.py 100644
@@ -0,0 +1,62 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
This is the "bootloader" for actors started using the native trio backend.
"""
import sys
import trio
import argparse
from ast import literal_eval
from ._runtime import Actor
from ._entry import _trio_main
def parse_uid(arg):
name, uuid = literal_eval(arg) # ensure 2 elements
return str(name), str(uuid) # ensures str encoding
def parse_ipaddr(arg):
host, port = literal_eval(arg)
return (str(host), int(port))
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--uid", type=parse_uid)
parser.add_argument("--loglevel", type=str)
parser.add_argument("--parent_addr", type=parse_ipaddr)
parser.add_argument("--asyncio", action='store_true')
args = parser.parse_args()
subactor = Actor(
args.uid[0],
uid=args.uid[1],
loglevel=args.loglevel,
spawn_method="trio"
)
_trio_main(
subactor,
parent_addr=args.parent_addr,
infect_asyncio=args.asyncio,
)
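# Rough shape of the command line a parent uses to exec this bootloader
# (a sketch only; the real argv is assembled by the spawn machinery,
# which is not shown in this diff, and the uid/addr values here are
# hypothetical):
#
#   python -m tractor._child \
#       --uid "('worker', 'b3f1')" \
#       --parent_addr "('127.0.0.1', 61234)" \
#       --loglevel info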

@@ -0,0 +1,74 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Actor cluster helpers.
'''
from __future__ import annotations
from contextlib import asynccontextmanager as acm
from multiprocessing import cpu_count
from typing import AsyncGenerator, Optional
import trio
import tractor
@acm
async def open_actor_cluster(
modules: list[str],
count: int = cpu_count(),
names: list[str] | None = None,
hard_kill: bool = False,
# passed through verbatim to ``open_root_actor()``
**runtime_kwargs,
) -> AsyncGenerator[
dict[str, tractor.Portal],
None,
]:
portals: dict[str, tractor.Portal] = {}
if not names:
names = [f'worker_{i}' for i in range(count)]
if len(names) != count:
raise ValueError(
f'Number of names is {len(names)} but count is {count}')
async with tractor.open_nursery(
**runtime_kwargs,
) as an:
async with trio.open_nursery() as n:
uid = tractor.current_actor().uid
async def _start(name: str) -> None:
name = f'{uid[0]}.{name}'
portals[name] = await an.start_actor(
enable_modules=modules,
name=name,
)
for name in names:
n.start_soon(_start, name)
assert len(portals) == count
yield portals
await an.cancel(hard_kill=hard_kill)
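# Hedged usage sketch ('mypkg.workers' and its 'crunch' rpc func are
# hypothetical; the str-based ``portal.run(ns, func)`` call style shown
# elsewhere in this diff is assumed):
#
#   async def main():
#       async with open_actor_cluster(
#           modules=['mypkg.workers'],
#           count=4,
#       ) as portals:
#           for name, portal in portals.items():
#               await portal.run('mypkg.workers', 'crunch')
#
#   trio.run(main)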

tractor/_debug.py 100644
@@ -0,0 +1,922 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Multi-core debugging for da peeps!
"""
from __future__ import annotations
import bdb
import os
import sys
import signal
from functools import (
partial,
cached_property,
)
from contextlib import asynccontextmanager as acm
from typing import (
Any,
Optional,
Callable,
AsyncIterator,
AsyncGenerator,
)
from types import FrameType
import pdbp
import tractor
import trio
from trio_typing import TaskStatus
from .log import get_logger
from ._discovery import get_root
from ._state import (
is_root_process,
debug_mode,
)
from ._exceptions import (
is_multi_cancelled,
ContextCancelled,
)
from ._ipc import Channel
log = get_logger(__name__)
__all__ = ['breakpoint', 'post_mortem']
class Lock:
'''
Actor global debug lock state.
Mostly to avoid a lot of ``global`` declarations for now XD.
'''
repl: MultiActorPdb | None = None
# placeholder for function to set a ``trio.Event`` on debugger exit
# pdb_release_hook: Optional[Callable] = None
_trio_handler: Callable[
[int, Optional[FrameType]], Any
] | int | None = None
# actor-wide variable pointing to current task name using debugger
local_task_in_debug: str | None = None
# NOTE: set by the current task waiting on the root tty lock from
# the CALLER side of the `lock_tty_for_child()` context entry-call
# and must be cancelled if this actor is cancelled via IPC
# request-message otherwise deadlocks with the parent actor may
# ensue
_debugger_request_cs: Optional[trio.CancelScope] = None
# NOTE: set only in the root actor for the **local** root spawned task
# which has acquired the lock (i.e. this is on the callee side of
# the `lock_tty_for_child()` context entry).
_root_local_task_cs_in_debug: Optional[trio.CancelScope] = None
# actor tree-wide actor uid that supposedly has the tty lock
global_actor_in_debug: Optional[tuple[str, str]] = None
local_pdb_complete: Optional[trio.Event] = None
no_remote_has_tty: Optional[trio.Event] = None
# lock in root actor preventing multi-access to local tty
_debug_lock: trio.StrictFIFOLock = trio.StrictFIFOLock()
_orig_sigint_handler: Optional[Callable] = None
_blocked: set[tuple[str, str]] = set()
@classmethod
def shield_sigint(cls):
cls._orig_sigint_handler = signal.signal(
signal.SIGINT,
shield_sigint_handler,
)
@classmethod
def unshield_sigint(cls):
# always restore ``trio``'s sigint handler. see notes below in
# the pdb factory about the nightmare that is that code swapping
# out the handler when the repl activates...
signal.signal(signal.SIGINT, cls._trio_handler)
cls._orig_sigint_handler = None
@classmethod
def release(cls):
try:
cls._debug_lock.release()
except RuntimeError:
# uhhh makes no sense but been seeing the non-owner
# release error even though this is definitely the task
# that locked?
owner = cls._debug_lock.statistics().owner
if owner:
raise
# actor-local state, irrelevant for non-root.
cls.global_actor_in_debug = None
cls.local_task_in_debug = None
try:
# sometimes the ``trio`` runtime might already be terminated in
# which case this call will raise.
if cls.local_pdb_complete is not None:
cls.local_pdb_complete.set()
finally:
# restore original sigint handler
cls.unshield_sigint()
cls.repl = None
class TractorConfig(pdbp.DefaultConfig):
'''
Custom ``pdbp`` goodness :surfer:
'''
use_pygments: bool = True
sticky_by_default: bool = False
enable_hidden_frames: bool = False
# much thanks @mdmintz for the hot tip!
# fixes line spacing issue when resizing terminal B)
truncate_long_lines: bool = False
class MultiActorPdb(pdbp.Pdb):
'''
Add teardown hooks to the regular ``pdbp.Pdb``.
'''
# override the pdbp config with our coolio one
DefaultConfig = TractorConfig
# def preloop(self):
# print('IN PRELOOP')
# super().preloop()
# TODO: figure out how to disallow recursive .set_trace() entry
# since that'll cause deadlock for us.
def set_continue(self):
try:
super().set_continue()
finally:
Lock.release()
def set_quit(self):
try:
super().set_quit()
finally:
Lock.release()
# XXX NOTE: we only override this because apparently the stdlib pdb
# bois likes to touch the SIGINT handler as much as i like to touch
# my d$%&.
def _cmdloop(self):
self.cmdloop()
@cached_property
def shname(self) -> str | None:
'''
Attempt to return the login shell name with a special check for
the infamous `xonsh` since it seems to have some issues much
different from std shells when it comes to flushing the prompt?
'''
# SUPER HACKY and only really works if `xonsh` is not used
# before spawning further sub-shells..
shpath = os.getenv('SHELL', None)
if shpath:
if (
os.getenv('XONSH_LOGIN', default=False)
or 'xonsh' in shpath
):
return 'xonsh'
return os.path.basename(shpath)
return None
@acm
async def _acquire_debug_lock_from_root_task(
uid: tuple[str, str]
) -> AsyncIterator[trio.StrictFIFOLock]:
'''
Acquire a root-actor local FIFO lock which tracks mutex access of
the process tree's global debugger breakpoint.
This lock avoids tty clobbering (by preventing multiple processes
reading from stdstreams) and ensures multi-actor, sequential access
to the ``pdb`` repl.
'''
task_name = trio.lowlevel.current_task().name
log.runtime(
f"Attempting to acquire TTY lock, remote task: {task_name}:{uid}"
)
we_acquired = False
try:
log.runtime(
f"entering lock checkpoint, remote task: {task_name}:{uid}"
)
we_acquired = True
# NOTE: if the surrounding cancel scope from the
# `lock_tty_for_child()` caller is cancelled, this line should
# unblock and NOT leave us in some kind of
# a "child-locked-TTY-but-child-is-uncontactable-over-IPC"
# condition.
await Lock._debug_lock.acquire()
if Lock.no_remote_has_tty is None:
# mark the tty lock as being in use so that the runtime
# can try to avoid clobbering any connection from a child
# that's currently relying on it.
Lock.no_remote_has_tty = trio.Event()
Lock.global_actor_in_debug = uid
log.runtime(f"TTY lock acquired, remote task: {task_name}:{uid}")
# NOTE: critical section: this yield is unshielded!
# IF we received a cancel during the shielded lock entry of some
# next-in-queue requesting task, then the resumption here will
# result in that ``trio.Cancelled`` being raised to our caller
# (likely from ``lock_tty_for_child()`` below)! In
# this case the ``finally:`` below should trigger and the
# surrounding caller side context should cancel normally
# relaying back to the caller.
yield Lock._debug_lock
finally:
if (
we_acquired
and Lock._debug_lock.locked()
):
Lock._debug_lock.release()
# IFF there are no more requesting tasks queued up, fire the
# "tty-unlocked" event thereby alerting any monitors of the lock that
# we are now back in the "tty unlocked" state. This is basically
# an edge-triggered signal around an empty queue of sub-actor
# tasks that may have tried to acquire the lock.
stats = Lock._debug_lock.statistics()
if (
not stats.owner
):
log.runtime(f"No more tasks waiting on tty lock! says {uid}")
if Lock.no_remote_has_tty is not None:
Lock.no_remote_has_tty.set()
Lock.no_remote_has_tty = None
Lock.global_actor_in_debug = None
log.runtime(
f"TTY lock released, remote task: {task_name}:{uid}"
)
@tractor.context
async def lock_tty_for_child(
ctx: tractor.Context,
subactor_uid: tuple[str, str]
) -> str:
'''
Lock the TTY in the root process of an actor tree in a new
inter-actor-context-task such that the ``pdbp`` debugger console
can be mutex-allocated to the calling sub-actor for REPL control
without interference by other processes / threads.
NOTE: this task must be invoked in the root process of the actor
tree. It is meant to be invoked as an rpc-task and should be
highly reliable at releasing the mutex completely!
'''
task_name = trio.lowlevel.current_task().name
if tuple(subactor_uid) in Lock._blocked:
log.warning(
f'Actor {subactor_uid} is blocked from acquiring debug lock\n'
f"remote task: {task_name}:{subactor_uid}"
)
ctx._enter_debugger_on_cancel = False
await ctx.cancel(f'Debug lock blocked for {subactor_uid}')
return 'pdb_lock_blocked'
# TODO: when we get to true remote debugging
# this will deliver stdin data?
log.debug(
"Attempting to acquire TTY lock\n"
f"remote task: {task_name}:{subactor_uid}"
)
log.debug(f"Actor {subactor_uid} is WAITING on stdin hijack lock")
Lock.shield_sigint()
try:
with (
trio.CancelScope(shield=True) as debug_lock_cs,
):
Lock._root_local_task_cs_in_debug = debug_lock_cs
async with _acquire_debug_lock_from_root_task(subactor_uid):
# indicate to child that we've locked stdio
await ctx.started('Locked')
log.debug(
f"Actor {subactor_uid} acquired stdin hijack lock"
)
# wait for unlock pdb by child
async with ctx.open_stream() as stream:
assert await stream.receive() == 'pdb_unlock'
return "pdb_unlock_complete"
finally:
Lock._root_local_task_cs_in_debug = None
Lock.unshield_sigint()
async def wait_for_parent_stdin_hijack(
actor_uid: tuple[str, str],
task_status: TaskStatus[trio.CancelScope] = trio.TASK_STATUS_IGNORED
):
'''
Connect to the root actor via a ``Context`` and invoke a task which
locks a root-local TTY lock: ``lock_tty_for_child()``; this func
should be called in a new task from a child actor **and never the
root**.
This function is used by any sub-actor to acquire mutex access to
the ``pdb`` REPL and thus the root's TTY for interactive debugging
(see below inside ``_breakpoint()``). It can be used to ensure that
an intermediate nursery-owning actor does not clobber its children
if they are in debug (see below inside
``maybe_wait_for_debugger()``).
'''
with trio.CancelScope(shield=True) as cs:
Lock._debugger_request_cs = cs
try:
async with get_root() as portal:
# this syncs to child's ``Context.started()`` call.
async with portal.open_context(
tractor._debug.lock_tty_for_child,
subactor_uid=actor_uid,
) as (ctx, val):
log.debug('locked context')
assert val == 'Locked'
async with ctx.open_stream() as stream:
# unblock local caller
try:
assert Lock.local_pdb_complete
task_status.started(cs)
await Lock.local_pdb_complete.wait()
finally:
# TODO: shielding currently can cause hangs...
# with trio.CancelScope(shield=True):
await stream.send('pdb_unlock')
# sync with callee termination
assert await ctx.result() == "pdb_unlock_complete"
log.debug('exiting child side locking task context')
except ContextCancelled:
log.warning('Root actor cancelled debug lock')
raise
finally:
Lock.local_task_in_debug = None
log.debug('Exiting debugger from child')
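# Summary of the child<->root tty-locking protocol implemented by the
# two tasks above:
#
#   1. child opens a ``Context`` to the root's ``lock_tty_for_child()``
#   2. root acquires the FIFO debug lock and replies ``ctx.started('Locked')``
#   3. child runs its REPL; on exit it streams 'pdb_unlock' to the root
#   4. root releases the lock and returns 'pdb_unlock_complete'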
def mk_mpdb() -> tuple[MultiActorPdb, Callable]:
pdb = MultiActorPdb()
# signal.signal = pdbp.hideframe(signal.signal)
Lock.shield_sigint()
# XXX: These are the important flags mentioned in
# https://github.com/python-trio/trio/issues/1155
# which resolve the traceback spews to console.
pdb.allow_kbdint = True
pdb.nosigint = True
return pdb, Lock.unshield_sigint
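# Contract sketch for the factory above (hedged; in practice the
# release hooks on ``MultiActorPdb`` also drive the sigint restore):
#
#   pdb, undo_sigint = mk_mpdb()
#   try:
#       pdb.set_trace(frame=sys._getframe().f_back)
#   finally:
#       undo_sigint()  # put back ``trio``'s SIGINT handler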
async def _breakpoint(
debug_func,
# TODO:
# shield: bool = False
) -> None:
'''
Breakpoint entry for engaging debugger instance sync-interaction,
from async code, executing in actor runtime (task).
'''
__tracebackhide__ = True
actor = tractor.current_actor()
pdb, undo_sigint = mk_mpdb()
task_name = trio.lowlevel.current_task().name
# TODO: is it possible to debug a trio.Cancelled except block?
# right now it seems like we can kinda do with by shielding
# around ``tractor.breakpoint()`` but not if we move the shielded
# scope here???
# with trio.CancelScope(shield=shield):
# await trio.lowlevel.checkpoint()
if (
not Lock.local_pdb_complete
or Lock.local_pdb_complete.is_set()
):
Lock.local_pdb_complete = trio.Event()
# TODO: need a more robust check for the "root" actor
if (
not is_root_process()
and actor._parent_chan # a connected child
):
if Lock.local_task_in_debug:
# Recurrence entry case: this task already has the lock and
# is likely recurrently entering a breakpoint
if Lock.local_task_in_debug == task_name:
# noop on recurrent entry case but we want to trigger
# a checkpoint to allow other actors to error-propagate and
# potentially avoid infinite re-entries in some subactor.
await trio.lowlevel.checkpoint()
return
# if **this** actor is already in debug mode block here
# waiting for the control to be released - this allows
# support for recursive entries to `tractor.breakpoint()`
log.warning(f"{actor.uid} already has a debug lock, waiting...")
await Lock.local_pdb_complete.wait()
await trio.sleep(0.1)
# mark local actor as "in debug mode" to avoid recurrent
# entries/requests to the root process
Lock.local_task_in_debug = task_name
# this **must** be awaited by the caller and is done using the
# root nursery so that the debugger can continue to run without
# being restricted by the scope of a new task nursery.
# TODO: if we want to debug a trio.Cancelled triggered exception
# we have to figure out how to avoid having the service nursery
# cancel on this task start? I *think* this works below:
# ```python
# actor._service_n.cancel_scope.shield = shield
# ```
# but not entirely sure if that's a sane way to implement it?
try:
with trio.CancelScope(shield=True):
await actor._service_n.start(
wait_for_parent_stdin_hijack,
actor.uid,
)
Lock.repl = pdb
except RuntimeError:
Lock.release()
if actor._cancel_called:
# service nursery won't be usable and we
# don't want to lock up the root either way since
# we're in (the midst of) cancellation.
return
raise
elif is_root_process():
# we also wait in the root-parent for any child that
# may have the tty locked prior
# TODO: wait, what about multiple root tasks acquiring it though?
if Lock.global_actor_in_debug == actor.uid:
# re-entrant root process already has it: noop.
return
# XXX: since we need to enter pdb synchronously below,
# we have to release the lock manually from pdb completion
# callbacks. Can't think of a nicer way than this atm.
if Lock._debug_lock.locked():
log.warning(
'Root actor attempting to shield-acquire active tty lock'
f' owned by {Lock.global_actor_in_debug}')
# must shield here to avoid hitting a ``Cancelled`` and
# a child getting stuck bc we clobbered the tty
with trio.CancelScope(shield=True):
await Lock._debug_lock.acquire()
else:
# may be cancelled
await Lock._debug_lock.acquire()
Lock.global_actor_in_debug = actor.uid
Lock.local_task_in_debug = task_name
Lock.repl = pdb
try:
# block here once (at the appropriate frame *up*) where
# ``breakpoint()`` was awaited and begin handling stdio.
log.debug("Entering the synchronous world of pdb")
debug_func(actor, pdb)
except bdb.BdbQuit:
Lock.release()
raise
# XXX: apparently we can't do this without showing this frame
# in the backtrace on first entry to the REPL? Seems like an odd
# behaviour that should have been fixed by now. This is also why
# we scrapped all the @cm approaches that were tried previously.
# finally:
# __tracebackhide__ = True
# # frame = sys._getframe()
# # last_f = frame.f_back
# # last_f.f_globals['__tracebackhide__'] = True
# # signal.signal = pdbp.hideframe(signal.signal)
def shield_sigint_handler(
signum: int,
frame: 'frame', # type: ignore # noqa
# pdb_obj: Optional[MultiActorPdb] = None,
*args,
) -> None:
'''
Specialized, debugger-aware SIGINT handler.
In children we always ignore to avoid deadlocks since cancellation
should always be managed by the parent supervising actor. The root
is always cancelled on ctrl-c.
'''
__tracebackhide__ = True
uid_in_debug = Lock.global_actor_in_debug
actor = tractor.current_actor()
# print(f'{actor.uid} in HANDLER with ')
def do_cancel():
# If we haven't tried to cancel the runtime then do that instead
# of raising a KBI (which may non-gracefully destroy
# a ``trio.run()``).
if not actor._cancel_called:
actor.cancel_soon()
# If the runtime is already cancelled it likely means the user
# hit ctrl-c again because teardown didn't fully take place, in
# which case we do the "hard" raising of a local KBI.
else:
raise KeyboardInterrupt
any_connected = False
if uid_in_debug is not None:
# try to see if the supposed (sub)actor in debug still
# has an active connection to *this* actor, and if not
# it's likely they aren't using the TTY lock / debugger
# and we should propagate SIGINT normally.
chans = actor._peers.get(tuple(uid_in_debug))
if chans:
any_connected = any(chan.connected() for chan in chans)
if not any_connected:
log.warning(
'A global actor reported to be in debug '
'but no connection exists for this child:\n'
f'{uid_in_debug}\n'
'Allowing SIGINT propagation..'
)
return do_cancel()
# only set in the actor actually running the REPL
pdb_obj = Lock.repl
# root actor branch that reports whether or not a child
# has locked debugger.
if (
is_root_process()
and uid_in_debug is not None
# XXX: only if there is an existing connection to the
# (sub-)actor in debug do we ignore SIGINT in this
# parent! Otherwise we may hang waiting for an actor
# which has already terminated to unlock.
and any_connected
):
# we are root and some actor is in debug mode
# if uid_in_debug is not None:
if pdb_obj:
name = uid_in_debug[0]
if name != 'root':
log.pdb(
f"Ignoring SIGINT, child in debug mode: `{uid_in_debug}`"
)
else:
log.pdb(
"Ignoring SIGINT while in debug mode"
)
elif (
is_root_process()
):
if pdb_obj:
log.pdb(
"Ignoring SIGINT since debug mode is enabled"
)
if (
Lock._root_local_task_cs_in_debug
and not Lock._root_local_task_cs_in_debug.cancel_called
):
Lock._root_local_task_cs_in_debug.cancel()
# revert back to ``trio`` handler asap!
Lock.unshield_sigint()
# child actor that has locked the debugger
elif not is_root_process():
chan: Channel = actor._parent_chan
if not chan or not chan.connected():
log.warning(
'A global actor reported to be in debug '
'but no connection exists for its parent:\n'
f'{uid_in_debug}\n'
'Allowing SIGINT propagation..'
)
return do_cancel()
task = Lock.local_task_in_debug
if (
task
and pdb_obj
):
log.pdb(
f"Ignoring SIGINT while task in debug mode: `{task}`"
)
# TODO: how to handle the case of an intermediary-child actor
# that **is not** marked in debug mode? See outstanding issue:
# https://github.com/goodboy/tractor/issues/320
# elif debug_mode():
else: # XXX: shouldn't ever get here?
print("WTFWTFWTF")
raise KeyboardInterrupt
# NOTE: currently (at least on ``fancycompleter`` 0.9.2)
# it looks to be that the last command that was run (eg. ll)
# will be repeated by default.
# maybe redraw/print last REPL output to console since
# we want to alert the user that more input is expected since
# nothing has been done due to ignoring sigint.
if (
pdb_obj # only when this actor has a REPL engaged
):
# XXX: yah, mega hack, but how else do we catch this madness XD
if pdb_obj.shname == 'xonsh':
pdb_obj.stdout.write(pdb_obj.prompt)
pdb_obj.stdout.flush()
# TODO: make this work like sticky mode where if there is output
# detected as written to the tty we redraw this part underneath
# and erase the past draw of this same bit above?
# pdb_obj.sticky = True
# pdb_obj._print_if_sticky()
# also see these links for an approach from ``ptk``:
# https://github.com/goodboy/tractor/issues/130#issuecomment-663752040
# https://github.com/prompt-toolkit/python-prompt-toolkit/blob/c2c6af8a0308f9e5d7c0e28cb8a02963fe0ce07a/prompt_toolkit/patch_stdout.py
# XXX LEGACY: lol, see ``pdbpp`` issue:
# https://github.com/pdbpp/pdbpp/issues/496
def _set_trace(
actor: tractor.Actor | None = None,
pdb: MultiActorPdb | None = None,
):
__tracebackhide__ = True
actor = actor or tractor.current_actor()
# start 2 levels up in user code
frame: Optional[FrameType] = sys._getframe()
if frame:
frame = frame.f_back # type: ignore
if (
frame
and pdb
and actor is not None
):
log.pdb(f"\nAttaching pdb to actor: {actor.uid}\n")
# no f!#$&* idea, but when we're in async land
# we need 2x frames up?
frame = frame.f_back
else:
pdb, undo_sigint = mk_mpdb()
# we entered the global ``breakpoint()`` built-in from sync
# code?
Lock.local_task_in_debug = 'sync'
pdb.set_trace(frame=frame)
breakpoint = partial(
_breakpoint,
_set_trace,
)
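# Hedged usage sketch: with a tree started in debug mode (e.g.
# ``tractor.open_nursery(debug_mode=True)`` per the project readme),
# any actor task can then pause the whole tree's tty:
#
#   async def my_task():
#       ...
#       await tractor.breakpoint()  # acquires the tree-wide tty lock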
def _post_mortem(
actor: tractor.Actor,
pdb: MultiActorPdb,
) -> None:
'''
Enter the ``pdbp`` post mortem entrypoint using our custom
debugger instance.
'''
log.pdb(f"\nAttaching to pdb in crashed actor: {actor.uid}\n")
# TODO: you need ``pdbpp`` master (at least this commit
# https://github.com/pdbpp/pdbpp/commit/b757794857f98d53e3ebbe70879663d7d843a6c2)
# to fix this and avoid the hang it causes. See issue:
# https://github.com/pdbpp/pdbpp/issues/480
# TODO: help with a 3.10+ major release if/when it arrives.
pdbp.xpm(Pdb=lambda: pdb)
post_mortem = partial(
_breakpoint,
_post_mortem,
)
async def _maybe_enter_pm(err):
if (
debug_mode()
# NOTE: don't enter debug mode recursively after quitting pdb
# Iow, don't re-enter the repl if the `quit` command was issued
# by the user.
and not isinstance(err, bdb.BdbQuit)
# XXX: if the error is the likely result of runtime-wide
# cancellation, we don't want to enter the debugger since
# there's races between when the parent actor has killed all
# comms and when the child tries to contact said parent to
# acquire the tty lock.
# Really we just want to mostly avoid catching KBIs here so there
# might be a simpler check we can do?
and not is_multi_cancelled(err)
):
log.debug("Actor crashed, entering debug mode")
try:
await post_mortem()
finally:
Lock.release()
return True
else:
return False
@acm
async def acquire_debug_lock(
subactor_uid: tuple[str, str],
) -> AsyncGenerator[None, tuple]:
'''
Grab root's debug lock on entry, release on exit.
This helper is for actors who don't actually need
to acquire the debugger but want to wait until the
lock is free in the process-tree root.
'''
if not debug_mode():
yield None
return
async with trio.open_nursery() as n:
cs = await n.start(
wait_for_parent_stdin_hijack,
subactor_uid,
)
yield None
cs.cancel()
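# e.g. a supervising actor can gate child teardown on lock release
# (hedged sketch; ``child_uid`` stands in for whichever uid is about to
# be cancelled):
#
#   async with acquire_debug_lock(child_uid):
#       ...  # safe to hard-cancel without clobbering an active repl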
async def maybe_wait_for_debugger(
poll_steps: int = 2,
poll_delay: float = 0.1,
child_in_debug: bool = False,
) -> None:
if (
not debug_mode()
and not child_in_debug
):
return
if (
is_root_process()
):
# If we error in the root but the debugger is
# engaged we don't want to prematurely kill (and
# thus clobber access to) the local tty since it
# will make the pdb repl unusable.
# Instead try to wait for pdb to be released before
# tearing down.
sub_in_debug = None
for _ in range(poll_steps):
if Lock.global_actor_in_debug:
sub_in_debug = tuple(Lock.global_actor_in_debug)
log.debug('Root polling for debug')
with trio.CancelScope(shield=True):
await trio.sleep(poll_delay)
# TODO: could this make things more deterministic? wait
# to see if a sub-actor task will be scheduled and grab
# the tty lock on the next tick?
# XXX: doesn't seem to work
# await trio.testing.wait_all_tasks_blocked(cushion=0)
debug_complete = Lock.no_remote_has_tty
if (
(debug_complete and
not debug_complete.is_set())
):
log.debug(
'Root has errored but pdb is in use by '
f'child {sub_in_debug}\n'
'Waiting on tty lock to release..')
await debug_complete.wait()
await trio.sleep(poll_delay)
continue
else:
log.debug(
'Root acquired TTY LOCK'
)

@@ -0,0 +1,157 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Actor discovery API.
"""
from typing import (
Optional,
Union,
AsyncGenerator,
)
from contextlib import asynccontextmanager as acm
from ._ipc import _connect_chan, Channel
from ._portal import (
Portal,
open_portal,
LocalPortal,
)
from ._state import current_actor, _runtime_vars
@acm
async def get_arbiter(
host: str,
port: int,
) -> AsyncGenerator[Union[Portal, LocalPortal], None]:
'''Return a portal instance connected to a local or remote
arbiter.
'''
actor = current_actor()
if not actor:
raise RuntimeError("No actor instance has been defined yet?")
if actor.is_arbiter:
# we're already the arbiter
# (likely a re-entrant call from the arbiter actor)
yield LocalPortal(actor, Channel((host, port)))
else:
async with _connect_chan(host, port) as chan:
async with open_portal(chan) as arb_portal:
yield arb_portal
@acm
async def get_root(
**kwargs,
) -> AsyncGenerator[Portal, None]:
host, port = _runtime_vars['_root_mailbox']
assert host is not None
async with _connect_chan(host, port) as chan:
async with open_portal(chan, **kwargs) as portal:
yield portal
@acm
async def query_actor(
name: str,
arbiter_sockaddr: Optional[tuple[str, int]] = None,
) -> AsyncGenerator[tuple[str, int], None]:
'''
Simple address lookup for a given actor name.
Returns the (socket) address or ``None``.
'''
actor = current_actor()
async with get_arbiter(
*arbiter_sockaddr or actor._arb_addr
) as arb_portal:
sockaddr = await arb_portal.run_from_ns(
'self',
'find_actor',
name=name,
)
# TODO: return portals to all available actors - for now just
# the last one that registered
if name == 'arbiter' and actor.is_arbiter:
raise RuntimeError("The current actor is the arbiter")
yield sockaddr if sockaddr else None
@acm
async def find_actor(
name: str,
arbiter_sockaddr: tuple[str, int] | None = None
) -> AsyncGenerator[Optional[Portal], None]:
'''
Ask the arbiter to find actor(s) by name.
Returns a connected portal to the last registered matching actor
known to the arbiter.
'''
async with query_actor(
name=name,
arbiter_sockaddr=arbiter_sockaddr,
) as sockaddr:
if sockaddr:
async with _connect_chan(*sockaddr) as chan:
async with open_portal(chan) as portal:
yield portal
else:
yield None
@acm
async def wait_for_actor(
name: str,
arbiter_sockaddr: tuple[str, int] | None = None
) -> AsyncGenerator[Portal, None]:
"""Wait on an actor to register with the arbiter.
A portal to the first registered actor is returned.
"""
actor = current_actor()
async with get_arbiter(
*arbiter_sockaddr or actor._arb_addr,
) as arb_portal:
sockaddrs = await arb_portal.run_from_ns(
'self',
'wait_for_actor',
name=name,
)
sockaddr = sockaddrs[-1]
async with _connect_chan(*sockaddr) as chan:
async with open_portal(chan) as portal:
yield portal

tractor/_entry.py 100644
@@ -0,0 +1,138 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Sub-process entry points.
"""
from __future__ import annotations
from functools import partial
from typing import (
Any,
TYPE_CHECKING,
)
import trio # type: ignore
from .log import (
get_console_log,
get_logger,
)
from . import _state
from .to_asyncio import run_as_asyncio_guest
from ._runtime import (
async_main,
Actor,
)
if TYPE_CHECKING:
from ._spawn import SpawnMethodKey
log = get_logger(__name__)
def _mp_main(
actor: Actor, # type: ignore
accept_addr: tuple[str, int],
forkserver_info: tuple[Any, Any, Any, Any, Any],
start_method: SpawnMethodKey,
parent_addr: tuple[str, int] | None = None,
infect_asyncio: bool = False,
) -> None:
'''
The routine called *after fork* which invokes a fresh ``trio.run``
'''
actor._forkserver_info = forkserver_info
from ._spawn import try_set_start_method
spawn_ctx = try_set_start_method(start_method)
if actor.loglevel is not None:
log.info(
f"Setting loglevel for {actor.uid} to {actor.loglevel}")
get_console_log(actor.loglevel)
assert spawn_ctx
log.info(
f"Started new {spawn_ctx.current_process()} for {actor.uid}")
_state._current_actor = actor
log.debug(f"parent_addr is {parent_addr}")
trio_main = partial(
async_main,
actor,
accept_addr,
parent_addr=parent_addr
)
try:
if infect_asyncio:
actor._infected_aio = True
run_as_asyncio_guest(trio_main)
else:
trio.run(trio_main)
except KeyboardInterrupt:
pass # handle it the same way trio does?
finally:
log.info(f"Actor {actor.uid} terminated")
def _trio_main(
actor: Actor, # type: ignore
*,
parent_addr: tuple[str, int] | None = None,
infect_asyncio: bool = False,
) -> None:
'''
Entry point for a `trio_run_in_process` subactor.
'''
log.info(f"Started new trio process for {actor.uid}")
if actor.loglevel is not None:
log.info(
f"Setting loglevel for {actor.uid} to {actor.loglevel}")
get_console_log(actor.loglevel)
log.info(
f"Started {actor.uid}")
_state._current_actor = actor
log.debug(f"parent_addr is {parent_addr}")
trio_main = partial(
async_main,
actor,
parent_addr=parent_addr
)
try:
if infect_asyncio:
actor._infected_aio = True
run_as_asyncio_guest(trio_main)
else:
trio.run(trio_main)
except KeyboardInterrupt:
log.warning(f"Actor {actor.uid} received KBI")
finally:
log.info(f"Actor {actor.uid} terminated")

@@ -1,33 +1,58 @@
+# tractor: structured concurrent "actors".
+# Copyright 2018-eternity Tyler Goodlet.
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Affero General Public License for more details.
+# You should have received a copy of the GNU Affero General Public License
+# along with this program. If not, see <https://www.gnu.org/licenses/>.
 """
 Our classy exception set.

 """
+from typing import (
+    Any,
+    Optional,
+    Type,
+)
 import importlib
 import builtins
 import traceback

+import exceptiongroup as eg
+import trio

 _this_mod = importlib.import_module(__name__)


+class ActorFailure(Exception):
+    "General actor failure"


 class RemoteActorError(Exception):
     # TODO: local reconstruction of remote exception deats
     "Remote actor exception bundled locally"
-    def __init__(self, message, type_str, **msgdata):
+    def __init__(
+        self,
+        message: str,
+        suberror_type: Optional[Type[BaseException]] = None,
+        **msgdata
+    ) -> None:
         super().__init__(message)
-        for ns in [builtins, _this_mod]:
-            try:
-                self.type = getattr(ns, type_str)
-                break
-            except AttributeError:
-                continue
-        else:
-            self.type = Exception
+        self.type = suberror_type
         self.msgdata = msgdata


+# TODO: a trio.MultiError.catch like context manager
+# for catching underlying remote errors of a particular type


 class InternalActorError(RemoteActorError):
     """Remote internal ``tractor`` error indicating
@@ -35,6 +60,14 @@ class InternalActorError(RemoteActorError):
     """


+class TransportClosed(trio.ClosedResourceError):
+    "Underlying channel transport was closed prior to use"


+class ContextCancelled(RemoteActorError):
+    "Inter-actor task context cancelled itself on the callee side."


 class NoResult(RuntimeError):
     "No final result is expected for this actor"
@@ -43,24 +76,102 @@ class ModuleNotExposed(ModuleNotFoundError):
     "The requested module is not exposed for RPC"


+class NoRuntime(RuntimeError):
+    "The root actor has not been initialized yet"


+class StreamOverrun(trio.TooSlowError):
+    "This stream was overrun by sender"


+class AsyncioCancelled(Exception):
+    '''
+    Asyncio cancelled translation (non-base) error
+    for use with the ``to_asyncio`` module
+    to be raised in the ``trio`` side task
+    '''


-def pack_error(exc):
+def pack_error(
+    exc: BaseException,
+    tb=None,
+) -> dict[str, Any]:
     """Create an "error message" for transmission over
     a channel (aka the wire).
     """
+    if tb:
+        tb_str = ''.join(traceback.format_tb(tb))
+    else:
+        tb_str = traceback.format_exc()

     return {
         'error': {
-            'tb_str': traceback.format_exc(),
+            'tb_str': tb_str,
             'type_str': type(exc).__name__,
         }
     }


-def unpack_error(msg, chan=None, err_type=RemoteActorError):
-    """Unpack an 'error' message from the wire
+def unpack_error(
+    msg: dict[str, Any],
+    chan=None,
+    err_type=RemoteActorError
+) -> Exception:
+    '''
+    Unpack an 'error' message from the wire
     into a local ``RemoteActorError``.
-    """
-    tb_str = msg['error'].get('tb_str', '')
-    return err_type(
-        f"{chan.uid}\n" + tb_str,
+    '''
+    __tracebackhide__ = True
+    error = msg['error']
+    tb_str = error.get('tb_str', '')
+    message = f"{chan.uid}\n" + tb_str
+    type_name = error['type_str']
+    suberror_type: Type[BaseException] = Exception
+
+    if type_name == 'ContextCancelled':
+        err_type = ContextCancelled
+        suberror_type = trio.Cancelled
+
+    else:  # try to lookup a suitable local error type
+        for ns in [
+            builtins,
+            _this_mod,
+            eg,
+            trio,
+        ]:
+            try:
+                suberror_type = getattr(ns, type_name)
+                break
+            except AttributeError:
+                continue
+
+    exc = err_type(
+        message,
+        suberror_type=suberror_type,
+        # unpack other fields into error type init
         **msg['error'],
     )
+    return exc


+def is_multi_cancelled(exc: BaseException) -> bool:
+    '''
+    Predicate to determine if a possible ``eg.BaseExceptionGroup`` contains
+    only ``trio.Cancelled`` sub-exceptions (and is likely the result of
+    cancelling a collection of subtasks).
+    '''
+    if isinstance(exc, eg.BaseExceptionGroup):
+        return exc.subgroup(
+            lambda exc: isinstance(exc, trio.Cancelled)
+        ) is not None
+    return False
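# Round-trip sketch for the two helpers above (``chan`` is a hypothetical
# connected ``Channel``; the 'cid' field is normally stamped on by the
# rpc machinery):
#
#   try:
#       raise ValueError('boom')
#   except ValueError as err:
#       msg = pack_error(err)
#       msg['cid'] = 'abc123'
#       exc = unpack_error(msg, chan=chan)
#       assert exc.type is ValueError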

Some files were not shown because too many files have changed in this diff.