I'm not sure how this ever worked, but when a "fake" async gen
(i.e. a function accepting the special `chan` and `cid` kwargs) is completed
we need to signal the end of the stream just like with normal
async gens. Also don't fail when trying to remove tasks that were
never tracked.
Fixes #46
At the expense of a bit more complexity in `ActorNursery.wait()`
(which I commented the heck out of fwiw) this adds far superior and
correct cancellation semantics for when a nursery is cancelled due
to (remote) errors in subactors.
This includes:
- `wait()` will now raise a `trio.MultiError` if multiple subactors
error, with the same semantics as in `trio`.
- in `wait()`, portals which are paired with `run_in_actor()`
spawned subactors (versus `start_actor()`) are waited on separately,
and if the nursery **hasn't** been cancelled but there are errors
those are raised immediately before waiting on `start_actor()`
subactors, which will block indefinitely if they haven't been
explicitly cancelled.
- if `wait()` does raise when the nursery hasn't yet been cancelled,
it's expected that it will be called again depending on the actor
supervision strategy (i.e. right now we operate with a one-cancels-all
strategy, the same as `trio`, so `ActorNursery.__aexit__()` calls
`cancel()` if any error is raised by `wait()`).
Oh, and I added an `is_main_process()` helper; can't remember why...
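For the cancellation semantics above, usage ends up looking roughly like the
following sketch; `tractor.open_nursery()`, the `run_in_actor()` signature and
the ability to run a locally defined function remotely are all assumptions
here, not the settled API:

```python
import trio
import tractor

async def fail():
    raise RuntimeError("boom")

async def main():
    try:
        async with tractor.open_nursery() as nursery:
            # both subactors error; one-cancels-all means the nursery
            # cancels everything else and collects the failures
            await nursery.run_in_actor('bad_actor_1', fail)
            await nursery.run_in_actor('bad_actor_2', fail)
    except trio.MultiError as err:
        # multiple subactor errors surface just like in `trio`
        print(f"{len(err.exceptions)} subactors errored")

tractor.run(main)
```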
Use the new custom error types throughout the actor and portal
primitives and set a few new rules:
- internal errors are any error not raised by an rpc task and are
**not** forwarded to portals but instead are raised directly in
the msg loop.
- portals always re-raise a "main task" error for every call to
``Portal.result()``.
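A sketch of what the portal-side rule implies for callers; the error type name
is an assumption about the new custom error types:

```python
import tractor

async def report(portal):
    # *every* call to `Portal.result()` re-raises the remote
    # "main task" error, so polling it twice surfaces the same failure
    for _ in range(2):
        try:
            await portal.result()
        except tractor.RemoteActorError as err:  # assumed error type
            print("remote task errored:", err)
```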
When an actor has already been registered with the arbiter it should
exist in the registry and thus the wait event should have been removed.
Check that the registry indeed holds an event before clearing it.
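A minimal sketch of the guard's shape, with the waiter mapping and names
assumed:

```python
import trio

def notify_waiter(waiters: dict, uid):
    # an already-registered actor may have no parked wait event, so
    # only set (and drop) one if it's actually present
    event = waiters.get(uid)
    if isinstance(event, trio.Event):
        event.set()
        del waiters[uid]
```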
This is purely for documentation purposes for now as it should be
obvious a bunch of the signatures aren't using the correct "generics"
syntax (i.e. the use of `(str, int)` instead of `typing.Tuple[str, int]`)
in a bunch of places. We're also not using a type checker yet and besides,
`trio` doesn't really expose a lot of its internal types very well.
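For reference, the corrected generics syntax looks like this (the function is
just an illustrative stand-in):

```python
from typing import Tuple

def parse_addr(addr: str) -> Tuple[str, int]:
    # `(str, int)` is not a valid annotation; the generic form is
    host, port = addr.split(':')
    return host, int(port)
```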
Something changed in Python 3.7 (likely to do with changes to the core
import system) that requires explicitly importing our version
of `forkserver.main()` in order to guarantee the server runs our
module code. Override `forkserver.ensure_running()`; specifically,
modify the python launch command.
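A rough stub of the kind of override meant here; the module path in the
command string is an assumption and the real body mirrors the 3.7 stdlib
implementation:

```python
from multiprocessing import forkserver

class PatchedForkServer(forkserver.ForkServer):
    """Stub only: re-implement the stdlib ``ensure_running()`` body
    verbatim except for the ``-c`` launch command, which is swapped to
    import *our* ``main()`` instead of the stdlib's.
    """
    def ensure_running(self):
        # cmd = ('from tractor._forkserver_override import main; '
        #        'main(%d, %d, %r, **%r)')  # module path assumed
        raise NotImplementedError("copy the stdlib body, swap `cmd`")
```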
This ensures that internal errors received from a remote actor are
indeed raised even in the `MainProcess` **before** comms tasks are
cancelled. An internal error in this case means any error packet received
on a channel that doesn't have a `cid` header. RPC errors (which **do**
have a `cid` header) are still forwarded to the consuming caller as usual.
If an internal error is bubbled up from some sub-actor, throw that error
into the `MainProcess` "main" async function / coro in order to trigger
nursery teardowns (i.e. cancellations) that need to be done.
I'll likely change this shortly back to where we run a "main task"
inside `actor._async_main()`...
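A condensed sketch of the dispatch rule in the msg loop; the names below stand
in for tractor internals and aren't the actual implementation:

```python
class RemoteActorError(Exception):
    """Placeholder for the real error type carried in error packets."""

async def process_error_msgs(actor, chan):
    async for msg in chan:  # assumes the channel yields decoded dicts
        if 'error' not in msg:
            continue  # normal rpc traffic is handled elsewhere
        cid = msg.get('cid')
        if cid:
            # rpc error: forward to whichever caller awaits this `cid`
            await actor._push_result(chan, cid, msg)  # assumed helper
        else:
            # internal error: nothing is waiting on it, so raise it
            # directly in the msg loop (and thus the "main" coro)
            raise RemoteActorError(msg['error'])
```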
Allows for waiting on another actor (by name) to register with the
arbiter. This makes synchronized actor spawning and consecutive task
coordination easier to accomplish from within sub-actors.
Resolves #31
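Usage reads something like the following; the helper name and
async-context-manager shape are assumptions:

```python
import tractor

async def consumer():
    # don't race the target's startup: block until an actor named
    # 'service' appears in the arbiter's registry, then get a portal
    async with tractor.wait_for_actor('service') as portal:
        ...  # use the portal to submit tasks / stream results
```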
This allows for registering more than one actor with the same "name"
when you have multiple actors fulfilling the same role. Eventually
we'll need support for looking up all actors registered under a given
"service name" (or whatever we decide to call it).
Also, a fix to the arbiter such that each new instance refers to a
separate `_registry` dict (found an issue with duplicate names during
testing).
Resolves #7
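The duplicate-name issue reads like the classic shared-class-attribute
pitfall; a hedged sketch of the fix with the registry layout assumed:

```python
from collections import defaultdict

class Arbiter:  # stand-in for the real arbiter actor
    def __init__(self):
        # bound per-instance: a class-level `_registry = {}` would be
        # shared by every arbiter created in-process, which is how the
        # duplicate-name bug showed up in testing
        self._registry = defaultdict(list)

    def register_actor(self, name, sockaddr):
        # multiple actors may register under the same "service name"
        self._registry[name].append(sockaddr)
```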
Start a forkserver once in the main (parent-most) process
and pass ipc info (fds) to subprocesses manually such that embedded
calls to `multiprocessing.Process.start()` just work. Note that this
relies on our overridden version of the stdlib's
`multiprocessing.forkserver` module.
Resolves #6
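A rough sketch of gathering the ipc info to hand down; the private attribute
names come from the 3.7 stdlib and should be treated as assumptions:

```python
from multiprocessing import forkserver, semaphore_tracker  # 3.7 stdlib

def get_parent_ipc_info():
    # run once in the parent-most process, after the server and
    # tracker have been started; subprocesses reuse these values
    # instead of spawning their own server/tracker pair
    fs = forkserver._forkserver
    return (
        fs._forkserver_address,
        fs._forkserver_alive_fd,
        semaphore_tracker._semaphore_tracker._fd,
    )
```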
The stdlib insists on creating multiple forkservers and semaphore trackers
for each sub-sub-process launched. This isn't ideal since it costs each
`tractor` sub-actor two more processes than necessary and is
confusing when viewed as a process tree (eg. via `pstree`).
The majority of the change is simply avoiding the call to
`forkserver.ensure_running()` and `semaphore_tracker.ensure_running()`
in `ForkServer.connect_new_process()` and instead treating the user like
an adult and expecting those calls to be made *once* in the parent-most
process (i.e. what `multiprocessing` calls the `MainProcess`).
Really, a proper patch should be made against CPython which allows for
similar manual management of the server along with a mechanism to communicate
forkserver and semaphore tracker fd info to sub-processes such that
further calls to `Process.start()` work as expected.
Relates to #6
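In practice the expectation is simply this, sketched against Python 3.7's
stdlib layout:

```python
from multiprocessing import forkserver, semaphore_tracker  # Python 3.7

def bootstrap_main_process():
    # called exactly once in the parent-most "MainProcess"; nested
    # Process.start() calls in subactors then reuse this single
    # forkserver/tracker pair rather than each spawning their own
    forkserver.ensure_running()
    semaphore_tracker.ensure_running()
```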
Stop worrying about a "main task" in each actor and instead add an
additional `ActorNursery.run_in_actor()` method which wraps calls
to create an actor and run a lone RPC task inside it. Note this
adjusts the public API of `ActorNursery.start_actor()` to drop
its `main` kwarg.
The dirty details of making this possible:
- each spawned RPC task is now tracked with a specific cancel scope such
that when the actor is cancelled all ongoing responders are cancelled
before any IPC/channel machinery is closed (turns out that spawning
new actors from `outlive_main=True` actors was probably borked before
finally getting this working).
- make each initial RPC response be a packet which describes the
`functype` (eg. `{'functype': 'asyncfunction'}`) allowing for async
calls/submissions by client actors (this was required to make
`run_in_actor()` work - `Portal._submit()` is the new async method).
- hooray we can stop faking "main task" results for daemon actors
- add better handling/raising of internal errors caught in the bowels of
the `Actor` itself.
- drop the rpc spawning nursery; just use the `Actor._root_nursery`
- only wait on `_no_more_peers` if there are existing peer channels that
are actually still connected.
- `ActorNursery.__aexit__()` now implicitly waits on `Portal.result()` on close
for each `run_in_actor()` spawned actor.
- handle cancelling partially started actors which haven't yet connected
back to the parent
Resolves #24
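A sketch of how the two spawning styles differ after this change; the
signatures and kwargs are approximations, not the exact API:

```python
import tractor

async def compute():
    return 42

async def main():
    async with tractor.open_nursery() as nursery:
        # daemon style: the actor serves rpc requests until cancelled
        # (no more faked "main task" result)
        daemon = await nursery.start_actor('daemon')
        # one-shot style: spawn an actor, submit a single task, and
        # collect its result through the returned portal
        portal = await nursery.run_in_actor('worker', compute)
        print(await portal.result())
        await daemon.cancel_actor()

tractor.run(main)
```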
Take @njsmith's advice and properly close actor invoked async generators
using `async_generator.aclosing()` instead of hacking it (as previously)
with a shielded cancel scope.
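For reference, the pattern being adopted looks like this (the generator is
just a stand-in):

```python
from async_generator import aclosing

async def stream():
    for i in range(3):
        yield i

async def consume():
    # guarantees the generator's aclose() runs even if the consumer
    # is cancelled mid-iteration, instead of shielding the cleanup
    async with aclosing(stream()) as agen:
        async for item in agen:
            print(item)
```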
Cancellation requires that each actor cancel its spawned subactors
before cancelling its own root (nursery's) cancel scope to avoid breaking
channel connections before kill commands (`Actor.cancel()`) have been sent
off to peers. To solve this, ensure each main task is cancelled to
completion first (which will guarantee that all actor nurseries have
completed their cancellation steps) before cancelling the actor's "core"
tasks under the "root" scope.
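The shape of that ordering, with both names below being hypothetical:

```python
async def cancel(actor):
    # 1) drive the "main task" to completion/cancellation first so its
    #    nurseries finish sending Actor.cancel() to subactors while
    #    the channels are still connected
    await actor.cancel_main_task()  # hypothetical helper
    # 2) only then cancel the "core" tasks (channel server, msg loops)
    #    under the actor's root scope
    actor._root_scope.cancel()  # hypothetical attribute
```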
Here is a bunch of code tightening to make sure cancellation works even
if recently spawned actors haven't fully started up and the parent is
cancelled.
The fixes include:
- passing the arbiter socket address to each actor
- ensure all spawned actors respect the spawner's log level
- handle process versus final `portal.result()` teardown in multiple
tasks such that if a proc dies before shipping a result we don't wait
- more detailed debug logging in teardown code paths
- don't store peer connected events in the same `dict` as the peer channels
- if necessary fake main task results on peer channel disconnect
- warn when a `trio.Cancelled` is what causes a nursery to bail, otherwise
log an error
- store the subactor portal in the nursery for teardown purposes
- add dedicated `Portal.cancel_actor()` which acts as a "hard cancel"
and never blocks (indefinitely); see the sketch after this list
- add `Arbiter.unregister_actor()`; it's more explicit about what's being
requested
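The sketch promised above, illustrating the "never blocks (indefinitely)"
behaviour of the hard cancel; the remote call and grace period are stand-ins,
not the actual implementation:

```python
import trio

async def hard_cancel(portal):
    # bail after a short grace period rather than waiting forever on
    # an ack from a peer that may already be dead
    with trio.move_on_after(0.5) as cs:
        await portal.run('self', 'cancel')  # assumed remote call
    if cs.cancelled_caught:
        print("peer never acked the cancel; tearing down anyway")
```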
- Allow passing in a program-wide `loglevel`
- Add detailed debug logging particularly to do with channel msg processing
and connection handling
- Don't daemonize subprocesses for now as it prevents use of
sub-sub-actors (need to solve #6 first)
- Add a `Portal.close()` which just tells the remote actor to tear down
the channel (for now)
- Add a message to signal the remote `StopAsyncIteration` from an async
gen such that the client side terminates properly as well
- Make `Actor.cancel()` cancel the channel server first
- Actors *must* complete the arbiter registration steps before moving
on with their main tasks and rpc handling
- When delivering rpc responses (using the local per-caller queue) use
the blocking interface (`trio.Queue.put()`) to get backpressure
- Properly detect `partial`-wrapped async generators in `_invoke()`
(see the sketch below)
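The detection sketch mentioned in the last item; the helper name is made up
and `_invoke()` presumably inlines an equivalent check:

```python
import inspect
from functools import partial

def is_async_gen_func(func):
    # isasyncgenfunction() returns False when handed a
    # functools.partial object, so unwrap it before checking
    if isinstance(func, partial):
        func = func.func
    return inspect.isasyncgenfunction(func)
```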
Fix quite a few little bugs:
- async gen func detection in `_invoke()`
- always cancel channel server on main task exit
- wait for remaining channel peers after unsub from arbiter
- return result from main task(s) all the way up to `tractor.run()`
Also add a `Portal.result()` for getting the final result(s) from the
actor's main task and fix up a bunch of docs.
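Roughly, the result plumbing now supports usage along these lines (a minimal
sketch; the spawning details are elided):

```python
import tractor

async def main():
    # ... spawn actors, await their `Portal.result()`s ...
    return 10

# the main task's return value now propagates all the way out
assert tractor.run(main) == 10
```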