For now, stop `.aclose()`-ing all async gens on portal close since it
can cause hangs and other weird behaviour if another task operates on
the same instance.
See https://bugs.python.org/issue32526.
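A minimal repro sketch of the hazard (illustrative, not tractor code;
assumes `trio`):

```python
import trio

async def agen():
    while True:
        yield 1
        await trio.sleep(1)

async def main():
    gen = agen()

    async def consume():
        async for _ in gen:
            pass

    async with trio.open_nursery() as nursery:
        nursery.start_soon(consume)
        await trio.sleep(0.1)
        # the gen is suspended mid-``__anext__`` in the consumer task, so
        # this raises ``RuntimeError: aclose(): asynchronous generator is
        # already running`` (and can hang in other interleavings)
        await gen.aclose()

trio.run(main)
```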
Use an inner function / closure to properly process required arguments
at call time as recommended in the `wrap` docs. Do async-gen and
argument introspection at decoration time and raise appropriate type
errors.
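A simplified sketch of those decorate-time checks (names are
illustrative and this uses plain `functools.wraps` rather than the
`wrapt` machinery):

```python
import inspect
from functools import wraps

def pub(wrapped=None, *, tasks=()):
    # support using the decorator without parentheses
    if wrapped is None:
        return lambda func: pub(func, tasks=tasks)

    # introspection happens once, at decoration time
    if not inspect.isasyncgenfunction(wrapped):
        raise TypeError(f"{wrapped.__name__} must be an async generator function")
    if 'get_topics' not in inspect.signature(wrapped).parameters:
        raise TypeError(f"{wrapped.__name__} must accept a `get_topics` argument")

    @wraps(wrapped)
    async def wrapper(get_topics, *args, **kwargs):
        # required args are (re)processed here, at call time, in the closure
        async for item in wrapped(get_topics, *args, **kwargs):
            yield item

    return wrapper
```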
It turns out you get a bad situation if the target actor whose task
you're trying to cancel has already died (e.g. from an external
`KeyboardInterrupt` or other error), so we need to eventually bail on
the RPC request. Also, don't bother closing the channel created in
`open_portal()` manually since the cancel scope should take care of all
that.
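A minimal sketch of the bail-out idea, assuming `trio` and a
hypothetical `send_rpc` callable:

```python
import trio

async def request_task_cancel(send_rpc, cid: str, timeout: float = 0.5) -> bool:
    """Ask the remote actor to cancel task ``cid`` but give up after
    ``timeout`` seconds since a dead actor will never acknowledge.
    """
    with trio.move_on_after(timeout) as cs:
        await send_rpc('self', 'cancel_task', cid=cid)
    # ``cancelled_caught`` is True when the deadline expired first
    return not cs.cancelled_caught
```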
- when calling the async gen func provided by the user wrap it in
  `async_generator.aclosing()` to ensure correct teardown at cancel time
- expect the gen to yield a dict with topic keys and data values
- add a `packetizer` function argument to the API allowing a user
  to format the data to be published in whatever way desired (see the
  usage sketch after this list)
- support using the decorator without the parentheses (using default
arguments)
- use a `wrapt` "adapter" to override the signature presented to the
`_actor._invoke` inspection machinery
- handle the default case where `tasks` isn't provided; allow only one
concurrent publisher task
- store task locks in an actor local variable
- add a comprehensive doc string
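A hedged usage sketch of the above (the `fetch_quotes` helper and the
argument values are made up for illustration):

```python
import trio
import tractor

@tractor.msg.pub(tasks=['stream_quotes'])  # bare ``@tractor.msg.pub`` also works
async def stream_quotes(get_topics, rate: int = 2):
    while True:
        symbols = get_topics()  # topics currently subscribed by consumers
        quotes = await fetch_quotes(symbols)  # hypothetical data source
        # yield a dict with topic keys and data values
        yield {sym: quote for sym, quote in quotes.items()}
        await trio.sleep(1 / rate)
```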
Use the new `Actor.cancel_task()` API to remotely cancel streaming
tasks spawned by a portal. This guarantees that if an actor is
cancelled all its (remote) portal-spawned tasks will be as well.
On portal teardown, only cancel all async
generator calls (though eventually we should cancel all RPC requests
in general) and don't close the channel since it may have been passed
in from some other context that wishes to keep it connected. In
`open_portal()` run the message loop shielded so that if the local
task is cancelled, messaging will continue until the internal scope
is cancelled at the end of the block.
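A minimal sketch of the shielded-loop idea, assuming `trio`;
`process_messages` and `Portal` stand in for the real internals:

```python
from contextlib import asynccontextmanager
import trio

async def _shielded_msg_loop(channel, task_status=trio.TASK_STATUS_IGNORED):
    # shield so a cancelled caller doesn't kill messaging prematurely
    with trio.CancelScope(shield=True) as cs:
        task_status.started(cs)
        await process_messages(channel)  # hypothetical msg loop

@asynccontextmanager
async def open_portal(channel):
    async with trio.open_nursery() as nursery:
        loop_cs = await nursery.start(_shielded_msg_loop, channel)
        try:
            yield Portal(channel)  # hypothetical Portal type
        finally:
            # end of block: cancel the internal scope to stop messaging
            loop_cs.cancel()
```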
Enable cancelling specific tasks from a peer actor such that when
an actor task or the actor itself is cancelled, remotely spawned tasks
can also be cancelled. In much the same way that you'd expect a node
(task) in the `trio` task tree to cancel any subtasks, actors should
be able to cancel any tasks they spawn in separate processes.
To enable this:
- track RPC tasks in a flat dict keyed by `(chan, cid)` (see the
  sketch after this list)
- store an `is_complete` event to enable waiting on specific
  tasks to complete
- allow for shielding the msg loop inside an internal cancel scope
if requested by the caller; there was an issue with `open_portal()`
where the channel would be torn down because the current task was
cancelled but we still need messaging to continue until the portal
block is exited
- throw an error if the arbiter tries to find itself for now
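A sketch of that bookkeeping (illustrative, not the exact internals):

```python
import trio

class Actor:
    def __init__(self):
        # RPC tasks tracked in a flat dict keyed by (chan, cid)
        self._rpc_tasks = {}  # (chan, cid) -> (CancelScope, Event)

    async def cancel_task(self, chan, cid) -> bool:
        try:
            scope, is_complete = self._rpc_tasks[(chan, cid)]
        except KeyError:
            # task was never tracked (or already completed) - don't fail
            return False
        scope.cancel()
        # the stored event lets us wait for the task to actually finish
        await is_complete.wait()
        return True
```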
Add a draft pub-sub API, `@tractor.msg.pub`, which allows decorating
an async generator that streams topic-keyed dictionaries for delivery
to multiple calling / consuming tasks.
Instead of chan/cid, whenever a remote function defines a `ctx` argument
name, deliver a `Context` instance to the function. This allows remote
funcs to provide async-generator-like streaming replies (and maybe more
later).
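A hedged sketch of what the `ctx` convention enables; the
`send_yield()` streaming method is an assumption here:

```python
async def stream_squares(ctx, limit: int):
    # ``ctx`` is injected because of the argument name (no chan/cid juggling)
    for i in range(limit):
        await ctx.send_yield(i ** 2)  # assumed streaming method on ``Context``
```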
Additionally,
- load actor modules *after* establishing a connection to the spawning
parent to avoid crashing before the error can be reported upwards
- fix a bug to do with unpacking and raising local internal actor errors
from received messages
RPC module/function lookups should not cause the target actor to crash.
This change instead ships the error back to the calling actor allowing
for the remote actor to continue running depending on the caller's
error handling logic. Adds a new `ModuleNotExposed` error to accommodate.
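A hedged sketch of the caller-side handling this enables (the actor
setup args are illustrative):

```python
import tractor

async def main():
    async with tractor.open_nursery() as n:
        portal = await n.start_actor('worker', rpc_module_paths=['mylib'])
        try:
            # 'os.path' was never exposed so the lookup errors remotely...
            await portal.run('os.path', 'join')
        except tractor.ModuleNotExposed:
            # ...but the worker is still alive; the caller decides what next
            await portal.cancel_actor()
```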
I'm not sure how this ever worked but when a "fake" async gen
(i.e. a function with special `chan`, `cid` kwargs) completes
we need to signal the end of the stream just like with normal
async gens. Also don't fail when trying to remove tasks that were
never tracked.
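An illustrative sketch of the fix; the end-of-stream message shape is
assumed here, not taken from the actual wire format:

```python
async def _invoke_fake_agen(func, chan, cid, **kwargs):
    # run the "fake" async gen: a plain async func that streams via chan/cid
    await func(chan=chan, cid=cid, **kwargs)
    # on completion, emit the same end-of-stream sentinel that real
    # async gens get so consumers know to stop iterating
    await chan.send({'stop': True, 'cid': cid})
```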
Fixes #46