Commit Graph

453 Commits (f72eabd42aa32426c4db087ac325ace26bf5cde5)

Author SHA1 Message Date
Tyler Goodlet 78ddd33e3a Move to `trio.CancelScope` 2019-02-16 14:25:06 -05:00
Tyler Goodlet 02e0c0e1a4 `trio.ClosedResourceError` is deprecated 2019-02-16 14:05:24 -05:00
Tyler Goodlet fe1c4dbc4c mypy and docs fixups 2019-02-16 14:05:03 -05:00
Tyler Goodlet 616192d853 Don't use async gen functions for the stream API
As mentioned in prior commits there's currently a bug in Python that
makes async gens **not** task safe. Since this is the core cause of almost
all recent problems, instead implement our own async iterator derivative of
`trio.abc.ReceiveChannel` by wrapping a `trio._channel.MemoryReceiveChannel`.
This fits more natively with the memory channel API in ``trio`` and adds
potentially more flexibility for possible bidirectional inter-actor streaming
in the future.

Huge thanks to @oremanj and of course @njsmith for guidance on this one!
2019-02-15 21:59:42 -05:00
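A minimal sketch of the pattern this commit describes, assuming an illustrative name `StreamReceiveChannel` (tractor's actual class differs): wrap a `MemoryReceiveChannel` in a `trio.abc.ReceiveChannel` subclass so consumers get a plain async iterable with no async-generator machinery involved:

```python
import trio


class StreamReceiveChannel(trio.abc.ReceiveChannel):
    """Task-safe stream: a plain async iterator, no async-gen state."""

    def __init__(self, rx_chan):
        self._rx_chan = rx_chan

    async def receive(self):
        # delegate to the wrapped memory channel
        return await self._rx_chan.receive()

    async def aclose(self):
        await self._rx_chan.aclose()


async def main():
    tx, rx = trio.open_memory_channel(1)
    stream = StreamReceiveChannel(rx)
    async with tx:
        await tx.send('msg')
    # the ABC supplies ``__aiter__``/``__anext__`` in terms of receive()
    async for msg in stream:
        print(msg)


trio.run(main)
```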
Tyler Goodlet b91d13cfea Use local actor var 2019-02-15 17:11:26 -05:00
Tyler Goodlet 61680b3729 Use a receive mem channel inside portals
For now stop `.aclose()`-ing all async gens on portal close since it can
cause hangs and other weird behaviour if another task operates on the
same instance.

See https://bugs.python.org/issue32526.
2019-02-15 16:27:18 -05:00
Tyler Goodlet f44ac4528a Use mem chan in actor core 2019-02-15 16:23:58 -05:00
Tyler Goodlet 3d0de25f93 Do proper `wrapt` arg extraction for type checking
Use an inner function / closure to properly process required arguments
at call time as is recommended in the `wrapt` docs. Do async gen and
arg introspection at decorate time and raise appropriate type errors.
2019-01-25 00:10:13 -05:00
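A hypothetical sketch of that `wrapt` pattern (the names `typed_stream` and `get_topics` are illustrative, not tractor's): introspect the wrapped func at decorate time, then bind required arguments via an inner function at call time so positional and keyword use both work:

```python
import inspect

import wrapt


def typed_stream(wrapped):
    # decorate-time introspection: only async gen funcs may be wrapped
    if not inspect.isasyncgenfunction(wrapped):
        raise TypeError(f"{wrapped} must be an async generator function")

    @wrapt.decorator
    def wrapper(wrapped, instance, args, kwargs):
        # call-time arg extraction via an inner function, per the
        # ``wrapt`` docs; raises TypeError itself if the arg is missing
        def _check(get_topics, *args, **kwargs):
            if not isinstance(get_topics, (list, tuple)):
                raise TypeError("`get_topics` must be a list or tuple")

        _check(*args, **kwargs)
        return wrapped(*args, **kwargs)

    return wrapper(wrapped)
```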
Tyler Goodlet 1b405ab4fe s/tickers/topics 2019-01-23 22:35:59 -05:00
Tyler Goodlet 3b19e15306 Don't allow cancelling a cancel_task() task 2019-01-23 20:01:29 -05:00
Tyler Goodlet 855f959768 Don't log traceback on kb interrupt 2019-01-23 20:00:57 -05:00
Tyler Goodlet 9f41297298 Timeout on remote task cancellation
Turns out you get a bad situation if the target actor whose task you're
trying to cancel has already died (eg. from an external
`KeyboardInterrupt` or other error) and so we need to eventually bail on
the RPC request. Also don't bother closing the channel created in
`open_portal()` manually since the cancel scope should take care of all
that.
2019-01-23 19:20:13 -05:00
Tyler Goodlet 226312042a Fix type annots 2019-01-23 00:41:45 -05:00
Tyler Goodlet b6cc1e8c22 More pub decorator improvements
- when calling the async gen func provided by the user, wrap it in
  `@async_generator.aclosing` to ensure correct teardown at cancel time
- expect the gen to yield a dict with topic keys and data values
- add a `packetizer` function argument to the api allowing a user
  to format the data to be published in whatever way desired
- support using the decorator without the parentheses (using default
  arguments)
- use a `wrapt` "adapter" to override the signature presented to the
  `_actor._invoke` inspection machinery
- handle the default case where `tasks` isn't provided; allow only one
  concurrent publisher task
- store task locks in an actor local variable
- add a comprehensive doc string
2019-01-21 09:21:12 -05:00
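A simplified sketch of the decorator shape described in this commit (task locks and the `wrapt` adapter are omitted; `pub`/`publisher` internals here are illustrative, not tractor's exact code):

```python
from functools import partial

from async_generator import aclosing


def pub(wrapped=None, *, tasks=(), packetizer=None):
    if wrapped is None:
        # parens form: ``@pub(tasks=['a'], packetizer=fmt)``
        return partial(pub, tasks=tasks, packetizer=packetizer)

    async def publisher(subscribers, *args, **kwargs):
        # wrap the user's async gen for correct teardown at cancel time
        async with aclosing(wrapped(*args, **kwargs)) as agen:
            async for topics2data in agen:
                # the gen must yield a dict of topic keys -> data values
                for topic, data in topics2data.items():
                    packet = packetizer(topic, data) if packetizer else data
                    for send_chan in subscribers.get(topic, ()):
                        await send_chan.send(packet)

    return publisher
```

The `wrapped=None` default is what makes both the bare `@pub` and the called `@pub(...)` forms work with one function.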
Tyler Goodlet 97f709cc14 Cancel remote streaming tasks on a local cancel
Use the new `Actor.cancel_task()` api to remotely cancel streaming
tasks spawned by a portal. This guarantees that if an actor is
cancelled all its (remote) portal spawned tasks will be as well.

On portal teardown only cancel all async
generator calls (though we should cancel all RPC requests in general
eventually) and don't close the channel since it may have been passed
in from some other context that wishes to keep it connected. In
`open_portal()` run the message loop shielded so that if the local
task is cancelled, messaging will continue until the internal scope
is cancelled at end of block.
2019-01-21 00:45:54 -05:00
Tyler Goodlet 03e00886da Add `Actor.cancel_task()`
Enable cancelling specific tasks from a peer actor such that when
an actor task or the actor itself is cancelled, remotely spawned tasks
can also be cancelled. In much the same way that you'd expect a node
(task) in the `trio` task tree to cancel any subtasks, actors should
be able to cancel any tasks they spawn in separate processes.

To enable this:
- track rpc tasks in a flat dict keyed by (chan, cid)
- store a `is_complete` event to enable waiting on specific
  tasks to complete
- allow for shielding the msg loop inside an internal cancel scope
  if requested by the caller; there was an issue with `open_portal()`
  where the channel would be torn down because the current task was
  cancelled but we still need messaging to continue until the portal
  block is exited
- throw an error if the arbiter tries to find itself for now
2019-01-21 00:38:07 -05:00
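A minimal sketch of the bookkeeping this commit describes, with illustrative names: rpc tasks tracked in a flat dict keyed by `(chan, cid)` alongside their cancel scope and a completion event to wait on:

```python
import trio


class Actor:
    def __init__(self):
        # flat map: (channel, caller id) -> (scope, func, done event)
        self._rpc_tasks = {}

    def track_rpc_task(self, chan, cid, scope, func):
        self._rpc_tasks[(chan, cid)] = (scope, func, trio.Event())

    async def cancel_task(self, chan, cid):
        scope, func, is_complete = self._rpc_tasks[(chan, cid)]
        scope.cancel()
        # wait on the completion event so cancellation is confirmed
        # before reporting back to the requesting peer
        await is_complete.wait()

    def task_complete(self, chan, cid):
        *_, is_complete = self._rpc_tasks.pop((chan, cid))
        is_complete.set()
```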
Tyler Goodlet 251ee177fa Make the `Context` a dataclass 2019-01-20 21:47:08 -05:00
Tyler Goodlet 76f7ae5cf4 Log about the loglevel 2019-01-16 17:09:30 -05:00
Tyler Goodlet c58a6ea80f Fix type annots 2019-01-16 16:50:30 -05:00
Tyler Goodlet fbb6af47f8 Add a pub-sub messaging decorator API
Add a draft pub-sub API `@tractor.msg.pub` which allows
for decorating an asyn generator which can stream topic keyed
dictionaries for delivery to multiple calling / consuming tasks.
2019-01-16 12:19:01 -05:00
Tyler Goodlet 06c908f285 Wrap remote import-time errors just the same 2019-01-12 17:56:22 -05:00
Tyler Goodlet fffddf88dd Change parent type 2019-01-12 17:55:28 -05:00
Tyler Goodlet 7377598683 Properly respect `rpc_module_paths` in `run_in_actor()` 2019-01-12 17:55:08 -05:00
Tyler Goodlet be20e1488b Fix type annotations 2019-01-12 15:32:41 -05:00
Tyler Goodlet 41f2096e86 Adopt `Context` in the RPC core
Instead of chan/cid, whenever a remote function defines a `ctx` argument
name, deliver a `Context` instance to the function. This allows remote
funcs to provide async generator like streaming replies (and maybe more
later).

Additionally,
- load actor modules *after* establishing a connection to the spawning
  parent to avoid crashing before the error can be reported upwards
- fix a bug to do with unpacking and raising local internal actor errors
  from received messages
2019-01-12 15:27:38 -05:00
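A sketch of the `ctx` delivery rule, assuming a pared-down `Context` dataclass (per the commit below) and a hypothetical `bind_ctx()` helper in place of the real `_invoke()` machinery:

```python
import inspect
from dataclasses import dataclass


@dataclass
class Context:
    """Per-call rpc state handed to functions that ask for it."""
    chan: object  # the IPC channel to the caller
    cid: str      # the caller's unique call id

    async def send_yield(self, data):
        # stream one response packet back to the caller (sketch)
        await self.chan.send({'yield': data, 'cid': self.cid})


def bind_ctx(func, chan, cid, kwargs):
    # if the target names a ``ctx`` argument, deliver a Context
    # instead of raw ``chan``/``cid`` kwargs
    if 'ctx' in inspect.signature(func).parameters:
        kwargs['ctx'] = Context(chan, cid)
    return kwargs
```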
Tyler Goodlet 87a6165430 Add a `Context` type for task/protocol RPC state
This is loosely based off the nanomsg concept found here:
https://nanomsg.github.io/nng/man/v1.1.0/nng_ctx.5
2019-01-12 14:31:17 -05:00
Tyler Goodlet a9932e6c01 Allow passing error type to `unpack_error()` 2019-01-12 13:26:02 -05:00
Tyler Goodlet 5fab61412c Propagate module import and func lookup errors
RPC module/function lookups should not cause the target actor to crash.
This change instead ships the error back to the calling actor, allowing
the remote actor to continue running depending on the caller's
error handling logic. Adds a new `ModuleNotExposed` error to accommodate.
2019-01-01 16:12:34 -05:00
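A hedged sketch of the lookup guard: resolve the target function up front and ship any failure back as an error msg instead of crashing the serving actor (`get_rpc_func`, `pack_error`, and `invoke` are illustrative names):

```python
import traceback


class ModuleNotExposed(Exception):
    """The requested module is not exposed for rpc."""


def pack_error(exc):
    # serialize for transport; called from within the except block
    return {'error': {
        'tb_str': traceback.format_exc(),
        'type_str': type(exc).__name__,
    }}


async def handle_call(actor, chan, cid, ns, funcname):
    try:
        func = actor.get_rpc_func(ns, funcname)  # may raise either error
    except (ModuleNotExposed, AttributeError) as err:
        # ship it back; the serving actor keeps running and the caller
        # decides how to handle the failure
        err_msg = pack_error(err)
        err_msg['cid'] = cid
        await chan.send(err_msg)
        return
    await actor.invoke(func, chan, cid)  # normal rpc path (sketch)
```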
Tyler Goodlet ef23055d12 Use proper typing syntax 2019-01-01 12:14:57 -05:00
Tyler Goodlet eb6e82f577 Close all portal created async gens on shutdown 2018-12-15 02:20:55 -05:00
Tyler Goodlet db85e13657 Use a fifo lock for IPC 2018-12-15 02:20:19 -05:00
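The fifo lock idea in a few lines, using `trio.StrictFIFOLock` (a sketch; the actual lock placement inside tractor's IPC layer may differ):

```python
import trio

# one lock per shared channel; waiters are woken strictly in FIFO
# order so competing senders can't interleave or starve each other
send_lock = trio.StrictFIFOLock()


async def locked_send(chan, msg):
    async with send_lock:
        await chan.send(msg)
```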
Tyler Goodlet d492236f3a Handle broken channels more resiliently on teardown 2018-12-15 02:19:47 -05:00
Tyler Goodlet 32c7a06e6a Cancel remote async gens when `aclose()` is called 2018-12-10 23:13:25 -05:00
Tyler Goodlet 4dccb44c67 Add support for cancelling remote tasks via a msg 2018-12-10 23:12:46 -05:00
goodboy 26ab45636e
Merge pull request #48 from tgoodlet/loglevel_to_tractor_tests
Support `loglevel` fixture injection
2018-11-30 08:34:52 -05:00
Tyler Goodlet a588047ad4 Stop channel based async gen streams on exit
I'm not sure how this ever worked but when a "fake" async gen
(i.e. a function with special `chan`, `cid` kwargs) is completed
we need to signal the end of the stream just like with normal
async gens. Also don't fail when trying to remove tasks that were
never tracked.

Fixes #46
2018-11-30 01:24:59 -05:00
Tyler Goodlet f81e802219 Support `loglevel` fixture injection
For `pytest`, support defining a `loglevel` fixture value which will be
passed into internals when using `@tractor_test`.
2018-11-30 01:11:08 -05:00
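A hypothetical `conftest.py` sketch of such a fixture (the `--ll` option name is an assumption):

```python
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--ll", action="store", default=None,
        help="logging level to pass through to tractor internals",
    )


@pytest.fixture
def loglevel(request):
    # consumed by ``@tractor_test`` decorated tests
    return request.config.getoption("--ll")
```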
Tyler Goodlet 512a2f25a2 Expose `tractor_test` in the same way as `trio` 2018-11-26 11:26:04 -05:00
Tyler Goodlet 0879150399 Move `tractor_test` to new module 2018-11-26 11:20:53 -05:00
Tyler Goodlet a482681f9c Leverage `pytest.raises()` better; fix a bunch of docs 2018-11-22 11:43:04 -05:00
Tyler Goodlet 0a240187c6 Log the exception when unable to ship back rpc errors 2018-11-19 16:52:55 -05:00
Tyler Goodlet 82fcf025cc Fix: MultiError isn't an Exception... 2018-11-19 14:16:09 -05:00
Tyler Goodlet 1bb37dbddf Expose trio.MultiError publicly 2018-11-19 14:15:28 -05:00
Tyler Goodlet 9bb8a062eb mypy fixes 2018-11-19 08:47:42 -05:00
Tyler Goodlet 835d1fa07a Vastly improve error triggered cancellation
At the expense of a bit more complexity in `ActorNursery.wait()`
(which I commented the heck out of fwiw) this adds far superior and
correct cancellation semantics for when a nursery is cancelled due
to (remote) errors in subactors.

This includes:
- `wait()` will now raise a `trio.MultiError` if multiple subactors
  error with the same semantics as in `trio`.
- in `wait()` portals which are paired with `run_in_actor()`
  spawned subactors (versus `start_actor()`) are waited on separately
  and if the nursery **hasn't** been cancelled but there are errors
  those are raised immediately before waiting on `start_actor()`
  subactors which will block indefinitely if they haven't been
  explicitly cancelled.
- if `wait()` does raise when the nursery hasn't yet been cancelled
  it's expected that it will be called again depending on the actor
  supervision strategy (i.e. right now we operate with a one-cancels-all
  strategy, the same as `trio`, so `ActorNursery.__aexit__()` calls
  `cancel()` if any error is raised by `wait()`).

Oh and I added an `is_main_process()` helper; can't remember why...
2018-11-19 08:44:19 -05:00
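A sketch of the error aggregation rule, mirroring `trio`'s semantics of that era (the helper name is illustrative):

```python
import trio


def maybe_raise(errors: list):
    # multiple subactor failures get wrapped, a lone failure is
    # raised bare, no failures means a clean exit
    if len(errors) > 1:
        raise trio.MultiError(errors)
    elif errors:
        raise errors[0]
```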
Tyler Goodlet e75b25dc21 Improve error propagation machinery
Use the new custom error types throughout the actor and portal
primitives and set a few new rules:
- internal errors are any error not raised by an rpc task and are
  **not** forwarded to portals but instead are raised directly in
  the msg loop.
- portals always re-raise a "main task" error for every call to
  ``Portal.result()``.
2018-11-19 04:05:07 -05:00
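A sketch of the routing rule described above, with illustrative names (the `rpc_channels` values are assumed to be memory send channels):

```python
class RemoteActorError(Exception):
    """Error raised in a remote actor, rebuilt locally."""


def unpack_error(msg):
    # reconstruct a local exception from the serialized payload
    return RemoteActorError(msg['error']['tb_str'])


def route(msg, rpc_channels):
    cid = msg.get('cid')
    if cid is None:
        # no caller id -> internal error: raise in the msg loop itself
        raise unpack_error(msg)
    # rpc error: forward to the portal/caller awaiting this cid;
    # ``Portal.result()`` re-raises it on the other end
    rpc_channels[cid].send_nowait(msg)
```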
Tyler Goodlet 2f6609ab78 Add custom exceptions with msg (un)packing 2018-11-19 03:51:12 -05:00
Tyler Goodlet 71bb87aa3a Drop deprecated trio error 2018-11-09 01:53:15 -05:00
Tyler Goodlet 12fa5542b1 Oh, mypy... 2018-11-09 01:52:57 -05:00
Tyler Goodlet 2ce8e06619 Some minor doc/comment tweaks 2018-11-09 01:40:12 -05:00
Tyler Goodlet aa8238d5e0 Revert allowing multiple stream handlers; clutters test output 2018-11-09 01:35:51 -05:00
Tyler Goodlet 109b5971ed Don't overload `func` arg 2018-09-21 10:11:27 -04:00
Tyler Goodlet 2973d7f1de Await async funcs properly in `LocalPortal.run()` 2018-09-21 00:31:30 -04:00
Tyler Goodlet 65beb2d84e Top level actor must have a `main()` now 2018-09-14 16:34:13 -04:00
Tyler Goodlet 716a44b6b8 Better document `run_daemon()` 2018-09-14 16:33:45 -04:00
Tyler Goodlet 827a6c6014 Make `rpc_modules` a positional arg to `tractor.run_daemon()` 2018-09-10 22:31:23 -04:00
Tyler Goodlet d808ffd8f3 `Logger.warn()` is deprecated 2018-09-10 15:19:49 -04:00
Tyler Goodlet ee7959cb55 Fix same named actor race
When an actor has already been registered with the arbiter it should
exist in the registry and thus the wait event should have been removed.
Check that the registry indeed holds an event before clearing it.
2018-09-08 09:40:35 -04:00
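A sketch of the race fix, with module-level dicts standing in for the arbiter's state (names are illustrative):

```python
_events = {}    # name -> event objects created by waiting callers
_registry = {}  # name -> (host, port)


def register_actor(name, sockaddr):
    _registry[name] = sockaddr
    # a previous registration under this name may already have
    # consumed the wait event, so check before clearing it
    event = _events.pop(name, None)
    if event is not None:
        event.set()
```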
Tyler Goodlet 6b8393a4d6 Add `tractor.run_daemon()` for running a main rpc daemon 2018-09-08 09:39:53 -04:00
Tyler Goodlet 0ca668453c Running without a main func is a type error 2018-09-05 18:13:23 -04:00
Tyler Goodlet 438a79707f Couple more type tweaks 2018-09-01 14:43:48 -04:00
Tyler Goodlet 086df43b59 Woot! mypy run is clean! 2018-08-31 17:16:24 -04:00
Tyler Goodlet 18c55e2b5f Type ignore `colorlog` 2018-08-26 13:12:59 -04:00
Tyler Goodlet 11cbf9ea55 Use proper `typing` annotations 2018-08-26 13:12:29 -04:00
Tyler Goodlet c3eee1f228 Move "treat_as_gen" detection into `_invoke()` 2018-08-22 11:51:22 -04:00
Tyler Goodlet b0ceb308ba Add type annotations to most functions
This is purely for documentation purposes for now as it should be
obvious a bunch of the signatures aren't using the correct "generics"
syntax (i.e. the use of `(str, int)` instead of `typing.Tuple[str, int]`)
in a bunch of places. We're also not using a type checker yet and besides,
`trio` doesn't really expose a lot of its internal types very well.
2018-08-22 11:50:45 -04:00
Tyler Goodlet 996ad891f4 py3.6 is missing this attr 2018-08-19 16:11:57 -04:00
Tyler Goodlet 328e5bd597 Import our `forkserver.main()` in server cmd
Something changed in 3.7 (likely to do with changes to the core
import system) that requires explicitly importing our version
of `forkserver.main()` in order to guarantee the server runs our
module code. Override `forkserver.ensure_running()`; specifically,
modify the python launch command.
2018-08-19 15:37:01 -04:00
Tyler Goodlet 3202462cd5 Attach remote internal errors to channels
This ensures that internal errors received from a remote actor are
indeed raised even in the `MainProcess` **before** comms tasks are
cancelled. Internal error in this case means any error packet received
on a channel that doesn't have a `cid` header. RPC errors (which **do**
have a `cid` header) are still forwarded to the consuming caller as usual.
2018-08-17 14:49:17 -04:00
Tyler Goodlet 901f99bbec Throw internal errors into the main coroutine
If an internal error is bubbled up from some sub-actor throw that error
into the `MainProcess` "main" async function / coro in order to trigger
nursery teardowns (i.e. cancellations) that need to be done.

I'll likely change this shortly back to where we run a "main task"
inside `actor._async_main()`...
2018-08-16 00:22:16 -04:00
Tyler Goodlet f8111e51cd Maybe wait for actor result(s) after proc join 2018-08-16 00:21:49 -04:00
Tyler Goodlet d4da80c558 Store remote errors on each portal 2018-08-16 00:21:00 -04:00
Tyler Goodlet 73e8aac36c Always allow and enable rpc prior to task start 2018-08-15 01:24:06 -04:00
Tyler Goodlet 09e3a94060 Cancel result waiter once proc terminates 2018-08-15 01:24:06 -04:00
Tyler Goodlet 3f0c644768 Add `tractor.wait_for_actor()` helper
Allows for waiting on another actor (by name) to register with the
arbiter. This makes synchronized actor spawning and consecutive task
coordination easier to accomplish from within sub-actors.

Resolves #31
2018-08-12 23:59:19 -04:00
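A minimal sketch of the waiter side, the flip side of the registration sketch shown earlier (dict names remain illustrative):

```python
import trio

_events = {}    # name -> trio.Event, set once the name registers
_registry = {}  # name -> (host, port), filled in by register_actor()


async def wait_for_actor(name):
    """Park until some actor registers under ``name`` (sketch)."""
    if name not in _registry:
        event = _events.setdefault(name, trio.Event())
        await event.wait()
    return _registry[name]
```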
Tyler Goodlet 1bd5582d8a Register each actor using its unique ID tuple
This allows for registering more than one actor with the same "name"
when you have multiple actors fulfilling the same role. Eventually
we'll need support for looking up all actors registered under a given
"service name" (or whatever we decide to call it).

Also, a fix to the arbiter such that each new instance refers to a
separate `_registry` dict (found an issue with duplicate names during
testing).

Resolves #7
2018-08-07 14:03:01 -04:00
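A sketch of both fixes, an instance-local registry keyed by the `(name, uuid)` tuple (illustrative code, not tractor's exact `Arbiter`):

```python
class Arbiter:
    def __init__(self):
        # per-instance registry; a class-level dict would be shared
        # across instances (the duplicate-name bug mentioned above)
        self._registry = {}

    def register_actor(self, uid, sockaddr):
        # ``uid`` is the actor's unique ``(name, uuid)`` tuple
        self._registry[uid] = sockaddr

    def find_actor(self, name):
        # first match wins for now; "service name" lookups returning
        # all matches could come later
        for (aname, _uuid), sockaddr in self._registry.items():
            if aname == name:
                return sockaddr
```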
Tyler Goodlet 758fbc6790 Drop `Channel.aiter_recv()`
Internalize the implementation of this and expect client code to
iterate the `Channel` directly.

Resolves #16
2018-08-07 14:03:01 -04:00
Tyler Goodlet bd14cbe082 Port to trio's new resource error 2018-08-07 14:03:01 -04:00
Tyler Goodlet 163f747afb Drop legacy write_unsigned() 2018-08-07 14:02:42 -04:00
Tyler Goodlet e7c7391497 Drop needless tuple unpack 2018-08-04 18:15:24 -04:00
Tyler Goodlet 4fc7edf466 Change class names, use errno constant 2018-08-04 18:15:24 -04:00
Tyler Goodlet 4b875f0ade Be more explicit with naming and stdlib override 2018-08-04 18:15:24 -04:00
Tyler Goodlet 7017f68503 3.6.3 fix - missing older attr 2018-08-04 18:15:24 -04:00
Tyler Goodlet 50517c9488 Manage a `multiprocessing.forkserver` manually
Start a forkserver once in the main (parent-most) process
and pass ipc info (fds) to subprocesses manually such that embedded
calls to `multiprocessing.Process.start()` just work. Note that this
relies on our overridden version of the stdlib's
`multiprocessing.forkserver` module.

Resolves #6
2018-08-04 18:15:24 -04:00
Tyler Goodlet f46d5b2b62 Hackery to override the stdlib's forkserver
The stdlib insists on creating multiple forkservers and semaphore trackers
for each sub-sub-process launched. This isn't ideal since it costs each
`tractor` sub-actor two more processes than necessary and is
confusing when viewed as a process tree (eg. via `pstree`).

The majority of the change is simply avoiding the call to
`forkserver.ensure_running()` and `semaphore_tracker.ensure_running()`
in `ForkServer.connect_new_process()` and instead treating the user like
an adult and expecting those calls to be made *once* in the parent most
process (i.e. what `multiprocessing` calls the `MainProcess`).

Really a proper patch should be made against cpython which allows for
similar manual management of the server along with a mechanism to communicate
forkserver and semaphore tracker fd info to sub-processes such that
further calls to `Process.start()` work as expected.

Relates to #6
2018-08-04 18:15:24 -04:00
Tyler Goodlet d6d7fea708 Use plain func for `__aiter__()` 2018-08-04 18:15:24 -04:00
Tyler Goodlet f7706074a2 Drop needless if check 2018-08-04 18:10:31 -04:00
Tyler Goodlet 1da69b1396 Allow daemonizing top level actor; don't require main func 2018-08-03 09:41:18 -04:00
Tyler Goodlet bb13b79df5 Drop the "main" task via kwarg idea
Stop worrying about a "main task" in each actor and instead add an
additional `ActorNursery.run_in_actor()` method which wraps calls
to create an actor and run a lone RPC task inside it. Note this
adjusts the public API of `ActorNursery.start_actor()` to drop
its `main` kwarg.

The dirty deats of making this possible:
- each spawned RPC task is now tracked with a specific cancel scope such
  that when the actor is cancelled all ongoing responders are cancelled
  before any IPC/channel machinery is closed (turns out that spawning
  new actors from `outlive_main=True` actors was probably borked before
  finally getting this working).
- make each initial RPC response be a packet which describes the
  `functype` (eg. `{'functype': 'asyncfunction'}`) allowing for async
  calls/submissions by client actors (this was required to make
  `run_in_actor()` work - `Portal._submit()` is the new async method).
- hooray we can stop faking "main task" results for daemon actors
- add better handling/raising of internal errors caught in the bowels of
  the `Actor` itself.
- drop the rpc spawning nursery; just use the `Actor._root_nursery`
- only wait on `_no_more_peers` if there are existing peer channels that
  are actually still connected.
- `ActorNursery.__aexit__()` now implicitly waits on `Portal.result()` on close
  for each `run_in_actor()` spawned actor.
- handle cancelling partially started actors which haven't yet connected
  back to the parent

Resolves #24
2018-08-02 15:24:28 -04:00
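A hedged sketch of the new method's likely shape; `_result_portals` is invented here and `Portal._submit()`'s exact signature is an assumption:

```python
async def run_in_actor(self, name, fn, **kwargs):
    # spawn an actor exposing the module that defines ``fn``...
    portal = await self.start_actor(
        name,
        rpc_module_paths=[fn.__module__],
    )
    # ...then submit the lone rpc task; the first response packet
    # describes the functype, eg. {'functype': 'asyncfunction'}
    await portal._submit(fn.__module__, fn.__name__, kwargs)
    # __aexit__ waits on Portal.result() for actors spawned this way
    self._result_portals.add(portal)
    return portal
```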
Tyler Goodlet 9571f60a6d Expose channel in public api 2018-07-17 11:57:27 -04:00
Tyler Goodlet 64cbb922dc Reorg everything into private modules 2018-07-14 16:09:05 -04:00
Tyler Goodlet 1f85f71534 Use `async_generator`'s `aclosing()` helper
Take @njsmith's advice and properly close actor invoked async generators
using `async_generator.aclosing()` instead of hacking it (as previous)
with a shielded cancel scope.
2018-07-13 22:18:08 -04:00
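Usage sketch of the helper (the `consume`/`handle` names are illustrative):

```python
from async_generator import aclosing


async def consume(agen, handle):
    # ``aclose()`` is guaranteed to run on the gen, even if this
    # consumer task is cancelled mid-iteration
    async with aclosing(agen) as agen:
        async for item in agen:
            await handle(item)
```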
Tyler Goodlet 2b7bbf32a1 One more super subtle cancellation fix
See python-trio/trio#455 for the deats...
2018-07-13 22:17:33 -04:00
Tyler Goodlet d9aa6119e1 Set cancelled state in cancel method 2018-07-11 22:24:14 -04:00
Tyler Goodlet 1ade5c5fbb Add one-cancels-all strategy to actor nursery 2018-07-11 22:24:06 -04:00
Tyler Goodlet 25852794a8 Move chan connect helper to ipc mod 2018-07-11 22:20:13 -04:00
Tyler Goodlet 209a6a2096 Add a separate cancel scope for the main task
Cancellation requires that each actor cancel its spawned subactors
before cancelling its own root (nursery's) cancel scope to avoid breaking
channel connections before kill commands (`Actor.cancel()`) have been sent
off to peers. To solve this, ensure each main task is cancelled to
completion first (which will guarantee that all actor nurseries have
completed their cancellation steps) before cancelling the actor's "core"
tasks under the "root" scope.
2018-07-11 22:20:13 -04:00
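A sketch of the ordering this commit enforces; every attribute name below is invented for illustration:

```python
async def cancel(actor):
    # cancel the "main task" first and wait for it to finish: its
    # completion implies every actor nursery inside it has already
    # cancelled (and sent kill commands to) its subactors...
    actor._main_scope.cancel()
    await actor._main_complete.wait()
    # ...only then tear down the "core" tasks (channel server, msg
    # loops) under the root scope, so peer channels stay up long
    # enough to deliver the Actor.cancel() requests
    actor._root_scope.cancel()
```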
Tyler Goodlet 49573c9a03 More fixes to do cancellation correctly
Here is a bunch of code tightening to make sure cancellation works even
if recently spawned actors haven't fully started up and the parent is
cancelled.

The fixes include:
- passing the arbiter socket address to each actor
- ensure all spawned actors respect the spawner's log level
- handle process versus final `portal.result()` teardown in multiple
  tasks such that if a proc dies before shipping a result we don't wait
- more detailed debug logging in teardown code paths
- don't store peer connected events in the same `dict` as the peer channels
- if necessary fake main task results on peer channel disconnect
- warn when a `trio.Cancelled` is what causes a nursery to bail
  otherwise error
- store the subactor portal in the nursery for teardown purposes
- add dedicated `Portal.cancel_actor()` which acts as a "hard cancel"
  and never blocks (indefinitely)
- add `Arbiter.unregister_actor()`; it's more explicit about what's
  being requested
2018-07-11 22:20:13 -04:00
Tyler Goodlet 77e34049b8 More fixes after unit testing
- Allow passing in a program-wide `loglevel`
- Add detailed debug logging particularly to do with channel msg processing
  and connection handling
- Don't daemonize subprocesses for now as it prevents use of
  sub-sub-actors (need to solve #6 first)
- Add a `Portal.close()` which just tells the remote actor to tear down
  the channel (for now)
- Add a message to signal the remote `StopAsyncIteration` from an async
  gen such that the client side terminates properly as well
- Make `Actor.cancel()` cancel the channel server first
- Actors *must* complete the arbiter registration steps before moving
  on with their main tasks and rpc handling
- When delivering rpc responses (using the local per caller queue) use
  the blocking interface (`trio.Queue.put()`) to get backpressure
- Properly detect `partial`-wrapped async generators in `_invoke`
2018-07-11 22:20:13 -04:00
Tyler Goodlet 36fd75e217 Fix some bugs to get tests working
Fix quite a few little bugs:
- async gen func detection in `_invoke()`
- always cancel channel server on main task exit
- wait for remaining channel peers after unsub from arbiter
- return result from main task(s) all the way up to `tractor.run()`

Also add a `Portal.result()` for getting the final result(s) from the
actor's main task and fix up a bunch of docs.
2018-07-11 22:20:13 -04:00
Tyler Goodlet 6163d4e9ea Don't create formatter if no log level set 2018-07-10 17:28:29 -04:00
Tyler Goodlet c85752abd9 Steal piker's logging setup 2018-07-05 19:51:32 -04:00
Tyler Goodlet 8df706e535 Rename package dir to tractor 2018-07-05 19:40:36 -04:00