Commit Graph

137 Commits (f1a96c168074842cd14d92e687a155ceb6d81f50)

Author SHA1 Message Date
Tyler Goodlet 4b0554b61f Type checker fixes 2020-01-21 10:28:32 -05:00
Tyler Goodlet 6c45416016 Drop ActorNursery.wait(); it's no longer really necessary 2020-01-21 10:27:53 -05:00
Tyler Goodlet c074aea030 Support TRIP for process launching
This took a ton of tinkering and a rework of the actor nursery tear down
logic. The main changes include:

- each subprocess is now spawned from inside a trio task
from one of two containing nurseries created in the body of
`tractor.open_nursery()`: one for `run_in_actor()` processes and one for
`start_actor()` "daemons". This is to address the need for
`trio_run_in_process.open_in_process()` opening a nursery which must
be closed from the same task that opened it. Using this same approach
for `multiprocessing` seems to work well. The nurseries are waited in
order (`run_in_actor()` actors then daemon actors) during tear down,
which avoids the recursive re-entry of `ActorNursery.wait()` handled
prior.

- pull out all the nested functions / closures that were in
`ActorNursery.wait()` and move into the `_spawn` module such that
that process shutdown logic takes place in each containing task's
code path. This allows for vastly simplifying `.wait()` to just contain an
event trigger which initiates process waiting / result collection.
Likely `.wait()` should just be removed since it can no longer be used
to synchronously wait on the actor nursery.

- drop `ActorNursery.__aenter__()` / `.__atexit__()` and move this
"supervisor" tear down logic into the closing block of `open_nursery()`.
This not only makes the code more comprehensible, it also
makes our nursery implementation look more like the one in `trio`.

Resolves #93
2020-01-21 10:27:53 -05:00
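A rough sketch of the two-nursery layout and teardown ordering described in the commit above; the names and structure here are illustrative, not the actual implementation:

```python
import trio
from contextlib import asynccontextmanager

@asynccontextmanager
async def open_nursery():
    # Sketch: one nursery for run_in_actor() processes, one for start_actor()
    # "daemons".  Each subprocess is spawned from a task inside one of these,
    # so every open_in_process() nursery is closed by the task that opened it.
    async with trio.open_nursery() as da_nursery:        # start_actor() daemons
        async with trio.open_nursery() as ria_nursery:   # run_in_actor() procs
            yield ria_nursery, da_nursery
            # leaving this block waits on run_in_actor() tasks first...
        # ...then the daemon nursery is torn down here, in order
        da_nursery.cancel_scope.cancel()
```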
Tyler Goodlet 91c3716968 Do module abspath loading in actor init 2020-01-21 10:27:53 -05:00
Tyler Goodlet afa640dcab More trip WIP stuff working.. kinda
Get a few more things working:
- fail reliably when remote module loading goes awry
- do a real hacky job of module loading using `sys.path` stuffsies
- we're still totally borked when trying to spin up and quickly cancel
a bunch of subactors...

It's a small move forward I guess.
2020-01-21 10:27:53 -05:00
Tyler Goodlet 1b7cdfe512 WIP trying out trio_run_in_process 2020-01-21 10:27:53 -05:00
Tyler Goodlet 698951c515 More mypy appeasement on 3.7 2020-01-15 21:06:13 -05:00
Tyler Goodlet e2c9477122 Allow overriding the root logger name
Handy if other dependent projects want to use the logging system but
also want to slap their own root "branding" onto the record prefix.
2019-12-20 16:37:17 -05:00
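A minimal stdlib-only sketch of the idea; the `get_logger()` helper and its `_root_name` keyword are assumed here, not necessarily the project's actual signature:

```python
import logging

def get_logger(name: str = '', _root_name: str = 'tractor') -> logging.Logger:
    # Dependent projects can pass their own root "branding" so records are
    # prefixed e.g. "myproj.spawn" instead of "tractor.spawn".
    return logging.getLogger(f'{_root_name}.{name}' if name else _root_name)

log = get_logger('spawn', _root_name='myproj')   # logger named "myproj.spawn"
```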
Tyler Goodlet 79c152fe38 Make latest mypy happy 2019-12-10 00:55:03 -05:00
Tyler Goodlet 14bfef0df7 Update types for log adapter 2019-12-09 22:10:15 -05:00
Tyler Goodlet cf73283586 Make info object a mapping type
Make the info object a `Mapping` to play nicer with static type
checking. Simplify the task or actor context method lookup using a dict.
2019-12-09 00:03:22 -05:00
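A rough sketch of a `Mapping`-style info object with a dict-based lookup; `trio.lowlevel.current_task()` supplies the task name while the actor name is a stand-in:

```python
from collections.abc import Mapping
import trio

class ActorContextInfo(Mapping):
    # Read-only mapping over the context keys; plays nicely with type checkers.
    _keys = ('task', 'actor')

    def __len__(self) -> int:
        return len(self._keys)

    def __iter__(self):
        return iter(self._keys)

    def __getitem__(self, key: str) -> str:
        # dict-based lookup instead of an if/else chain
        lookup = {
            'task': lambda: trio.lowlevel.current_task().name,
            'actor': lambda: 'root',  # stand-in for the current actor's name
        }
        return lookup[key]()
```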
Tyler Goodlet 52efbfc2cd Log task and actor names where possible
Prepend the actor and task names in each log emission. This makes
debugging much more sane since you can see which process and
running task each log message originates from!

Resolves #13
2019-12-01 23:26:25 -05:00
Tyler Goodlet d2a01e8b81 Drop use of `trio.Event.clear()`
Just spin up new events instead, because apparently they're
so cheap (rolls eyes).

Resolves #78
2019-11-23 11:29:23 -05:00
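For illustration, the replacement pattern is simply to rebind a fresh `trio.Event` rather than reusing one:

```python
import trio

done = trio.Event()
done.set()

# trio dropped Event.clear(), so "resetting" is just creating a new instance:
done = trio.Event()
```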
Tyler Goodlet f977d37cee Add nursery self-destruct logic on cancel failure
If a nursery fails to cancel (some sub-actors presumably) then hard kill
the whole process tree to avoid hangs during a catastrophic failure.
This logic may get factored out (and changed) as we introduce custom
supervisor strategies.
2019-11-22 17:11:48 -05:00
Tyler Goodlet 5e056bae71 Expose trio exceptions to `RemoteActorError` 2019-10-30 00:32:10 -04:00
Tyler Goodlet 95e8f3d306 Propagate `trio.MultiError`s up the actor tree
`trio.MultiError` isn't an `Exception` (derived instead from
`BaseException`) so we have to specially catch it in the task
invocation machinery and ship it upwards (like regular errors)
since nurseries running in sub-actors can raise them.
2019-10-28 00:47:06 -04:00
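A minimal sketch of that special-casing, with a print standing in for shipping the error over the channel:

```python
import trio

async def _invoke(func, *args):
    # trio.MultiError derives from BaseException, so a bare
    # "except Exception" would let it escape the invocation machinery.
    try:
        return await func(*args)
    except (Exception, trio.MultiError) as err:
        print(f"shipping error upwards: {err!r}")  # stand-in for the channel send
        raise
```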
Tyler Goodlet da4796749f Continue hacking the forkserver in Python 3.8
They got all fancy and added shared memory segment tracking and then
had to "generalize" the tracker name...hooray

Fixes #81
2019-10-15 22:37:47 -04:00
Tyler Goodlet 7da95a806d Rename override module 2019-10-14 12:58:10 -04:00
Tyler Goodlet f885b02c73 Validate stream functions at decorate time 2019-03-29 19:10:32 -04:00
Tyler Goodlet 5c0ae47cf5 Fix type annotation 2019-03-26 08:03:12 -04:00
Tyler Goodlet e51f84af90 Require explicit marking of non async gen streaming funcs
Add `@tractor.stream` which must be used to denote non async generator
streaming functions which use the `tractor.Context` API to push values.
This enforces a more explicit denotation as well as allows enforcing the
declaration of the `ctx` argument in definitions.
2019-03-25 21:36:13 -04:00
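A hedged usage sketch of the decorator described above; the exact `Context` push method name (`send_yield()` here) is an assumption:

```python
import tractor

@tractor.stream
async def stream_squares(ctx, limit: int) -> None:
    # Not an async generator: values are pushed via the (required) ctx arg.
    for i in range(limit):
        await ctx.send_yield(i ** 2)   # assumed Context push API
```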
Tyler Goodlet 4ee35038fb Move discovery functions to their own module 2019-03-24 11:37:11 -04:00
Tyler Goodlet 2aa6ffce60 Provide each task's cancel scope to every `Context`
This begins moving toward explicitly decorated "streaming functions"
instead of checking for a `ctx` arg in the signature.

- provide each context with its task's top level `trio.CancelScope`
  such that tasks can cancel themselves explicitly if needed via calling
  `Context.cancel_scope()`
- make `Actor.cancel_task()` a private method (`_cancel_task()`) and
  handle remote rpc calls specially such that the caller does not need
  to provide the `chan` argument; non-primitive types can't be passed on
  the wire and we don't want to require that the client actor know about
  the channel instance the request is associated with. This also ties into
  how we're tracking tasks right now (`Actor._rpc_tasks` is keyed by the
  call id, a UUID, *plus* the channel).
- make `_do_handshake` a private actor method
- use UUID version 4
2019-03-23 23:31:26 -04:00
Tyler Goodlet 4e078368fc Propagate `tractor.run()` logging level to subactors 2019-03-18 21:32:08 -04:00
Tyler Goodlet de8d69c58b Expose `Context` at top level 2019-03-15 19:40:34 -04:00
goodboy 29ffbfe6ca Merge pull request #63 from chrizzFTD/update_tests_for_windows
Update tests for windows
2019-03-14 21:06:37 -04:00
Christian López Barrón b992dc19e3 Move the name assert in try_set_start_method() to after it is auto-set 2019-03-13 21:32:45 +11:00
Tyler Goodlet 63d067792c Rename `StreamQueue` to `MsgpackStream`
Prepares for other possible interchange formats plus it wasn't really
a queue, just a TCP stream wrapper + `msgpack` interchange.
2019-03-12 01:22:46 -04:00
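A rough sketch of what such a wrapper looks like (illustrative names, not the real implementation):

```python
import msgpack
import trio

class MsgpackStream:
    """Minimal msgpack-over-TCP wrapper: not a queue, just framing + interchange."""

    def __init__(self, stream: trio.SocketStream) -> None:
        self.stream = stream
        self._unpacker = msgpack.Unpacker(raw=False)

    async def send(self, data) -> None:
        await self.stream.send_all(msgpack.packb(data))

    async def recv(self):
        while True:
            for msg in self._unpacker:       # drain any already-buffered messages
                return msg
            chunk = await self.stream.receive_some(2 ** 10)
            if not chunk:
                raise trio.ClosedResourceError("peer closed the connection")
            self._unpacker.feed(chunk)
```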
Tyler Goodlet b70f4eafcb Flip tests to use `start_method` kwarg 2019-03-08 20:06:16 -05:00
Tyler Goodlet c3daf73112 Document the mp start method more explicitly 2019-03-08 20:01:42 -05:00
Tyler Goodlet dc5cc040e6 Try to support waiting on Windows processes
This pokes around a little in `trio` hazmat but it *should
work* as it piggybacks on the new cross-platform subprocess support.

Relates to #59
2019-03-06 21:24:23 -05:00
Tyler Goodlet 483ae42a46 Add a `spawn_method` dynamic fixture 2019-03-06 00:36:37 -05:00
Tyler Goodlet 7014a07986 Add "spawn" start method support
Add full support for using the "spawn" process starting method as per:
https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods

Add a `spawn_method` argument to `tractor.run()` for specifying the
desired method explicitly. By default use the "fastest" method available.
On *nix systems this is the original "forkserver" method.

This should be the solution to getting windows support!

Resolves #60
2019-03-06 00:29:07 -05:00
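A hedged usage sketch based on the commit description above; the `spawn_method` spelling is taken from that description:

```python
import tractor

async def main():
    # ... spawn subactors here ...
    pass

if __name__ == '__main__':
    # Explicitly pick the stdlib "spawn" start method (needed on Windows);
    # leaving spawn_method out picks the fastest available method
    # (the original "forkserver" on *nix).
    tractor.run(main, spawn_method="spawn")
```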
Tyler Goodlet d75739e9c7 Factor process creation into a separate factory
Make a `_spawn` module for encapsulating all the `multiprocessing`
"spawn method" stuff and factor current forkserver steps into it.
2019-03-05 18:52:19 -05:00
Tyler Goodlet 78ddd33e3a Move to `trio.CancelScope` 2019-02-16 14:25:06 -05:00
Tyler Goodlet 02e0c0e1a4 `trio.ClosedResourceError` is deprecated 2019-02-16 14:05:24 -05:00
Tyler Goodlet fe1c4dbc4c mypy and docs fixups 2019-02-16 14:05:03 -05:00
Tyler Goodlet 616192d853 Don't use async gen functions for the stream API
As mentioned in prior commits, there's currently a bug in Python that
makes async gens **not** task safe. Since this is the core cause of almost
all recent problems, instead implement our own async iterator derivative of
`trio.abc.ReceiveChannel` by wrapping a `trio._channel.MemoryReceiveChannel`.
This fits more natively with the memory channel API in `trio` and adds
potentially more flexibility for possible bidirectional inter-actor streaming
in the future.

Huge thanks to @oremanj and of course @njsmith for guidance on this one!
2019-02-15 21:59:42 -05:00
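A self-contained sketch of the pattern: a `trio.abc.ReceiveChannel` subclass wrapping a memory receive channel (names here are illustrative):

```python
import trio

class WrappedReceiveChannel(trio.abc.ReceiveChannel):
    """Async-iterable stream backed by a memory receive channel."""

    def __init__(self, rx) -> None:
        self._rx = rx

    async def receive(self):
        # a real implementation might decode a wire message here
        return await self._rx.receive()

    async def aclose(self) -> None:
        await self._rx.aclose()

async def demo():
    tx, rx = trio.open_memory_channel(64)
    stream = WrappedReceiveChannel(rx)
    await tx.send('hello')
    await tx.aclose()
    async for msg in stream:   # __aiter__/__anext__ come from ReceiveChannel
        print(msg)

trio.run(demo)
```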
Tyler Goodlet b91d13cfea Use local actor var 2019-02-15 17:11:26 -05:00
Tyler Goodlet 61680b3729 Use a receive mem channel inside portals
For now stop `.aclose()`-ing all async gens on portal close since it can
cause hangs and other weird behaviour if another task operates on the
same instance.

See https://bugs.python.org/issue32526.
2019-02-15 16:27:18 -05:00
Tyler Goodlet f44ac4528a Use mem chan in actor core 2019-02-15 16:23:58 -05:00
Tyler Goodlet 3d0de25f93 Do proper `wrapt` arg extraction for type checking
Use an inner function / closure to properly process required arguments
at call time as is recommended in the `wrapt` docs. Do async gen and
arg introspection at decorate time and raise appropriate type errors.
2019-01-25 00:10:13 -05:00
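A minimal sketch of the decorate-time vs. call-time split using `wrapt`; the decorator name and the `ctx` requirement are hypothetical:

```python
import inspect
import wrapt

def typed_rpc(func):
    # decorate time: introspect once and raise type errors early
    sig = inspect.signature(func)
    if 'ctx' not in sig.parameters:
        raise TypeError(f"{func.__name__} must declare a `ctx` argument")

    @wrapt.decorator
    async def wrapper(wrapped, instance, args, kwargs):
        # call time: this inner closure sees the *actual* call arguments,
        # the approach recommended by the wrapt docs
        bound = sig.bind(*args, **kwargs)
        bound.apply_defaults()
        ctx = bound.arguments['ctx']   # real call-time value, not a guess
        print(f"calling {wrapped.__name__} with ctx={ctx!r}")
        return await wrapped(*args, **kwargs)

    return wrapper(func)
```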
Tyler Goodlet 1b405ab4fe s/tickers/topics 2019-01-23 22:35:59 -05:00
Tyler Goodlet 3b19e15306 Don't allow cancelling a cancel_task() task 2019-01-23 20:01:29 -05:00
Tyler Goodlet 855f959768 Don't log traceback on kb interrupt 2019-01-23 20:00:57 -05:00
Tyler Goodlet 9f41297298 Timeout on remote task cancellation
Turns out you get a bad situation if the target actor whose task you're
trying to cancel has already died (eg. from an external
`KeyboardInterrupt` or other error) and so we need to eventually bail on
the RPC request. Also don't bother closing the channel created in
`open_portal()` manually since the cancel scope should take care of all
that.
2019-01-23 19:20:13 -05:00
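A hedged sketch of the bail-out, with hypothetical portal and RPC names and an illustrative grace period:

```python
import trio

async def cancel_remote_task(portal, cid: str) -> bool:
    # If the target actor already died it will never ack the cancel request,
    # so give up after a grace period instead of hanging forever.
    with trio.move_on_after(3) as cs:          # 3s is illustrative
        await portal.cancel_task(cid)          # hypothetical RPC call
    if cs.cancelled_caught:
        print(f"timed out cancelling remote task {cid}")
        return False
    return True
```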
Tyler Goodlet 226312042a Fix type annots 2019-01-23 00:41:45 -05:00
Tyler Goodlet b6cc1e8c22 More pub decorator improvements
- when calling the async gen func provided by the user wrap it in
  `@async_generator.aclosing` to ensure correct teardown at cancel time
- expect the gen to yield a dict with topic keys and data values
- add a `packetizer` function argument to the api allowing a user
  to format the data to be published in whatever way desired
- support using the decorator without the parentheses (using default
  arguments)
- use a `wrapt` "adapter" to override the signature presented to the
  `_actor._invoke` inspection machinery
- handle the default case where `tasks` isn't provided; allow only one
  concurrent publisher task
- store task locks in an actor local variable
- add a comprehensive doc string
2019-01-21 09:21:12 -05:00
Tyler Goodlet 97f709cc14 Cancel remote streaming tasks on a local cancel
Use the new `Actor.cancel_task()` api to remotely cancel streaming
tasks spawned by a portal. This guarantees that if an actor is
cancelled all its (remote) portal spawned tasks will be as well.

On portal teardown only cancel all async
generator calls (though we should cancel all RPC requests in general
eventually) and don't close the channel since it may have been passed
in from some other context that wishes to keep it connected. In
`open_portal()` run the message loop shielded so that if the local
task is cancelled, messaging will continue until the internal scope
is cancelled at end of block.
2019-01-21 00:45:54 -05:00
Tyler Goodlet 03e00886da Add `Actor.cancel_task()`
Enable cancelling specific tasks from a peer actor such that when
an actor task or the actor itself is cancelled, remotely spawned tasks
can also be cancelled. In much the same way that you'd expect a node
(task) in the `trio` task tree to cancel any subtasks, actors should
be able to cancel any tasks they spawn in separate processes.

To enable this:
- track rpc tasks in a flat dict keyed by (chan, cid)
- store a `is_complete` event to enable waiting on specific
  tasks to complete
- allow for shielding the msg loop inside an internal cancel scope
  if requested by the caller; there was an issue with `open_portal()`
  where the channel would be torn down because the current task was
  cancelled but we still need messaging to continue until the portal
  block is exited
- throw an error if the arbiter tries to find itself for now
2019-01-21 00:38:07 -05:00
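A rough sketch of the flat (chan, cid)-keyed registry and per-task completion event described in the commit above (illustrative only):

```python
from typing import Dict, Tuple
import trio

class Actor:
    def __init__(self) -> None:
        # rpc tasks keyed by (channel, call id); one entry per in-flight request
        self._rpc_tasks: Dict[
            Tuple[object, str], Tuple[trio.CancelScope, trio.Event]
        ] = {}

    async def _cancel_task(self, chan, cid: str) -> None:
        scope, is_complete = self._rpc_tasks[(chan, cid)]
        scope.cancel()
        # wait on this *specific* task to actually complete before returning
        await is_complete.wait()
```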