Commit Graph

267 Commits (7f8c5cdfe6279f89997552c95a7c20021515f3f8)

Author SHA1 Message Date
Tyler Goodlet 9067bb2a41 Shorten arbiter contact timeout 2020-10-05 11:58:58 -04:00
Tyler Goodlet 29ed065dc4 Ack our inability to hard kill sub-procs 2020-09-28 13:56:42 -04:00
Tyler Goodlet fc2cb610b9 Make "hard kill" just a `Process.terminate()`
It's not like any of this code is really being used anyway since we
aren't indefinitely blocking for cancelled subactors to terminate (yet).
Drop the `do_hard_kill()` bit for now and just rely on the underlying
process api. Oh, and mark the nursery as cancelled asap.
2020-09-28 13:49:45 -04:00
Tyler Goodlet 5dd2d35fc5 Huh, maybe we don't need to block SIGINT
Seems like the request task cancel scope is actually solving all the
deadlock issues and masking SIGINT isn't changing much behaviour at all.
I think let's keep it unmasked for now in case it does turn out useful
in cancelling from unrecoverable states while in debug.
2020-09-28 13:11:22 -04:00
Tyler Goodlet 25e93925b0 Add a cancel scope around child debugger requests
This is needed in order to avoid the deadlock condition where
a child actor is waiting on the root actor's tty lock but its parent
(possibly the root) is waiting on it to terminate after sending a cancel
request. The solution is simple: create a cancel scope around the
request in the child and always cancel it when a cancel request from the
parent arrives.
2020-09-28 13:02:33 -04:00
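A minimal sketch of the pattern this commit describes, assuming hypothetical names (`wait_for_tty_lock()` and friends are illustrative stand-ins, not tractor's real internals):

```python
from typing import Optional
import trio

_debugger_request_cs: Optional[trio.CancelScope] = None  # actor-local state

async def request_root_tty_lock():
    # wrap the (potentially never-granted) lock request so a cancel
    # request from the parent can always break the deadlock
    global _debugger_request_cs
    with trio.CancelScope() as cs:
        _debugger_request_cs = cs
        await wait_for_tty_lock()

async def wait_for_tty_lock():
    await trio.sleep_forever()  # stand-in for the real IPC round trip

async def handle_parent_cancel_request():
    if _debugger_request_cs is not None:
        _debugger_request_cs.cancel()  # always cancel an in-flight request
```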
Tyler Goodlet 363498b882 Disable SIGINT handling in child processes
There seems to be no good reason not to since our cancellation
machinery/protocol should do this work when the root receives the
signal. This also (hopefully) helps with some debugging race condition
stuff.
2020-09-28 09:24:36 -04:00
Tyler Goodlet f1b242f913 Block SIGINT handling while in the debugger
This seems to prevent a certain class of bugs to do with the root actor
cancelling local tasks and getting into deadlock while children are
trying to acquire the tty lock. I'm not sure it's the best idea yet
since you're pretty much guaranteed to get "stuck" if a child activates
the debugger after the root has been cancelled (at least "stuck" in
terms of SIGINT being ignored). That kinda race condition seems to still
exist somehow: a child can "beat" the root to activating the tty lock
and the parent is stuck waiting on the child to terminate via its
nursery.
2020-09-28 08:54:21 -04:00
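For reference, a small sketch of the masking approach being trialed here, using only the stdlib `signal` module (later commits unmask again):

```python
import contextlib
import pdb
import signal

@contextlib.contextmanager
def sigint_blocked():
    # handlers may only be (re)set from the main thread
    prior = signal.signal(signal.SIGINT, signal.SIG_IGN)
    try:
        yield
    finally:
        signal.signal(signal.SIGINT, prior)

def debug_here():
    with sigint_blocked():
        pdb.set_trace()  # ctrl-c is ignored until the REPL exits
```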
Tyler Goodlet 76e1c83161 Add matrix room link 2020-09-24 11:12:45 -04:00
Tyler Goodlet 9e1d9a8ce1 Add an internal context stack
This aids with tearing down resources **after** the crash handling and
debugger have completed. Leaving this internal for now but should
eventually get a public convenience function like
`tractor.context_stack()`.
2020-09-24 10:12:33 -04:00
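A plausible shape for such a stack, assuming a per-actor `contextlib.AsyncExitStack`; the class and attribute names below are illustrative and `tractor.context_stack()` is only the commit's suggested future name:

```python
from contextlib import AsyncExitStack

class Actor:
    def __init__(self) -> None:
        # resources entered here are torn down *after* crash handling
        # and the debugger have completed
        self._lifetime_stack = AsyncExitStack()

    async def _async_main(self) -> None:
        try:
            ...  # run the actor; crash handling / debugger live in here
        finally:
            await self._lifetime_stack.aclose()  # unwind user resources last
```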
Tyler Goodlet 09daba4c9c Explicitly handle `debug_mode` flag correctly 2020-09-24 10:12:33 -04:00
Tyler Goodlet 8b6e9f5530 Port to new debug api, set `_is_root` state flag on startup 2020-09-24 10:12:33 -04:00
Tyler Goodlet 150179bfe4 Support entering post mortem on crashes in root actor 2020-09-24 10:12:33 -04:00
Tyler Goodlet 291ecec070 Maybe not sticky by default 2020-09-24 10:12:33 -04:00
Tyler Goodlet bd157e05ef Port to service nursery 2020-09-24 10:12:33 -04:00
Tyler Goodlet fd5fb9241a Sparsen some lines 2020-09-24 10:12:33 -04:00
Tyler Goodlet ebb21b9ba3 Support re-entrant breakpoints
Keep an actor-local (bool) flag which determines if there is already
a running debugger instance for the current process. If another task
tries to enter in this case, simply ignore it, since allowing entry may
result in a deadlock where the new task will be sync waiting on the
parent stdio lock (a release that will never arrive given the current
debugger's active use of it).

In the future we may want to allow FIFO queueing of local tasks where
instead of ignoring re-entrant breakpoints we allow tasks to async wait
for debugger release, though not sure the implications of that since
you'd likely want to support switching the debugger to the new task and
that could cause deadlocks where tasks are inter-dependent. It may be
more sane to just error on multiple breakpoint requests within an actor.
2020-09-24 10:12:33 -04:00
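In miniature, the guard might look like this (a hedged sketch; `enter_debugger()` is a placeholder for the real tty-lock dance):

```python
_debugger_active: bool = False  # one flag per actor/process

async def maybe_enter_debugger():
    global _debugger_active
    if _debugger_active:
        # another task already holds the parent's stdio lock; entering
        # again would wait on a release that never comes
        return
    _debugger_active = True
    try:
        await enter_debugger()
    finally:
        _debugger_active = False

async def enter_debugger():
    ...  # acquire the root tty lock, run the REPL, release
```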
Tyler Goodlet f9ef3fc5de Cleanups and more comments 2020-09-24 10:12:33 -04:00
Tyler Goodlet 68773d51fd Always expose the debug module 2020-09-24 10:12:33 -04:00
Tyler Goodlet abaa2f5da0 Drop unneeded `parent_chan_cs()` cancel call 2020-09-24 10:12:33 -04:00
Tyler Goodlet 8eb9a742dd Add multi-process debugging support using `pdbpp`
This is the first step in addressing #113 and the initial support
of #130. Basically this allows (sub)processes to engage the `pdbpp`
debug machinery which reads/writes the root actor's tty, but only in
a FIFO-semaphored way such that no two processes are using it
simultaneously. That means you can have multiple actors enter a trace or
crash and run the debugger in a sensible way without clobbering each
other's access to stdio. It required adding some "tear down hooks" to
a custom `pdbpp.Pdb` type such that we release a child's lock on the
parent on debugger exit (in this case when either of the "continue" or
"quit" commands are issued to the debugger console).

There's some code left commented in anticipation of full support for
issue #130 where we'll need to actually capture and feed stdin to the
target (remote) actor, which won't necessarily be running on the same
host.
2020-09-24 10:12:10 -04:00
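The teardown-hook idea in sketch form, using stdlib `pdb` as a stand-in for `pdbpp` (both expose `set_continue()`/`set_quit()` via `bdb.Bdb`); `release_tty_lock()` is hypothetical:

```python
import pdb

class TeardownPdb(pdb.Pdb):
    """On session exit ('continue' or 'quit'), release the tty lock
    held in the parent/root actor."""
    def set_continue(self):
        try:
            super().set_continue()
        finally:
            release_tty_lock()

    def set_quit(self):
        try:
            super().set_quit()
        finally:
            release_tty_lock()

def release_tty_lock():
    ...  # hypothetical: notify the root this child is done with stdio
```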
Tyler Goodlet b06d4b023e Add support for "debug mode"
When enabled a crashed actor will connect to the parent with `pdb`
in post mortem mode.
2020-09-24 10:12:10 -04:00
Tyler Goodlet b11e91375c Initial attempt at multi-actor debugging
Allow entering and attaching to a `pdb` instance in a child process.
The current hackery is to have the child make an rpc to the parent and
ask it to hijack stdin; once complete the child enters a blocking `pdb`
method. The parent then relays all stdin input to the child, thus
controlling the "remote" debugger.

A few things were added to accomplish this:
- tracking the mapping of subactors to their parent nurseries
- in the root actor, cancelling all nurseries under the root `trio` task
  on cancellation (i.e. `Actor.cancel()`)
- pass a "runtime vars" map down the actor tree for propagating global state
2020-09-24 10:12:10 -04:00
Tyler Goodlet 8c97f7bbb3 Create runtime variables 2020-09-24 10:12:10 -04:00
Tyler Goodlet ec5d443ee5 Always log actor errors 2020-08-13 11:55:22 -04:00
Tyler Goodlet 1ae0efb033 Make rpc_module_paths a list 2020-08-13 11:53:45 -04:00
Tyler Goodlet 8a995beb6a Docs fixes 2020-08-08 22:29:57 -04:00
Tyler Goodlet 292513b353 Module define default accept addr 2020-08-08 20:58:04 -04:00
Tyler Goodlet b3eba00c3a Appease the great mypy 2020-08-08 20:57:43 -04:00
Tyler Goodlet 42be410076 Handle mp accept_addr 2020-08-08 20:27:43 -04:00
Tyler Goodlet 8477d21499 Restructure actor runtime nursery scoping
In an effort to acquire more deterministic actor cancellation,
this adds a clearer and more resilient (whilst possibly a bit
slower) internal nursery structure with explicit semantics for
clarifying the task-scope shutdown sequence.

Namely, on cancellation, the explicit steps are now:
- cancel all currently running rpc tasks and wait
  for them to complete
- cancel the channel server and wait for it to complete
- cancel the msg loop for the channel with the immediate parent
- de-register with arbiter if possible
- wait on remaining connections to release
- exit process

To accomplish this add a new nursery called the "service nursery" which
spawns all rpc tasks **instead of using** the "root nursery". The root
is now used solely for async launching the msg loop for the primary
channel with the parent such that it is (nearly) the last thing torn
down on cancellation.

In the future it should also be possible to have `self.cancel()` return
a result to the parent once the runtime is sure that the rest of the
shutdown is atomic; this would allow for a true unbounded shield in
`Portal.cancel_actor()`. This will likely require that the error
handling blocks in `Actor._async_main()` are moved "inside" the root
nursery block such that the msg loop with the parent truly is the last
thing to terminate.
2020-08-08 14:55:41 -04:00
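A toy model of the two-nursery layout and its shutdown ordering (illustrative only, not tractor's code):

```python
import trio

async def parent_msg_loop():
    await trio.sleep_forever()  # cancelled last, by the root scope

async def serve_one_rpc():
    await trio.sleep(0.1)  # stand-in for a real rpc task

async def _async_main():
    async with trio.open_nursery() as root_nursery:
        # msg loop with the immediate parent: (nearly) last thing torn down
        root_nursery.start_soon(parent_msg_loop)
        async with trio.open_nursery() as service_nursery:
            # all rpc tasks and the channel server spawn here
            service_nursery.start_soon(serve_one_rpc)
        # leaving the inner block means every rpc task has completed;
        # only now is the parent channel's msg loop cancelled
        root_nursery.cancel_scope.cancel()

trio.run(_async_main)
```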
Tyler Goodlet 90c7fa6963 Allow shielding in `open_portal()` 2020-08-08 14:47:52 -04:00
Tyler Goodlet 532429aec9 Harden `trio` spawner process waiting
Always shield waiting for the process and always run
``trio.Process.__aexit__()`` on teardown. This enforces
that shutdown happens due to cancellation triggered inside
the sub-actor instead of the process being killed externally
by the parent.
2020-08-08 14:43:25 -04:00
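Roughly, the hardened wait looks like this (a sketch; the 2020-era `trio` also offered `Process.__aexit__()` for the kill-and-reap step):

```python
import trio

async def reap(proc: trio.Process):
    try:
        with trio.CancelScope(shield=True):
            # cancellation must not abandon a live subprocess; the child
            # should exit via the actor cancellation protocol instead
            await proc.wait()
    finally:
        if proc.returncode is None:
            proc.kill()  # last-resort teardown
            with trio.CancelScope(shield=True):
                await proc.wait()
```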
Tyler Goodlet fe45d99f65 Allow opening a portal through an existing channel 2020-08-07 12:02:06 -04:00
Tyler Goodlet ae8488a578 Always shield de-register step with arbiter 2020-08-07 11:36:26 -04:00
Tyler Goodlet 09ae51900d Better clarify uid comment 2020-08-04 09:52:49 -04:00
Tyler Goodlet 4f92cfe74f Don't `.aclose` `trio` processes until the very end
Trio will kill subprocesses via `Process.__aexit__()` using a `finally:`
block (which, yes, will get triggered on cancellation) so we avoid that
until true process "tear down" since subactors do many things during
graceful shutdown (such as de-registering from the name discovery
system). Oddly this only seems to be an issue during cancellation of
infinite stream consumption.

Resolves #141
2020-08-03 18:57:00 -04:00
Tyler Goodlet ae9016c06a Log on KBI cancelled termination 2020-08-03 18:46:18 -04:00
Tyler Goodlet a24c6bfdd2 Correctly catch cancelled nursery case (purely for logging) 2020-08-03 18:44:50 -04:00
Tyler Goodlet 56b81f07e5 Return `Dict[Tuple, Tuple]` from `.get_registry()` 2020-08-03 18:42:23 -04:00
Tyler Goodlet fbd68d2d91 Allow for tuple keys with std `msgpack` 2020-08-03 18:41:21 -04:00
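The likely mechanism, assuming the `msgpack` package: tuples pack as arrays, and unpacking with `use_list=False` returns them as hashable tuples, so they survive as dict keys (newer msgpack versions also need `strict_map_key=False`):

```python
import msgpack

# e.g. a registry keyed by actor uid tuples, as in `.get_registry()`
registry = {("actor", "uid123"): ("127.0.0.1", 1616)}

wire = msgpack.packb(registry)  # tuple keys serialize as arrays
back = msgpack.unpackb(wire, use_list=False, strict_map_key=False)
assert back == registry  # arrays round-trip as hashable tuples
```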
Tyler Goodlet 639299e6eb Expose a `.get_registry()` method on the arbiter 2020-08-03 15:40:41 -04:00
Guillermo Rodriguez 3e29fcf1ea Docstring to the top!, and redundant spaces goodbye! 2020-07-29 15:39:38 -03:00
Tyler Goodlet 9a40291d4a Repair startup sequence around parent state transfer
In order to have reliable subactor startup we need the following
sequence to take place:
- connect to the parent actor, handshake and receive runtime state
- load exposed modules into memory
- start the channel server up fully using the provided bind address
- finally, start processing new messages from the parent

Add a bunch more comments to clarify all this.
2020-07-28 22:25:22 -04:00
Guillermo Rodriguez 0a5691e0a8 Removed the arbiter_addr local; bind_addr is now passed through the channel in early child actor init. 2020-07-28 11:55:11 -03:00
Guillermo Rodriguez ef053eb070 Added named arguments to child init, and now passing fewer of them. 2020-07-27 21:05:00 -03:00
Guillermo Rodriguez e5dbf14ec3 Only await params in trio mode 2020-07-27 15:20:55 -03:00
Guillermo Rodriguez 2a407be532 Now passing additional initialization parameters through channel early after handshake. 2020-07-27 14:55:37 -03:00
Tyler Goodlet 3c7ec72f8e Fix SIGINT test names 2020-07-26 23:37:44 -04:00
Tyler Goodlet dddbeb0e71 Run Windows on trio and mp backends
The new pure trio spawning backend uses `subprocess` internally, which is
also supported on Windows, so let's run it in CI.
2020-07-25 13:41:48 -04:00
Tyler Goodlet 7c3928f0bf Oh mypy.. 2020-07-24 17:31:24 -04:00
Tyler Goodlet d3acb8d061 Wait on proc before killing stdio 2020-07-24 17:08:52 -04:00
Tyler Goodlet efde3a5773 Simplify the `_child.py` script
We don't really need stdin for anything but passing the entry point and
detaching it seemed to just cause errors on cancellation teardown.
2020-07-24 17:08:52 -04:00
Tyler Goodlet aa620fe61d Use `trio.Process.__aexit__()` and pass the actor uid
Using the context manager interface does some extra teardown beyond simply
calling `.wait()`. Pass the subactor's "uid" on the exec line for
debugging purposes when monitoring the process tree from the OS.
Hard code the child script module path to avoid a double import warning.
2020-07-24 17:08:52 -04:00
Tyler Goodlet 4516febe26 Make sure to wait trio processes on teardown 2020-07-24 17:08:52 -04:00
Tyler Goodlet 0b305fd78a Change spawn method name in `Actor.load_modules()` 2020-07-24 17:08:52 -04:00
Tyler Goodlet 0936bdc592 Add back subactor logging 2020-07-24 17:08:52 -04:00
Guillermo Rodriguez 56463a08df First attempt at removing trip & updating hazmat -> lowlevel 2020-07-24 17:08:52 -04:00
Tyler Goodlet 7c73775474 Force keyword only args in actor spawn methods 2020-07-24 17:06:43 -04:00
Tyler Goodlet 8fbdfd6a3a Add an obnoxious error message on internal failures 2020-07-24 17:06:23 -04:00
Tyler Goodlet 1706791313 Drop entrypoints from `Actor` 2020-07-24 17:04:22 -04:00
Tyler Goodlet 8e32199509 Get entry points reorg without asyncio compat
This is an edit to factor out changes needed for the `asyncio` in guest mode
integration (which currently isn't tested well) so that later more pertinent
changes (which are tested well) can be rebased off of this branch and
merged into mainline sooner. The *infect_asyncio* branch will need to be
rebased onto this branch as well before merge to mainline.
2020-07-24 17:02:03 -04:00
Tyler Goodlet 8054bc7c70 Support "infected asyncio" actors
This is an initial solution for #120.

Allow spawning `asyncio` based actors which run `trio` in guest
mode. This enables spawning `tractor` actors on top of the `asyncio`
event loop whilst still leveraging the SC focused internal actor
supervision machinery. Add a `tractor.to_asyncio.run()` api to allow
spawning tasks on the `asyncio` loop from an embedded (remote) `trio`
task and return or stream results all the way back through the `tractor`
IPC system using a very similar api to portals.

One outstanding problem is getting SC around calls to
`asyncio.create_task()`. Currently a task that crashes isn't able to
easily relay the error to the embedded `trio` task without us fully
enforcing the portals based message protocol (which seems superfluous
given the error ref is in process). Further experiments using `anyio`
task groups may alleviate this.
2020-07-24 16:48:06 -04:00
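The guest-mode mechanism at the heart of this, in a self-contained demo using `trio.lowlevel.start_guest_run()`:

```python
import asyncio
import trio

async def trio_main():
    await trio.sleep(0.1)
    return "trio ran as a guest inside the asyncio loop"

def main():
    loop = asyncio.new_event_loop()
    done = loop.create_future()

    # run the trio scheduler on top of the host asyncio loop
    trio.lowlevel.start_guest_run(
        trio_main,
        run_sync_soon_threadsafe=loop.call_soon_threadsafe,
        done_callback=lambda outcome: done.set_result(outcome.unwrap()),
    )
    try:
        print(loop.run_until_complete(done))
    finally:
        loop.close()

main()
```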
Tyler Goodlet 30f8dd8be4 Pass a `Channel` to `LocalPortal` for compat purposes 2020-02-09 01:59:39 -05:00
Tyler Goodlet 596aca8097 Alias __mp_main__ at import time 2020-02-09 01:07:14 -05:00
Tyler Goodlet 00fc734580 Fix missing `_ctx` define when on Windows 2020-02-07 20:01:41 -05:00
Tyler Goodlet e671cb4f3b Fixup _spawn.py comments to incorporate trip 2020-01-31 12:05:15 -05:00
Tyler Goodlet 8264b7d136 Drop old module loading from abspath cruft 2020-01-31 12:04:46 -05:00
Tyler Goodlet d64508e1a6 Add more detailed docs around nursery logic
The logic in the `ActorNursery` block is critical to cancellation
semantics and in particular, understanding how supervisor strategies are
invoked. Stick in a bunch of explanatory comments to clear up these
details and also prepare to introduce more supervisor strats besides
the current one-cancels-all approach.
2020-01-31 09:50:25 -05:00
Tyler Goodlet 6348121d23 Do __main__ fixups like ``multiprocessing`` does
Instead of hackery trying to map modules manually from the filesystem
let Python do all the work by simply copying what ``multiprocessing``
does to "fixup the __main__ module" in spawned subprocesses. The new
private module ``_mp_fixup_main.py`` is simply cherry picked code from
``multiprocessing.spawn`` which does just that. We only need these
"fixups" when using a backend other then ``multiprocessing``; for
now just when using ``trio_run_in_process``.
2020-01-29 21:14:48 -05:00
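For context, the cherry-picked logic condenses to roughly this (paraphrasing CPython's `multiprocessing.spawn._fixup_main_from_path()`):

```python
import runpy
import sys
import types

def fixup_main_from_path(main_path: str) -> None:
    # re-run the parent's main script under the alias __mp_main__ so
    # pickled references to __main__ attributes resolve in the child
    main_module = types.ModuleType("__mp_main__")
    contents = runpy.run_path(main_path, run_name="__mp_main__")
    main_module.__dict__.update(contents)
    sys.modules["__main__"] = sys.modules["__mp_main__"] = main_module
```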
Tyler Goodlet 2a4307975d Fix that thing where the first example in your docs is supposed to work
Thanks to @salotz for pointing out that the first example in the docs
was broken. Though it's somewhat embarrassing this might also explain
the problem in #79 and certain issues in #59...

The solution here is to import the target RPC module using its unique
basename and absolute filepath in the sub-actor that requires it.
Special handling for `__main__` and `__mp_main__` is needed since the
spawned subprocess will have no knowledge about these
parent-state-specific module variables. Solution: map the module's name
to the respective module file's basename in the child process, since
the module variables will of course have different values in children.
2020-01-29 12:16:14 -05:00
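A hedged sketch of that import scheme using stdlib `importlib` (the helper name is illustrative):

```python
import importlib.util
import os
import sys

def load_module_from_path(filepath: str):
    # register under the plain basename so both parent and child agree
    # on the module's name regardless of differing __main__ state
    name = os.path.splitext(os.path.basename(filepath))[0]
    spec = importlib.util.spec_from_file_location(name, filepath)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    spec.loader.exec_module(module)
    return module
```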
Tyler Goodlet 43cca122f5 Handle windows in `@tractor_test` as well 2020-01-26 23:44:47 -05:00
Tyler Goodlet b4cb7439a1 Drop useless fork error branch 2020-01-26 22:46:48 -05:00
Tyler Goodlet e57811a602 Fork isn't present on windows... 2020-01-26 22:35:42 -05:00
Tyler Goodlet ecced3d09a Allow choosing the spawn backend per test session
Add a `--spawn-backend` option which can be set to one of {'mp',
'trio_run_in_process'} which will either run the test suite using the
`multiprocessing` or `trio-run-in-process` backend respectively.
Currently trying to run both in the same session can result in hangs
seemingly due to a lack of cleanup of forkservers / resource trackers
from `multiprocessing` which cause broken pipe errors on occasion (no
idea on the details).

For `test_cancellation.py::test_nested_multierrors`, use less nesting
when mp is used since it breaks if we push it too hard with the
whole recursive subprocess spawning thing...
2020-01-26 21:36:08 -05:00
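A conftest.py sketch of such an option (the flag name is the commit's; the fixture wiring is illustrative):

```python
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--spawn-backend",
        default="trio_run_in_process",
        choices=["mp", "trio_run_in_process"],
        help="which subprocess spawning backend the suite should use",
    )

@pytest.fixture
def spawn_backend(request):
    return request.config.getoption("--spawn-backend")
```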
Tyler Goodlet 27c9760f96 Be explicit about the spawning backend default
Set `trio-run-in-process` as the default on *nix systems and
`multiprocessing`'s spawn method on Windows. Enable overriding the
default choice using `tractor._spawn.try_set_start_method()`. Allows
for easy runs of the test suite using a user chosen backend.
2020-01-26 21:13:29 -05:00
Tyler Goodlet bc259b7eab Use trip as default in all tests for now 2020-01-24 00:54:19 -05:00
Tyler Goodlet d9803ca906 Be explicit with the real name for trip 2020-01-24 00:47:01 -05:00
Tyler Goodlet 4837595e36 Fake out mypy again 2020-01-23 01:32:02 -05:00
Tyler Goodlet 4c5a60d06a Don't import trip on Windows 2020-01-23 01:23:26 -05:00
Tyler Goodlet ddbf55768f Try out trip as the default spawn_method on unix for now 2020-01-23 01:15:46 -05:00
Tyler Goodlet 4b0554b61f Type checker fixes 2020-01-21 10:28:32 -05:00
Tyler Goodlet 6c45416016 Drop ActorNursery.wait(); it's no longer necessary really 2020-01-21 10:27:53 -05:00
Tyler Goodlet c074aea030 Support TRIP for process launching
This took a ton of tinkering and a rework of the actor nursery tear down
logic. The main changes include:

- each subprocess is now spawned from inside a trio task
from one of two containing nurseries created in the body of
`tractor.open_nursery()`: one for `run_in_actor()` processes and one for
`start_actor()` "daemons". This is to address the need for
`trio_run_in_process.open_in_process()` opening a nursery which must
be closed from the same task that opened it. Using this same approach
for `multiprocessing` seems to work well. The nurseries are waited in
order (`run_in_actor()` actors then daemon actors) during tear down,
which allows for avoiding the recursive re-entry of
`ActorNursery.wait()` handled prior.

- pull out all the nested functions / closures that were in
`ActorNursery.wait()` and move into the `_spawn` module such that
that process shutdown logic takes place in each containing task's
code path. This allows for vastly simplifying `.wait()` to just contain an
event trigger which initiates process waiting / result collection.
Likely `.wait()` should just be removed since it can no longer be used
to synchronously wait on the actor nursery.

- drop `ActorNursery.__aenter__()` / `.__aexit__()` and move this
"supervisor" tear down logic into the closing block of `open_nursery()`.
This not only makes the code more comprehensible, it also
makes our nursery implementation look more like the one in `trio`.

Resolves #93
2020-01-21 10:27:53 -05:00
Tyler Goodlet 91c3716968 Do module abspath loading in actor init 2020-01-21 10:27:53 -05:00
Tyler Goodlet afa640dcab More trip WIP stuff working.. kinda
Get a few more things working:
- fail reliably when remote module loading goes awry
- do a real hacky job of module loading using `sys.path` stuffsies
- we're still totally borked when trying to spin up and quickly cancel
a bunch of subactors...

It's a small move forward I guess.
2020-01-21 10:27:53 -05:00
Tyler Goodlet 1b7cdfe512 WIP trying out trio_run_in_process 2020-01-21 10:27:53 -05:00
Tyler Goodlet 698951c515 More mypy appeasement on 3.7 2020-01-15 21:06:13 -05:00
Tyler Goodlet e2c9477122 Allow overriding the root logger name
Handy if other dependent projects want to use the logging system but
also want to slap their own root "branding" onto the record prefix.
2019-12-20 16:37:17 -05:00
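One way such an override can look (an assumed signature, not necessarily tractor's exact helper):

```python
import logging

def get_logger(name: str, _root_name: str = "tractor") -> logging.Logger:
    # dependents pass their own root name to re-brand record prefixes
    return logging.getLogger(f"{_root_name}.{name}")

# a downstream project swaps in its own branding:
log = get_logger("spawning", _root_name="myproj")
```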
Tyler Goodlet 79c152fe38 Make latest mypy happy 2019-12-10 00:55:03 -05:00
Tyler Goodlet 14bfef0df7 Update types for log adapter 2019-12-09 22:10:15 -05:00
Tyler Goodlet cf73283586 Make info object a mapping type
Make the info object a `Mapping` to play nicer with static type
checking. Simplify the task or actor context method lookup using a dict.
2019-12-09 00:03:22 -05:00
Tyler Goodlet 52efbfc2cd Log task and actor names where possible
Prepend the actor and task names in each log emission. This makes
debugging much more sane since you can see from which process and
running task the log message originates from!

Resolves #13
2019-12-01 23:26:25 -05:00
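A sketch of how those prefixes can be injected with a stdlib `logging.LoggerAdapter` (illustrative, not tractor's actual adapter; the actor name shown is made up):

```python
import logging
import trio

class ActorTaskAdapter(logging.LoggerAdapter):
    """Prepend actor and trio task names to every record."""
    def process(self, msg, kwargs):
        try:
            task = trio.lowlevel.current_task().name
        except RuntimeError:  # not inside a trio run
            task = "<no task>"
        actor = self.extra.get("actor_name", "<root>")
        return f"{actor} {task}: {msg}", kwargs

log = ActorTaskAdapter(logging.getLogger("tractor"), {"actor_name": "example"})
```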
Tyler Goodlet d2a01e8b81 Drop use of `trio.Event.clear()`
Just spin up new events instead; because apparently they're
so cheap (rolls eyes).

Resolves #78
2019-11-23 11:29:23 -05:00
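The replacement idiom in miniature:

```python
import trio

class Gate:
    def __init__(self):
        self._event = trio.Event()

    def open(self):
        self._event.set()

    def reset(self):
        # no more .clear(): just swap in a fresh Event
        self._event = trio.Event()

    async def wait(self):
        await self._event.wait()
```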
Tyler Goodlet f977d37cee Add nursery self-destruct logic on cancel failure
If a nursery fails to cancel (some sub-actors presumably) then hard kill
the whole process tree to avoid hangs during a catastrophic failure.
This logic may get factored out (and changed) as we introduce custom
supervisor strategies.
2019-11-22 17:11:48 -05:00
Tyler Goodlet 5e056bae71 Expose trio exceptions to `RemoteActorError` 2019-10-30 00:32:10 -04:00
Tyler Goodlet 95e8f3d306 Propagate `trio.MultiError`s up the actor tree
`trio.MultiError` isn't an `Exception` (derived instead from
`BaseException`) so we have to specially catch it in the task
invocation machinery and ship it upwards (like regular errors)
since nurseries running in sub-actors can raise them.
2019-10-28 00:47:06 -04:00
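A hedged sketch of the special-casing (using the `trio.MultiError` of this era, since superseded by `ExceptionGroup`); `ship_error_to_parent()` is a placeholder:

```python
import trio

async def invoke(fn, *args):
    try:
        return await fn(*args)
    except trio.MultiError as err:
        # BaseException-derived: a bare `except Exception` misses it
        await ship_error_to_parent(err)
    except Exception as err:
        await ship_error_to_parent(err)

async def ship_error_to_parent(err: BaseException):
    ...  # serialize and relay over the channel to the parent actor
```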
Tyler Goodlet da4796749f Continue hacking the forkserver in Python 3.8
They got all fancy and added shared memory segment tracking and then
had to "generalize" the tracker name...hooray

Fixes #81
2019-10-15 22:37:47 -04:00
Tyler Goodlet 7da95a806d Rename override module 2019-10-14 12:58:10 -04:00
Tyler Goodlet f885b02c73 Validate stream functions at decorate time 2019-03-29 19:10:32 -04:00
Tyler Goodlet 5c0ae47cf5 Fix type annotation 2019-03-26 08:03:12 -04:00