forked from goodboy/tractor
Commit Graph

586 Commits (0c8f9dbce03472b8579f17732b43882e07e074cf)

Author SHA1 Message Date
Tyler Goodlet a49deb46f1 Revert "Make tests a package (for relative imports)"
This reverts commit 1710b642a5.
2020-10-13 14:42:16 -04:00
Tyler Goodlet 666966097a Revert "Change to relative conftest.py imports"
This reverts commit 2b53c74b1c.
2020-10-13 14:42:02 -04:00
Tyler Goodlet e3c26943ba Support debug mode only on the trio backend 2020-10-13 14:20:44 -04:00
Tyler Goodlet ba52de79e1 Skip quad ex on local mp tests as well 2020-10-13 14:20:19 -04:00
Tyler Goodlet 24ef919334 Skip sync sleep test on mp backend 2020-10-13 14:16:20 -04:00
Tyler Goodlet 08ff989631 Add some comments 2020-10-13 11:59:18 -04:00
Tyler Goodlet 573b8fef73 Add better actor cancellation tracking
Add `Actor._cancel_called` and `._cancel_complete` making it possible to
determine whether the actor has started the cancellation sequence and
whether that sequence has fully completed. This allows for blocking in
internal machinery tasks as necessary. Also, always trigger the end of
ongoing rpc tasks even if the last task errors; without this there's no
guarantee that trio's cancellation semantics will leave us with a nice
internal "state".
2020-10-13 11:48:52 -04:00
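For reference, a minimal sketch of the cancellation tracking described above; the attribute names come from the commit message, but the surrounding `Actor` code is illustrative only:

```python
import trio

class Actor:
    """Illustrative subset of an actor with cancellation tracking."""

    def __init__(self) -> None:
        # set as soon as the cancellation sequence starts
        self._cancel_called: bool = False
        # set once the cancellation sequence has fully completed, letting
        # internal machinery tasks block on it as needed
        self._cancel_complete = trio.Event()

    async def cancel(self) -> None:
        self._cancel_called = True
        try:
            pass  # cancel rpc tasks, close channels, tear down nurseries...
        finally:
            # always mark completion, even if the last task errored on the way out
            self._cancel_complete.set()
```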
Tyler Goodlet 0ce6d2b55c Add `pexpect` dep for debugger tests 2020-10-13 11:04:16 -04:00
Tyler Goodlet c375a2d028 mypy fixes 2020-10-13 11:03:55 -04:00
Tyler Goodlet 1710b642a5 Make tests a package (for relative imports) 2020-10-13 10:50:21 -04:00
Tyler Goodlet c41e5c8313 Fix missing await 2020-10-13 00:45:29 -04:00
Tyler Goodlet a88a6ba7a3 Add pattern matching to test 2020-10-13 00:36:34 -04:00
Tyler Goodlet 79c38b04e7 Report `trio.Cancelled` when exhausting portals..
For reliable remote cancellation we need to "report" `trio.Cancelled`s
(just like any other error) when exhausting a portal such that the
caller can make decisions about cancelling the respective actor if need
be.

Resolves #156
2020-10-12 23:28:36 -04:00
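Conceptually the change amounts to treating `trio.Cancelled` like any other error while draining a portal. A hedged sketch of the idea (not tractor's actual internals; names are illustrative):

```python
import trio

async def drain_portal_result(recv_chan, errors: list) -> None:
    """Drain a result stream, reporting *any* failure to the caller."""
    try:
        async for msg in recv_chan:
            pass  # process the final result / remaining messages
    except (trio.Cancelled, Exception) as err:
        # record the failure so the caller can decide whether to cancel
        # the respective remote actor...
        errors.append(err)
        # ...but never swallow `Cancelled`: trio requires it to propagate
        raise
```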
Tyler Goodlet 0e344eead8 Add a "cancel arrives during a sync sleep in child" test
This appears to demonstrate the same bug found in #156. It looks like
cancelling a subactor with a child, while that child is running sync code,
can result in the child never getting cancelled due to some strange
condition where the internal nurseries aren't being torn down as
expected when a `trio.Cancelled` is raised.
2020-10-12 23:25:22 -04:00
Tyler Goodlet acb4cb0b2b Add test showing issue with child in tty lock when cancelled 2020-10-07 06:08:31 -04:00
Tyler Goodlet 07112089d0 Add mention of subactor uid during locking 2020-10-07 05:53:26 -04:00
Tyler Goodlet abf8bb2813 Add a deep nested error propagation test 2020-10-06 09:21:53 -04:00
Tyler Goodlet 2b53c74b1c Change to relative conftest.py imports 2020-10-05 11:58:58 -04:00
Tyler Goodlet 371025947a Add a multi-subactor test where the root errors 2020-10-05 11:58:58 -04:00
Tyler Goodlet d43d367153 Facepalm: tty locking from root doesn't require an extra task 2020-10-05 11:58:58 -04:00
Tyler Goodlet 31c1a32d58 Add re-entrant root breakpoint test; demonstrates a bug.. 2020-10-05 11:58:58 -04:00
Tyler Goodlet 83a45119e9 Add "root mailbox" contact info passing
Every subactor in the tree now receives the socket (or whatever the
mailbox type ends up being) during startup and can call the new
`tractor._discovery.get_root()` function to get a portal to the current
root actor in their tree. The main reason for adding this atm is to
support nested child actors gaining access to the root's tty lock for
debugging.

Also, when a channel disconnects from a message loop, might as well kill
all its rpc tasks.
2020-10-05 11:58:58 -04:00
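A hedged usage sketch of the new hook: `get_root()` is named in the commit above, but the async-context-manager usage and the `portal.channel` attribute here are assumptions for illustration only.

```python
from tractor._discovery import get_root

async def contact_root() -> None:
    # every subactor now receives the root's "mailbox" (socket) address at
    # startup, so it can open a portal back to the root of its tree:
    async with get_root() as portal:
        # the portal can then be used to coordinate with the root actor,
        # e.g. to request its tty lock before entering the debugger
        print(f"connected to root over {portal.channel}")
```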
Tyler Goodlet e387e8b322 Add a multi-subactor test with nesting 2020-10-05 11:58:58 -04:00
Tyler Goodlet a2151cdd4d Allow re-entrant breakpoints during pdb stepping 2020-10-05 11:58:58 -04:00
Tyler Goodlet 73a32f7d9c Add initial subactor debug tests 2020-10-05 11:58:58 -04:00
Tyler Goodlet 9067bb2a41 Shorten arbiter contact timeout 2020-10-05 11:58:58 -04:00
Tyler Goodlet 0a2a94fee0 Add initial root actor debugger tests 2020-10-05 11:58:58 -04:00
Tyler Goodlet 29ed065dc4 Ack our inability to hard kill sub-procs 2020-09-28 13:56:42 -04:00
Tyler Goodlet fc2cb610b9 Make "hard kill" just a `Process.terminate()`
It's not like any of this code is really being used anyway since we
aren't indefinitely blocking for cancelled subactors to terminate (yet).
Drop the `do_hard_kill()` bit for now and just rely on the underlying
process api. Oh, and mark the nursery as cancelled asap.
2020-09-28 13:49:45 -04:00
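A sketch of the two points above, assuming a `trio.Process` handle; the nursery fragment and names are illustrative, not tractor's exact code:

```python
import trio

class ActorNursery:
    """Illustrative fragment only."""

    def __init__(self) -> None:
        self.cancelled: bool = False

    def hard_kill(self, proc: trio.Process) -> None:
        # mark the nursery as cancelled asap...
        self.cancelled = True
        # ...and "hard kill" is now just the underlying process api
        proc.terminate()
```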
Tyler Goodlet d7a472c7f2 Update our debugging example to wait on results 2020-09-28 13:13:53 -04:00
Tyler Goodlet 5dd2d35fc5 Huh, maybe we don't need to block SIGINT
Seems like the request task cancel scope is actually solving all the
deadlock issues and masking SIGINT isn't changing much behaviour at all.
I think let's keep it unmasked for now in case it does turn out useful
in cancelling from unrecoverable states while in debug.
2020-09-28 13:11:22 -04:00
Tyler Goodlet 25e93925b0 Add a cancel scope around child debugger requests
This is needed in order to avoid the deadlock condition where
a child actor is waiting on the root actor's tty lock but its parent
(possibly the root) is waiting on it to terminate after sending a cancel
request. The solution is simple: create a cancel scope around the
request in the child and always cancel it when a cancel request from the
parent arrives.
2020-09-28 13:02:33 -04:00
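The pattern is a plain `trio.CancelScope` wrapped around the lock request; a minimal sketch with illustrative names:

```python
from typing import Awaitable, Callable, Optional

import trio

class DebugRequest:
    """Child-side tty-lock request that a parent cancel can abort."""

    def __init__(self) -> None:
        self._scope: Optional[trio.CancelScope] = None

    async def request_tty_lock(self, acquire: Callable[[], Awaitable[None]]) -> None:
        with trio.CancelScope() as cs:
            self._scope = cs
            await acquire()  # waits until the root grants its tty lock

    def cancel(self) -> None:
        # invoked when a cancel request arrives from the parent; this
        # unblocks the waiting task instead of deadlocking the tree
        if self._scope is not None:
            self._scope.cancel()
```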
Tyler Goodlet 363498b882 Disable SIGINT handling in child processes
There seems to be no good reason not to, since our cancellation
machinery/protocol should do this work when the root receives the
signal. This also (hopefully) helps with some debugging race condition
stuff.
2020-09-28 09:24:36 -04:00
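The mechanism itself is just the stdlib signal API; a sketch of what a child process can do at startup:

```python
import signal

def ignore_sigint() -> None:
    # the root actor still receives ctrl-c; cancellation then propagates
    # through tractor's own machinery rather than raw signals in children
    signal.signal(signal.SIGINT, signal.SIG_IGN)
```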
Tyler Goodlet f1b242f913 Block SIGINT handling while in the debugger
This seems to prevent a certain class of bugs to do with the root actor
cancelling local tasks and getting into deadlock while children are
trying to acquire the tty lock. I'm not sure it's the best idea yet
since you're pretty much guaranteed to get "stuck" if a child activates
the debugger after the root has been cancelled (at least "stuck" in
terms of SIGINT being ignored). That kinda race condition seems to still
exist somehow: a child can "beat" the root to activating the tty lock
and the parent is stuck waiting on the child to terminate via its
nursery.
2020-09-28 08:54:21 -04:00
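A sketch of temporarily masking SIGINT around a debugger session and restoring the previous handler afterwards (illustrative only; as noted above this approach risks getting "stuck" after a root cancel):

```python
import signal
from contextlib import contextmanager

@contextmanager
def sigint_masked():
    prev = signal.signal(signal.SIGINT, signal.SIG_IGN)
    try:
        yield  # run the pdb session with ctrl-c ignored
    finally:
        signal.signal(signal.SIGINT, prev)
```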
goodboy ce5c52905d
Merge pull request #154 from goodboy/matrix
Add matrix room link
2020-09-24 13:05:35 -04:00
Tyler Goodlet 76e1c83161 Add matrix room link 2020-09-24 11:12:45 -04:00
Tyler Goodlet 9e1d9a8ce1 Add an internal context stack
This aids with tearing down resources **after** the crash handling and
debugger have completed. Leaving this internal for now but should
eventually get a public convenience function like
`tractor.context_stack()`.
2020-09-24 10:12:33 -04:00
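A sketch of what such an internal context stack can look like using the stdlib; the public `tractor.context_stack()` helper is only proposed above, and the attribute name here is illustrative:

```python
from contextlib import AsyncExitStack

class Actor:
    """Illustrative fragment only."""

    def __init__(self) -> None:
        # resources registered here are torn down *after* crash handling
        # and any debugger session have completed
        self._lifetime_stack = AsyncExitStack()

    async def _async_main(self) -> None:
        try:
            pass  # run the actor's message/rpc loop (and crash handling)
        finally:
            await self._lifetime_stack.aclose()
```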
Tyler Goodlet 09daba4c9c Explicitly handle `debug_mode` flag correctly 2020-09-24 10:12:33 -04:00
Tyler Goodlet 8b6e9f5530 Port to new debug api, set `_is_root` state flag on startup 2020-09-24 10:12:33 -04:00
Tyler Goodlet 150179bfe4 Support entering post mortem on crashes in root actor 2020-09-24 10:12:33 -04:00
Tyler Goodlet 291ecec070 Maybe not sticky by default 2020-09-24 10:12:33 -04:00
Tyler Goodlet bd157e05ef Port to service nursery 2020-09-24 10:12:33 -04:00
Tyler Goodlet fd5fb9241a Sparsen some lines 2020-09-24 10:12:33 -04:00
Tyler Goodlet ebb21b9ba3 Support re-entrant breakpoints
Keep an actor local (bool) flag which determines if there is already
a running debugger instance for the current process. If another task
tries to enter in this case, simply ignore it since allowing entry may
result in a deadlock where the new task will be sync waiting on the
parent stdio lock (which will never be released while the current
debugger is actively using it).

In the future we may want to allow FIFO queueing of local tasks where
instead of ignoring re-entrant breakpoints we allow tasks to async wait
for debugger release, though I'm not sure of the implications of that since
you'd likely want to support switching the debugger to the new task and
that could cause deadlocks where tasks are inter-dependent. It may be
more sane to just error on multiple breakpoint requests within an actor.
2020-09-24 10:12:33 -04:00
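A minimal sketch of the actor-local guard described above (illustrative names, not tractor's debug module as-is):

```python
from typing import Awaitable, Callable

# per-process flag: is a debugger instance already running in this actor?
_debugger_active: bool = False

async def maybe_enter_debugger(enter: Callable[[], Awaitable[None]]) -> None:
    global _debugger_active
    if _debugger_active:
        # another task already holds the debugger (and the parent stdio
        # lock); entering again could deadlock, so ignore this request
        return
    _debugger_active = True
    try:
        await enter()  # e.g. acquire the root tty lock and start pdb
    finally:
        _debugger_active = False
```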
Tyler Goodlet f9ef3fc5de Cleanups and more comments 2020-09-24 10:12:33 -04:00
Tyler Goodlet 68773d51fd Always expose the debug module 2020-09-24 10:12:33 -04:00
Tyler Goodlet abaa2f5da0 Drop unneeded `parent_chan_cs()` cancel call 2020-09-24 10:12:33 -04:00
Tyler Goodlet efd7095cf8 Add pdbpp as dep 2020-09-24 10:12:32 -04:00
Tyler Goodlet f7cd2be039 Play with re-entrant trace 2020-09-24 10:12:10 -04:00
Tyler Goodlet 8eb9a742dd Add multi-process debugging support using `pdbpp`
This is the first step in addressing #113 and the initial support
of #130. Basically this allows (sub)processes to engage the `pdbpp`
debug machinery which reads/writes the root actor's tty but only in
a FIFO semaphored way such that no two processes are using it
simultaneously. That means you can have multiple actors enter a trace or
crash and run the debugger in a sensible way without clobbering each
other's access to stdio. It required adding some "tear down hooks" to
a custom `pdbpp.Pdb` type such that we release a child's lock on the
parent on debugger exit (in this case when either of the "continue" or
"quit" commands are issued to the debugger console).

There's some code left commented in anticipation of full support for
issue #130 where we'll need to actually capture and feed stdin to the
target (remote) actor, which won't necessarily be running on the same
host.
2020-09-24 10:12:10 -04:00
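A simplified illustration of the "tear down hooks": override the continue/quit commands so the child's lock on the parent is released when the user leaves the debugger. The subclass here uses stdlib `pdb.Pdb` for brevity and a hypothetical `release_lock` callback; the commit's actual type wraps `pdbpp`.

```python
import pdb
from typing import Callable, Optional

class TeardownPdb(pdb.Pdb):
    """Release a held tty lock whenever the debugger session ends."""

    def __init__(self, *args, release_lock: Optional[Callable[[], None]] = None, **kwargs):
        super().__init__(*args, **kwargs)
        self._release_lock = release_lock

    def _teardown(self) -> None:
        # free the child's lock on the parent's tty
        if self._release_lock is not None:
            self._release_lock()

    def do_continue(self, arg):
        self._teardown()
        return super().do_continue(arg)

    do_c = do_cont = do_continue

    def do_quit(self, arg):
        self._teardown()
        return super().do_quit(arg)

    do_q = do_exit = do_quit
```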