Commit Graph

1020 Commits (2f854a3e86d898045c3bbf093e0df14e89a2b339)

Author SHA1 Message Date
Tyler Goodlet 983e66b31b Add second implicit-runtime-boot branch 2021-02-24 13:13:45 -05:00
Tyler Goodlet b285db4c58 Factor OCA supervisor into new func 2021-02-24 13:13:38 -05:00
Tyler Goodlet 5ffd2d2ab3 Ignore type checks on stdlib overrides 2021-02-21 14:08:23 -05:00
Tyler Goodlet 7888ef6f01 Fix more stdlib typing issues with latest mypy 2021-02-21 12:48:03 -05:00
Tyler Goodlet 109066dda9 Support sync code breakpointing via built-in
Override `breakpoint()` for sync code making it work
properly with `trio` as per:

https://github.com/python-trio/trio/issues/1155#issuecomment-742964018

Relates to #193
2021-02-21 12:36:00 -05:00
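
A rough sketch of what this enables, leaning on the stdlib's PEP 553 `PYTHONBREAKPOINT` hook; the dotted hook path below is an assumed name, not a confirmed tractor entry point:

    import os

    # route the built-in breakpoint() through an (assumed) tractor hook so
    # sync code running inside an actor drops into the shared pdb REPL;
    # the default sys.breakpointhook re-reads this env var on every call.
    os.environ['PYTHONBREAKPOINT'] = 'tractor._debug._set_trace'  # assumed path

    def crunch(x: int) -> int:
        breakpoint()  # now resolves via the hook above instead of plain pdb
        return x * 2
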
Tyler Goodlet 9f4e497b9c Don't shield proc waits 2021-01-14 18:21:26 -05:00
Tyler Goodlet e546ead2ff Pub sub internals type fixes 2021-01-14 18:20:59 -05:00
Tyler Goodlet 3df001f3a9 Fix msg pub global lock sharing
Using `None` as the default key for a `@msg.pub` can cause conflicts if
there is more than one "taskless" (no tasks={,} passed) pub offered on
an actor... So instead use the first trio "task name" (usually just the
function name), thus avoiding this very hard to debug and
understand problem.

Probably should throw in a test but I'm super lazy today.
2021-01-14 18:20:49 -05:00
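
The gist of the fix as a standalone sketch (the real `@msg.pub` lock bookkeeping is more involved):

    import trio

    def default_pub_key() -> str:
        # key the publisher lock by the calling trio task's name (usually
        # just the function name) rather than a shared `None`, so two
        # "taskless" publishers in one actor can't collide on the same entry
        return trio.lowlevel.current_task().name

    async def demo() -> None:
        print('lock key would be:', default_pub_key())

    trio.run(demo)
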
Tyler Goodlet 5ed5d18ccb Begin rpc_module_paths deprecation 2021-01-08 22:08:45 -05:00
Tyler Goodlet 32b10681a1 Drop tractor.run() from @tractor_test 2021-01-08 20:56:03 -05:00
Tyler Goodlet 41a4de5af2 Use actual task name lel 2021-01-08 20:55:42 -05:00
Tyler Goodlet 59421d9f3a Fix some borked tests 2021-01-08 20:55:11 -05:00
Tyler Goodlet 333ddcf93f Can we ever really appease mypy? 2021-01-03 11:18:31 -05:00
Tyler Goodlet 0bb2163b0c Implicitly open root actor on first nursery use. 2021-01-02 21:39:30 -05:00
Tyler Goodlet bd3059f01b Allow for error bypass 2021-01-02 21:39:30 -05:00
Tyler Goodlet 803152ead5 Use explicit named args 2021-01-02 21:39:30 -05:00
Tyler Goodlet e6245671b0 Use runtime level on attach 2021-01-02 21:38:55 -05:00
goodboy bfe500060f
Merge pull request #181 from goodboy/drop_tractor_run
Deprecate `tractor.run()`
2020-12-28 12:53:04 -05:00
Tyler Goodlet 723fb17394 Add deprecation warning to run() 2020-12-27 13:29:30 -05:00
Tyler Goodlet f05534e472 Re-org root actor startup into context manager
This begins the move to dropping support for `tractor.run()` which we
don't really need since the runtime is started (as it always has been)
from a new sub-task / nursery. Instead this introduces starting the
actor tree through a `open_root_actor()` async context manager which
we'll likely implicitly call (from the root) on the first use of an
actor nursery.

Drop `_actor._start_actor()` and factor its contents into this new api.
Make `run()` and `run_daemon()` use `open_root_actor()` until we decide
to remove them.

Relates to #168 and #177
2020-12-27 13:29:30 -05:00
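
Usage ends up looking roughly like this (a sketch; the keyword arguments accepted by `open_root_actor()` aren't shown and may differ):

    import trio
    import tractor

    async def main() -> None:
        # explicitly boot the runtime in the root process; later commits
        # make the first `open_nursery()` call do this implicitly
        async with tractor.open_root_actor():
            async with tractor.open_nursery() as n:
                ...  # spawn subactors here

    trio.run(main)
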
Tyler Goodlet b040cdc0c9 Add null byte guard from mainline 2020-12-27 13:28:54 -05:00
Tyler Goodlet 6b650c0fe6 Add a "runtime" log level 2020-12-26 15:45:45 -05:00
Tyler Goodlet 0d05a727b6 Use error log level by default 2020-12-25 15:28:32 -05:00
Tyler Goodlet c28ffd8b1c Don't exception log multi-cancels 2020-12-25 15:23:59 -05:00
Tyler Goodlet 5d7a4e2b12 Denoise some common teardown "errors" to warnings. 2020-12-25 15:10:20 -05:00
Tyler Goodlet 8522f90000 Add type annots to exceptions mod
Also add an `is_multi_cancelled()` predicate to test for
`trio.MultiError`s that contain entirely cancel signals.

Resolves #125
2020-12-25 15:07:36 -05:00
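
A predicate along those lines might look like the following (a sketch, not the exact implementation):

    import trio

    def is_multi_cancelled(exc: BaseException) -> bool:
        # True only when a MultiError contains nothing but cancel signals
        if isinstance(exc, trio.MultiError):
            return all(
                isinstance(e, trio.Cancelled) or is_multi_cancelled(e)
                for e in exc.exceptions
            )
        return False
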
Tyler Goodlet 4bf9b27f57 Drop all .statespace refs; it was a silly idea 2020-12-22 19:33:16 -05:00
Tyler Goodlet 9fd3c42eb1 Port inter-process method calls to `Portal.run_from_ns()` 2020-12-22 10:39:47 -05:00
Tyler Goodlet 7134f35d6e Add `Portal.run_from_ns()`
It turns out that in order to maintain our sneaky little "call an `Actor`
method in this remote process" trick we still need the ability to invoke
functions from a namespace. We're currently using a "self" namespace as
a way to do this for internal inter-process method calling.  Either way,
I see no reason not to keep a public method for this invoke style (we
just won't market it) since it is still how the machinery works
underneath.
2020-12-22 10:39:47 -05:00
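
For example, the internal "call a method on the remote `Actor`" style looks roughly like this (a sketch against the nursery API of the era):

    import trio
    import tractor

    async def main() -> None:
        async with tractor.open_nursery() as n:
            portal = await n.start_actor('worker')
            # roughly what Portal.cancel_actor() does under the hood:
            # invoke a method on the remote Actor instance via the
            # special 'self' namespace
            await portal.run_from_ns('self', 'cancel')

    trio.run(main)
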
Tyler Goodlet a668f714d5 Allow passing function refs to `Portal.run()`
This resolves and completes #69 allowing all RPC invocation APIs to pass
function references directly instead of explicit `str` names for the
target namespace and function (this is still done implicitly
underneath).  This brings us closer to `trio`'s task running API as well
as acknowledges that any inter-host RPC system (and API) will likely
need to be implemented on top of local RPC primitives anyway. Even if
this ends up **not** being true we can always go to "function stubs" as
part of our IAC protocol or, add a new method to do explicit namespace
calls: `.run_from_module()` or whatever everyone votes on.

Resolves #69

Further, this commit drops `Actor.statespace` from the entire system
since a user can easily get this same functionality using module
level variables. Fix docs to match all these changes (luckily mostly
already done due to example scripts referencing).
2020-12-21 09:09:55 -05:00
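
With this change a caller can hand over the function object itself (a sketch; `rpc_module_paths` is the module-enabling kwarg of this era):

    import trio
    import tractor

    async def add(x: int, y: int) -> int:
        return x + y

    async def main() -> None:
        async with tractor.open_nursery() as n:
            portal = await n.start_actor('adder', rpc_module_paths=[__name__])
            # pass the function reference; the namespace/function name
            # strings are still derived implicitly underneath
            assert await portal.run(add, x=1, y=2) == 3
            await portal.cancel_actor()

    trio.run(main)
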
Tyler Goodlet 0d67ce4abc Fix collections type import for py3.10 2020-12-18 17:58:07 -05:00
Tyler Goodlet 797bcc1df2 Handle early timeouts on last debugger test 2020-12-17 13:35:45 -05:00
Tyler Goodlet 201771a521 'Fix mypy, change internal type name to `ReceiveStream`, settle on `.shield()`' 2020-12-17 12:01:49 -05:00
Tyler Goodlet 15ead6b561 Add a way to shield a stream's underlying channel
Add a ``tractor._portal.StreamReceiveChannel.shield_channel()`` context
manager which allows for avoiding the closing of an IPC stream's
underlying channel for the purposes of task re-spawning. Sometimes you
might want to cancel a task consuming a stream but not tear down the IPC
between actors (the default). A common use case might be where the task's
"setup" work needs to be redone but you want to keep the
established portal / channel intact despite the task restart.

Includes a test.
2020-12-16 21:42:28 -05:00
Tyler Goodlet d497078eb7 Appease 3.8 mypy 2020-12-11 20:04:56 -05:00
Tyler Goodlet e51c2620e5 End the `pdb` SIGINT handling madness
Turns out this is a lower level issue in terms of the stdlib's default
`pdb.Pdb` settings and how they conflict with `trio`s cancellation and
KBI handling. The details are hashed out more thoroughly in
python-trio/trio#1155. Maybe we can get a fix in trio so things are
solved under our feet :)
2020-12-11 00:15:09 -05:00
Tyler Goodlet 12f425137c Drop duplicate project-package name in msg header 2020-11-03 12:15:49 -05:00
Tyler Goodlet 1580cc6fa0 Add explanation to module load error 2020-10-15 23:16:56 -04:00
Tyler Goodlet 5822d38ae4 Set _is_root runtime var in _main() 2020-10-15 23:16:54 -04:00
Tyler Goodlet 3b8684f655 Always call `Actor.cancel()` at end of root's main task
It's simpler and the only real logical difference is logging messages.
This should also give us an overall consistent tear down sequence.
2020-10-14 13:59:57 -04:00
Tyler Goodlet 02a9cac557 Drop remaining warn()s 2020-10-14 13:48:14 -04:00
Tyler Goodlet f60321a35a Always cancel service nursery last
The channel server should be torn down *before* the rpc
task/service nursery. Do this explicitly even in the root's main task
to avoid a strange hang I found in the pubsub tests. Start dropping
the `warnings.warn()` usage.
2020-10-14 13:46:05 -04:00
goodboy 7115d6c3bd
Merge pull request #129 from goodboy/multiproc_debug
Wen? Multiprocessing-native debugger now!
2020-10-14 09:14:03 -04:00
Tyler Goodlet e3c26943ba Support debug mode only on the trio backend 2020-10-13 14:20:44 -04:00
Tyler Goodlet 08ff989631 Add some comments 2020-10-13 11:59:18 -04:00
Tyler Goodlet 573b8fef73 Add better actor cancellation tracking
Add `Actor._cancel_called` and `._cancel_complete` making it possible to
determine whether the actor has started the cancellation sequence and
whether that sequence has fully completed. This allows for blocking in
internal machinery tasks as necessary. Also, always trigger the end of
ongoing rpc tasks even if the last task errors; there's no guarantee that
trio's cancellation semantics will give us a nice internal "state"
without this.
2020-10-13 11:48:52 -04:00
Tyler Goodlet c375a2d028 mypy fixes 2020-10-13 11:03:55 -04:00
Tyler Goodlet c41e5c8313 Fix missing await 2020-10-13 00:45:29 -04:00
Tyler Goodlet 79c38b04e7 Report `trio.Cancelled` when exhausting portals..
For reliable remote cancellation we need to "report" `trio.Cancelled`s
(just like any other error) when exhausting a portal such that the
caller can make decisions about cancelling the respective actor if need
be.

Resolves #156
2020-10-12 23:28:36 -04:00
Tyler Goodlet 07112089d0 Add mention of subactor uid during locking 2020-10-07 05:53:26 -04:00
Tyler Goodlet d43d367153 Facepalm: tty locking from root doesn't require an extra task 2020-10-05 11:58:58 -04:00
Tyler Goodlet 83a45119e9 Add "root mailbox" contact info passing
Every subactor in the tree now receives the socket (or whatever the
mailbox type ends up being) during startup and can call the new
`tractor._discovery.get_root()` function to get a portal to the current
root actor in their tree. The main reason for adding this atm is to
support nested child actors gaining access to the root's tty lock for
debugging.

Also, when a channel disconnects from a message loop, might as well kill
all its rpc tasks.
2020-10-05 11:58:58 -04:00
Tyler Goodlet a2151cdd4d Allow re-entrant breakpoints during pdb stepping 2020-10-05 11:58:58 -04:00
Tyler Goodlet 9067bb2a41 Shorten arbiter contact timeout 2020-10-05 11:58:58 -04:00
Tyler Goodlet 29ed065dc4 Ack our inability to hard kill sub-procs 2020-09-28 13:56:42 -04:00
Tyler Goodlet fc2cb610b9 Make "hard kill" just a `Process.terminate()`
It's not like any of this code is really being used anyway since we
aren't indefinitely blocking for cancelled subactors to terminate (yet).
Drop the `do_hard_kill()` bit for now and just rely on the underlying
process api. Oh, and mark the nursery as cancelled asap.
2020-09-28 13:49:45 -04:00
Tyler Goodlet 5dd2d35fc5 Huh, maybe we don't need to block SIGINT
Seems like the request task cancel scope is actually solving all the
deadlock issues and masking SIGINT isn't changing much behaviour at all.
I think let's keep it unmasked for now in case it does turn out useful
in cancelling from unrecoverable states while in debug.
2020-09-28 13:11:22 -04:00
Tyler Goodlet 25e93925b0 Add a cancel scope around child debugger requests
This is needed in order to avoid the deadlock condition where
a child actor is waiting on the root actor's tty lock but its parent
(possibly the root) is waiting on it to terminate after sending a cancel
request. The solution is simple: create a cancel scope around the
request in the child and always cancel it when a cancel request from the
parent arrives.
2020-09-28 13:02:33 -04:00
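
The pattern, reduced to a standalone trio sketch (the tty-lock request RPC is faked here):

    from typing import Optional

    import trio

    _debug_request_cs: Optional[trio.CancelScope] = None

    async def _request_root_tty_lock() -> None:
        # stand-in for the real RPC asking the root actor for the tty lock
        await trio.sleep_forever()

    async def acquire_debugger() -> None:
        global _debug_request_cs
        # wrap the request so it can always be aborted...
        with trio.CancelScope() as cs:
            _debug_request_cs = cs
            await _request_root_tty_lock()

    def on_parent_cancel_request() -> None:
        # ...when a cancel request arrives from the parent, breaking the
        # "child waits on lock / parent waits on child" cycle
        if _debug_request_cs is not None:
            _debug_request_cs.cancel()
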
Tyler Goodlet 363498b882 Disable SIGINT handling in child processes
There seems to be no good reason not to since our cancellation
machinery/protocol should do this work when the root receives the
signal. This also (hopefully) helps with some debugging race condition
stuff.
2020-09-28 09:24:36 -04:00
Tyler Goodlet f1b242f913 Block SIGINT handling while in the debugger
This seems to prevent a certain class of bugs to do with the root actor
cancelling local tasks and getting into deadlock while children are
trying to acquire the tty lock. I'm not sure it's the best idea yet
since you're pretty much guaranteed to get "stuck" if a child activates
the debugger after the root has been cancelled (at least "stuck" in
terms of SIGINT being ignored). That kinda race condition seems to still
exist somehow: a child can "beat" the root to activating the tty lock
and the parent is stuck waiting on the child to terminate via its
nursery.
2020-09-28 08:54:21 -04:00
Tyler Goodlet 76e1c83161 Add matrix room link 2020-09-24 11:12:45 -04:00
Tyler Goodlet 9e1d9a8ce1 Add an internal context stack
This aids with tearing down resources **after** the crash handling and
debugger have completed. Leaving this internal for now but should
eventually get a public convenience function like
`tractor.context_stack()`.
2020-09-24 10:12:33 -04:00
Tyler Goodlet 09daba4c9c Explicitly handle `debug_mode` flag correctly 2020-09-24 10:12:33 -04:00
Tyler Goodlet 8b6e9f5530 Port to new debug api, set `_is_root` state flag on startup 2020-09-24 10:12:33 -04:00
Tyler Goodlet 150179bfe4 Support entering post mortem on crashes in root actor 2020-09-24 10:12:33 -04:00
Tyler Goodlet 291ecec070 Maybe not sticky by default 2020-09-24 10:12:33 -04:00
Tyler Goodlet bd157e05ef Port to service nursery 2020-09-24 10:12:33 -04:00
Tyler Goodlet fd5fb9241a Sparsen some lines 2020-09-24 10:12:33 -04:00
Tyler Goodlet ebb21b9ba3 Support re-entrant breakpoints
Keep an actor local (bool) flag which determines if there is already
a running debugger instance for the current process. If another task
tries to enter in this case, simply ignore it since allowing entry may
result in a deadlock where the new task will be sync waiting on the
parent stdio lock (a case that will never arrive due to the current
debugger's active use of it).

In the future we may want to allow FIFO queueing of local tasks where
instead of ignoring re-entrant breakpoints we allow tasks to async wait
for debugger release, though not sure the implications of that since
you'd likely want to support switching the debugger to the new task and
that could cause deadlocks where tasks are inter-dependent. It may be
more sane to just error on multiple breakpoint requests within an actor.
2020-09-24 10:12:33 -04:00
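
The guard amounts to an actor-local flag, roughly (a sketch; the real entry routine also juggles the parent's stdio lock):

    import trio

    _debugger_active: bool = False  # one flag per process / actor

    async def _enter_debugger() -> None:
        # stand-in for the real tty-lock acquisition + pdb entry
        await trio.sleep(0)

    async def maybe_breakpoint() -> None:
        global _debugger_active
        if _debugger_active:
            # another local task already holds the debugger; entering now
            # would deadlock on the parent stdio lock, so just skip it
            return
        _debugger_active = True
        try:
            await _enter_debugger()
        finally:
            _debugger_active = False
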
Tyler Goodlet f9ef3fc5de Cleanups and more comments 2020-09-24 10:12:33 -04:00
Tyler Goodlet 68773d51fd Always expose the debug module 2020-09-24 10:12:33 -04:00
Tyler Goodlet abaa2f5da0 Drop unneeded `parent_chan_cs()` cancel call 2020-09-24 10:12:33 -04:00
Tyler Goodlet 8eb9a742dd Add multi-process debugging support using `pdbpp`
This is the first step in addressing #113 and the initial support
of #130. Basically this allows (sub)processes to engage the `pdbpp`
debug machinery which read/writes the root actor's tty but only in
a FIFO semaphored way such that no two processes are using it
simultaneously. That means you can have multiple actors enter a trace or
crash and run the debugger in a sensible way without clobbering each
other's access to stdio. It required adding some "tear down hooks" to
a custom `pdbpp.Pdb` type such that we release a child's lock on the
parent on debugger exit (in this case when either of the "continue" or
"quit" commands are issued to the debugger console).

There's some code left commented in anticipation of full support for
issue #130 where we'll need to actually capture and feed stdin to the
target (remote) actor which won't necessarily be running on the same
host.
2020-09-24 10:12:10 -04:00
Tyler Goodlet b06d4b023e Add support for "debug mode"
When enabled a crashed actor will connect to the parent with `pdb`
in post mortem mode.
2020-09-24 10:12:10 -04:00
Tyler Goodlet b11e91375c Initial attempt at multi-actor debugging
Allow entering and attaching to a `pdb` instance in a child process.
The current hackery is to have the child make an rpc to the parent and
ask it to hijack stdin, once complete the child enters a `pdb` blocking
method. The parent then relays all stdin input to the child thus
controlling the "remote" debugger.

A few things were added to accomplish this:
- tracking the mapping of subactors to their parent nurseries
- in the root actor, cancelling all nurseries under the root `trio` task
  on cancellation (i.e. `Actor.cancel()`)
- pass a "runtime vars" map down the actor tree for propagating global state
2020-09-24 10:12:10 -04:00
Tyler Goodlet 8c97f7bbb3 Create runtime variables 2020-09-24 10:12:10 -04:00
Tyler Goodlet ec5d443ee5 Always log actor errors 2020-08-13 11:55:22 -04:00
Tyler Goodlet 1ae0efb033 Make rpc_module_paths a list 2020-08-13 11:53:45 -04:00
Tyler Goodlet 8a995beb6a Docs fixes 2020-08-08 22:29:57 -04:00
Tyler Goodlet 292513b353 Module define default accept addr 2020-08-08 20:58:04 -04:00
Tyler Goodlet b3eba00c3a Appease the great mypy 2020-08-08 20:57:43 -04:00
Tyler Goodlet 42be410076 Handle mp accept_addr 2020-08-08 20:27:43 -04:00
Tyler Goodlet 8477d21499 Restructure actor runtime nursery scoping
In an effort to acquire more deterministic actor cancellation,
this adds a clearer and more resilient (whilst possibly a bit
slower) internal nursery structure with explicit semantics for
clarifying the task-scope shutdown sequence.

Namely, on cancellation, the explicit steps are now:
- cancel all currently running rpc tasks and wait
  for them to complete
- cancel the channel server and wait for it to complete
- cancel the msg loop for the channel with the immediate parent
- de-register with arbiter if possible
- wait on remaining connections to release
- exit process

To accomplish this add a new nursery called the "service nursery" which
spawns all rpc tasks **instead of using** the "root nursery". The root
is now used solely for async launching the msg loop for the primary
channel with the parent such that it is (nearly) the last thing torn
down on cancellation.

In the future it should also be possible to have `self.cancel()` return
a result to the parent once the runtime is sure that the rest of the
shutdown is atomic; this would allow for a true unbounded shield in
`Portal.cancel_actor()`. This will likely require that the error
handling blocks in `Actor._async_main()` are moved "inside" the root
nursery block such that the msg loop with the parent truly is the last
thing to terminate.
2020-08-08 14:55:41 -04:00
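
The resulting layout, boiled down to plain trio (a structural sketch only; the real runtime obviously does far more per task):

    import trio

    async def msg_loop_with_parent() -> None:
        await trio.sleep(0)  # placeholder

    async def channel_server() -> None:
        await trio.sleep(0)  # placeholder

    async def async_main() -> None:
        async with trio.open_nursery() as root_nursery:
            # the msg loop with the immediate parent lives in the root
            # nursery so it is (nearly) the last thing torn down
            root_nursery.start_soon(msg_loop_with_parent)
            async with trio.open_nursery() as service_nursery:
                # rpc tasks and the channel server live here and get
                # cancelled-and-awaited first on Actor.cancel()
                service_nursery.start_soon(channel_server)

    trio.run(async_main)
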
Tyler Goodlet 90c7fa6963 Allow shielding in `open_portal()` 2020-08-08 14:47:52 -04:00
Tyler Goodlet 532429aec9 Harden `trio` spawner process waiting
Always shield waiting for the process and always run
``trio.Process.__aexit__()`` on teardown. This enforces
that shutdown happens due to cancellation triggered inside
the sub-actor instead of the process being killed externally
by the parent.
2020-08-08 14:43:25 -04:00
Tyler Goodlet fe45d99f65 Allow opening a portal through an existing channel 2020-08-07 12:02:06 -04:00
Tyler Goodlet ae8488a578 Always shield de-register step with arbiter 2020-08-07 11:36:26 -04:00
Tyler Goodlet 09ae51900d Better clarify uid comment 2020-08-04 09:52:49 -04:00
Tyler Goodlet 4f92cfe74f Don't `.aclose` `trio` processes until the very end
Trio will kill subprocesses via `Process.__aexit__()` using a `finally:`
block (which, yes, will get triggered on cancellation) so we avoid that
until true process "tear down" since subactors do many things during
graceful shutdown (such as de-registering from the name discovery
system). Oddly this only seems to be an issue during cancellation of
infinite stream consumption.

Resolves #141
2020-08-03 18:57:00 -04:00
Tyler Goodlet ae9016c06a Log on KBI cancelled termination 2020-08-03 18:46:18 -04:00
Tyler Goodlet a24c6bfdd2 Correctly catch cancelled nursery case (purely for logging) 2020-08-03 18:44:50 -04:00
Tyler Goodlet 56b81f07e5 Return `Dict[Tuple, Tuple]` from `.get_registry()` 2020-08-03 18:42:23 -04:00
Tyler Goodlet fbd68d2d91 Allow for tuple keys with std `msgpack` 2020-08-03 18:41:21 -04:00
Tyler Goodlet 639299e6eb Expose a `.get_registry()` method on the arbiter 2020-08-03 15:40:41 -04:00
Guillermo Rodriguez 3e29fcf1ea
Docstring to the top!, and redundant spaces goodbye! 2020-07-29 15:39:38 -03:00
Tyler Goodlet 9a40291d4a Repair startup sequence around parent state transfer
In order to have reliable subactor startup we need the following
sequence to take place:
- connect to the parent actor, handshake and receive runtime state
- load exposed modules into memory
- start the channel server up fully using the provided bind address
- finally, start processing new messages from the parent

Add a bunch more comments to clarify all this.
2020-07-28 22:25:22 -04:00
Guillermo Rodriguez 0a5691e0a8
Removed arbiter_addr local, and bind_addr is now passed through channel, in early child actor init. 2020-07-28 11:55:11 -03:00
Guillermo Rodriguez ef053eb070
Added named arguments to child init, and now passing fewer of them. 2020-07-27 21:05:00 -03:00
Guillermo Rodriguez e5dbf14ec3
Only await params in trio mode 2020-07-27 15:20:55 -03:00
Guillermo Rodriguez 2a407be532
Now passing additional initialization parameters through channel early after handshake. 2020-07-27 14:55:37 -03:00
Tyler Goodlet 3c7ec72f8e Fix SIGINT test names 2020-07-26 23:37:44 -04:00
Tyler Goodlet dddbeb0e71 Run Windows on trio and mp backends
The new pure trio spawning backend uses `subprocess` internally which is
also supported on windows so let's run it in CI.
2020-07-25 13:41:48 -04:00
Tyler Goodlet 7c3928f0bf Oh mypy.. 2020-07-24 17:31:24 -04:00
Tyler Goodlet d3acb8d061 Wait on proc before killing stdio 2020-07-24 17:08:52 -04:00
Tyler Goodlet efde3a5773 Simplify the `_child.py` script
We don't really need stdin for anything but passing the entry point and
detaching it seemed to just cause errors on cancellation teardown.
2020-07-24 17:08:52 -04:00
Tyler Goodlet aa620fe61d Use `trio.Process.__aexit__()` and pass the actor uid
Using the context manager interface does some extra teardown beyond simply
calling `.wait()`. Pass the subactor's "uid" on the exec line for
debugging purposes when monitoring the process tree from the OS.
Hard code the child script module path to avoid a double import warning.
2020-07-24 17:08:52 -04:00
Tyler Goodlet 4516febe26 Make sure to wait trio processes on teardown 2020-07-24 17:08:52 -04:00
Tyler Goodlet 0b305fd78a Change spawn method name in `Actor.load_modules()` 2020-07-24 17:08:52 -04:00
Tyler Goodlet 0936bdc592 Add back subactor logging 2020-07-24 17:08:52 -04:00
Guillermo Rodriguez 56463a08df First attempt at removing trip & updating hazmat -> lowlevel 2020-07-24 17:08:52 -04:00
Tyler Goodlet 7c73775474 Force keyword only args in actor spawn methods 2020-07-24 17:06:43 -04:00
Tyler Goodlet 8fbdfd6a3a Add an obnoxious error message on internal failures 2020-07-24 17:06:23 -04:00
Tyler Goodlet 1706791313 Drop entrypoints from `Actor` 2020-07-24 17:04:22 -04:00
Tyler Goodlet 8e32199509 Get entry points reorg without asyncio compat
This is an edit to factor out changes needed for the `asyncio` in guest mode
integration (which currently isn't tested well) so that later more pertinent
changes (which are tested well) can be rebased off of this branch and
merged into mainline sooner. The *infect_asyncio* branch will need to be
rebased onto this branch as well before merge to mainline.
2020-07-24 17:02:03 -04:00
Tyler Goodlet 8054bc7c70 Support "infected asyncio" actors
This is an initial solution for #120.

Allow spawning `asyncio` based actors which run `trio` in guest
mode. This enables spawning `tractor` actors on top of the `asyncio`
event loop whilst still leveraging the SC focused internal actor
supervision machinery. Add a `tractor.to_syncio.run()` api to allow
spawning tasks on the `asyncio` loop from an embedded (remote) `trio`
task and return or stream results all the way back through the `tractor`
IPC system using a very similar api to portals.

One outstanding problem is getting SC around calls to
`asyncio.create_task()`. Currently a task that crashes isn't able to
easily relay the error to the embedded `trio` task without us fully
enforcing the portals based message protocol (which seems superfluous
given the error ref is in process). Further experiments using `anyio`
task groups may alleviate this.
2020-07-24 16:48:06 -04:00
Tyler Goodlet 30f8dd8be4 Pass a `Channel` to `LocalPortal` for compat purposes 2020-02-09 01:59:39 -05:00
Tyler Goodlet 596aca8097 Alias __mp_main__ at import time 2020-02-09 01:07:14 -05:00
Tyler Goodlet 00fc734580 Fix missing `_ctx` define when on Windows 2020-02-07 20:01:41 -05:00
Tyler Goodlet e671cb4f3b Fixup _spawn.py comments to incorporate trip 2020-01-31 12:05:15 -05:00
Tyler Goodlet 8264b7d136 Drop old module loading from abspath cruft 2020-01-31 12:04:46 -05:00
Tyler Goodlet d64508e1a6 Add more detailed docs around nursery logic
The logic in the `ActorNursery` block is critical to cancellation
semantics and in particular, understanding how supervisor strategies are
invoked. Stick in a bunch of explanatory comments to clear up these
details and also prepare to introduce more supervisor strats besides
the current one-cancels-all approach.
2020-01-31 09:50:25 -05:00
Tyler Goodlet 6348121d23 Do __main__ fixups like ``multiprocessing`` does
Instead of hackery trying to map modules manually from the filesystem
let Python do all the work by simply copying what ``multiprocessing``
does to "fixup the __main__ module" in spawned subprocesses. The new
private module ``_mp_fixup_main.py`` is simply cherry picked code from
``multiprocessing.spawn`` which does just that. We only need these
"fixups" when using a backend other then ``multiprocessing``; for
now just when using ``trio_run_in_process``.
2020-01-29 21:14:48 -05:00
Tyler Goodlet 2a4307975d Fix that thing where the first example in your docs is supposed to work
Thanks to @salotz for pointing out that the first example in the docs
was broken. Though it's somewhat embarrassing this might also explain
the problem in #79 and certain issues in #59...

The solution here is to import the target RPC module using its
unique basename and absolute filepath in the sub-actor that requires it.
Special handling for `__main__` and `__mp_main__` is needed since the
spawned subprocess will have no knowledge about these
parent-state-specific module variables. Solution: map the module's name to the
respective module file basename in the child process since the module
variables will of course have different values in children.
2020-01-29 12:16:14 -05:00
Tyler Goodlet 43cca122f5 Handle windows in `@tractor_test` as well 2020-01-26 23:44:47 -05:00
Tyler Goodlet b4cb7439a1 Drop useless fork error branch 2020-01-26 22:46:48 -05:00
Tyler Goodlet e57811a602 Fork isn't present on windows... 2020-01-26 22:35:42 -05:00
Tyler Goodlet ecced3d09a Allow choosing the spawn backend per test session
Add a `--spawn-backend` option which can be set to one of {'mp',
'trio_run_in_process'} which will either run the test suite using the
`multiprocessing` or `trio-run-in-process` backend respectively.
Currently trying to run both in the same session can result in hangs
seemingly due to a lack of cleanup of forkservers / resource trackers
from `multiprocessing` which cause broken pipe errors on occasion (no
idea on the details).

For `test_cancellation.py::test_nested_multierrors`, use less nesting
when mp is used since it breaks if we push it too hard with the
whole recursive subprocess spawning thing...
2020-01-26 21:36:08 -05:00
Tyler Goodlet 27c9760f96 Be explicit about the spawning backend default
Set `trio-run-in-process` as the default on *nix systems and
`multiprocessing`'s spawn method on Windows. Enable overriding the
default choice using `tractor._spawn.try_set_start_method()`. Allows
for easy runs of the test suite using a user chosen backend.
2020-01-26 21:13:29 -05:00
Tyler Goodlet bc259b7eab Use trip as default in all tests for now 2020-01-24 00:54:19 -05:00
Tyler Goodlet d9803ca906 Be explicit with the real name for trip 2020-01-24 00:47:01 -05:00
Tyler Goodlet 4837595e36 Fake out mypy again 2020-01-23 01:32:02 -05:00
Tyler Goodlet 4c5a60d06a Don't import trip on Windows 2020-01-23 01:23:26 -05:00
Tyler Goodlet ddbf55768f Try out trip as the default spawn_method on unix for now 2020-01-23 01:15:46 -05:00
Tyler Goodlet 4b0554b61f Type checker fixes 2020-01-21 10:28:32 -05:00
Tyler Goodlet 6c45416016 Drop ActorNursery.wait(); it's no longer necessary really 2020-01-21 10:27:53 -05:00
Tyler Goodlet c074aea030 Support TRIP for process launching
This took a ton of tinkering and a rework of the actor nursery tear down
logic. The main changes include:

- each subprocess is now spawned from inside a trio task
from one of two containing nurseries created in the body of
`tractor.open_nursery()`: one for `run_in_actor()` processes and one for
`start_actor()` "daemons". This is to address the need for
`trio_run_in_process.open_in_process()` opening a nursery which must
be closed from the same task that opened it. Using this same approach
for `multiprocessing` seems to work well. The nurseries are waited in
order (`run_in_actor()` actors then daemon actors) during tear down which allows
for avoiding the recursive re-entry of `ActorNursery.wait()` handled
prior.

- pull out all the nested functions / closures that were in
`ActorNursery.wait()` and move into the `_spawn` module such that
that process shutdown logic takes place in each containing task's
code path. This allows for vastly simplifying `.wait()` to just contain an
event trigger which initiates process waiting / result collection.
Likely `.wait()` should just be removed since it can no longer be used
to synchronously wait on the actor nursery.

- drop `ActorNursery.__aenter__()` / `.__atexit__()` and move this
"supervisor" tear down logic into the closing block of `open_nursery()`.
This not only makes the code more comprehensible, it also
makes our nursery implementation look more like the one in `trio`.

Resolves #93
2020-01-21 10:27:53 -05:00
Tyler Goodlet 91c3716968 Do module abspath loading in actor init 2020-01-21 10:27:53 -05:00
Tyler Goodlet afa640dcab More trip WIP stuff working.. kinda
Get a few more things working:
- fail reliably when remote module loading goes awry
- do a real hacky job of module loading using `sys.path` stuffsies
- we're still totally borked when trying to spin up and quickly cancel
a bunch of subactors...

It's a small move forward I guess.
2020-01-21 10:27:53 -05:00
Tyler Goodlet 1b7cdfe512 WIP trying out trio_run_in_process 2020-01-21 10:27:53 -05:00
Tyler Goodlet 698951c515 More mypy appeasement on 3.7 2020-01-15 21:06:13 -05:00
Tyler Goodlet e2c9477122 Allow overriding the root logger name
Handy if other dependent projects want to use the logging system but
also want to slap their own root "branding" onto the record prefix.
2019-12-20 16:37:17 -05:00
Tyler Goodlet 79c152fe38 Make latest mypy happy 2019-12-10 00:55:03 -05:00
Tyler Goodlet 14bfef0df7 Update types for log adapter 2019-12-09 22:10:15 -05:00
Tyler Goodlet cf73283586 Make info object a mapping type
Make the info object a `Mapping` to play nicer with static type
checking. Simplify the task or actor context method lookup using a dict.
2019-12-09 00:03:22 -05:00
Tyler Goodlet 52efbfc2cd Log task and actor names where possible
Prepend the actor and task names in each log emission. This makes
debugging much more sane since you can see which process and
running task the log message originates from!

Resolves #13
2019-12-01 23:26:25 -05:00
Tyler Goodlet d2a01e8b81 Drop use of `trio.Event.clear()`
Just spin up new events instead; because apparently they're
so cheap (rolls eyes).

Resolves #78
2019-11-23 11:29:23 -05:00
Tyler Goodlet f977d37cee Add nursery self-destruct logic on cancel failure
If a nursery fails to cancel (some sub-actors presumably) then hard kill
the whole process tree to avoid hangs during a catastrophic failure.
This logic may get factored out (and changed) as we introduce custom
supervisor strategies.
2019-11-22 17:11:48 -05:00
Tyler Goodlet 5e056bae71 Expose trio exceptions to `RemoteActorError` 2019-10-30 00:32:10 -04:00
Tyler Goodlet 95e8f3d306 Propagate `trio.MultiError`s up the actor tree
`trio.MultiError` isn't an `Exception` (derived instead from
`BaseException`) so we have to specially catch it in the task
invocation machinery and ship it upwards (like regular errors)
since nurseries running in sub-actors can raise them.
2019-10-28 00:47:06 -04:00
Tyler Goodlet da4796749f Continue hacking the forkserver in Python 3.8
They got all fancy and added shared memory segment tracking and then
had to "generalize" the tracker name...hooray

Fixes #81
2019-10-15 22:37:47 -04:00
Tyler Goodlet 7da95a806d Rename override module 2019-10-14 12:58:10 -04:00
Tyler Goodlet f885b02c73 Validate stream functions at decorate time 2019-03-29 19:10:32 -04:00
Tyler Goodlet 5c0ae47cf5 Fix type annotation 2019-03-26 08:03:12 -04:00
Tyler Goodlet e51f84af90 Require explicit marking of non async gen streaming funcs
Add `@tractor.stream` which must be used to denote non-async-generator
streaming functions which use the `tractor.Context` API to push values.
This enforces a more explicit denotation as well as allows enforcing the
declaration of the `ctx` argument in definitions.
2019-03-25 21:36:13 -04:00
Tyler Goodlet 4ee35038fb Move discovery functions to their own module 2019-03-24 11:37:11 -04:00
Tyler Goodlet 2aa6ffce60 Provide each task's cancel scope to every `Context`
This begins moving toward explicitly decorated "streaming functions"
instead of checking for a `ctx` arg in the signature.

- provide each context with its task's top level `trio.CancelScope`
  such that tasks can cancel themselves explicitly if needed by calling
  `Context.cancel_scope()`
- make `Actor.cancel_task()` a private method (`_cancel_task()`) and
  handle remote rpc calls specially such that the caller does not need
  to provide the `chan` argument; non-primitive types can't be passed on
  the wire and we don't want the client actor to require knowledge of
  the channel instance the request is associated with. This also ties into
  how we're tracking tasks right now (`Actor._rpc_tasks` is keyed by the
  call id, a UUID, *plus* the channel).
- make `_do_handshake` a private actor method
- use UUID version 4
2019-03-23 23:31:26 -04:00
Tyler Goodlet 4e078368fc Propagate `tractor.run()` logging level to subactors 2019-03-18 21:32:08 -04:00
Tyler Goodlet de8d69c58b Expose `Context` at top level 2019-03-15 19:40:34 -04:00
goodboy 29ffbfe6ca
Merge pull request #63 from chrizzFTD/update_tests_for_windows
Update tests for windows
2019-03-14 21:06:37 -04:00
Christian López Barrón b992dc19e3 moved assert statement for name on try_set_start_method after its autoset 2019-03-13 21:32:45 +11:00
Tyler Goodlet 63d067792c Rename `StreamQueue` to `MsgpackStream`
Prepares for other possible interchange formats plus it wasn't really
a queue, just a TCP stream wrapper + `msgpack` interchange.
2019-03-12 01:22:46 -04:00
Tyler Goodlet b70f4eafcb Flip tests to use `start_method` kwarg 2019-03-08 20:06:16 -05:00
Tyler Goodlet c3daf73112 Document the mp start method more explicitly 2019-03-08 20:01:42 -05:00
Tyler Goodlet dc5cc040e6 Try to support waiting on Windows processes
This pokes around a little in `trio` hazmat but it *should
work* as it piggybacks on the new cross-platform subprocess support.

Relates to #59
2019-03-06 21:24:23 -05:00
Tyler Goodlet 483ae42a46 Add a `spawn_method` dynamic fixture 2019-03-06 00:36:37 -05:00
Tyler Goodlet 7014a07986 Add "spawn" start method support
Add full support for using the "spawn" process starting method as per:
https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods

Add a  `spawn_method` argument to `tractor.run()` for specifying the
desired method explicitly. By default use the "fastest" method available.
On *nix systems this is the original "forkserver" method.

This should be the solution to getting windows support!

Resolves #60
2019-03-06 00:29:07 -05:00
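
Invocation at the time looked something like this (a sketch; the kwarg naming later settled on `start_method` in the test suite per the commits above):

    import tractor

    async def main() -> None:
        print('running in the root actor')

    if __name__ == '__main__':
        # explicitly pick the multiprocessing start method; by default the
        # "fastest" available one is chosen (forkserver on *nix)
        tractor.run(main, spawn_method='spawn')
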
Tyler Goodlet d75739e9c7 Factor process creation into a separate factory
Make a `_spawn` module for encapsulating all the `multiprocessing`
"spawn method" stuff and factor current forkserver steps into it.
2019-03-05 18:52:19 -05:00
Tyler Goodlet 78ddd33e3a Move to `trio.CancelScope` 2019-02-16 14:25:06 -05:00
Tyler Goodlet 02e0c0e1a4 `trio.ClosedResourceError` is deprecated 2019-02-16 14:05:24 -05:00
Tyler Goodlet fe1c4dbc4c mypy and docs fixups 2019-02-16 14:05:03 -05:00
Tyler Goodlet 616192d853 Don't use async gen functions for the stream API
As mentioned in prior commits there's currently a bug in Python that
makes async gens **not** task safe. Since this is the core cause of almost
all recent problems, instead implement our own async iterator derivative of
`trio.abc.ReceiveChannel` by wrapping a `trio._channel.MemoryReceiveChannel`.
This fits more natively with the memory channel API in ``trio`` and adds
potentially more flexibility for possible bidirectional inter-actor streaming
in the future.

Huge thanks to @oremanj and of course @njsmith for guidance on this one!
2019-02-15 21:59:42 -05:00
Tyler Goodlet b91d13cfea Use local actor var 2019-02-15 17:11:26 -05:00
Tyler Goodlet 61680b3729 Use a receive mem channel inside portals
For now stop `.aclose()`-ing all async gens on portal close since it can
cause hangs and other weird behaviour if another task operates on the
same instance.

See https://bugs.python.org/issue32526.
2019-02-15 16:27:18 -05:00
Tyler Goodlet f44ac4528a Use mem chan in actor core 2019-02-15 16:23:58 -05:00
Tyler Goodlet 3d0de25f93 Do proper `wrapt` arg extraction for type checking
Use an inner function / closure to properly process required arguments
at call time as is recommended in the `wrapt` docs. Do async gen and
arg introspection at decorate time and raise appropriate type errors.
2019-01-25 00:10:13 -05:00
Tyler Goodlet 1b405ab4fe s/tickers/topics 2019-01-23 22:35:59 -05:00
Tyler Goodlet 3b19e15306 Don't allow cancelling a cancel_task() task 2019-01-23 20:01:29 -05:00
Tyler Goodlet 855f959768 Don't log traceback on kb interrupt 2019-01-23 20:00:57 -05:00
Tyler Goodlet 9f41297298 Timeout on remote task cancellation
Turns out you get a bad situation if the target actor whose task you're
trying to cancel has already died (eg. from an external
`KeyboardInterrupt` or other error) and so we need to eventually bail on
the RPC request. Also don't bother closing the channel created in
`open_portal()` manually since the cancel scope should take care of all
that.
2019-01-23 19:20:13 -05:00
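
The bail-out amounts to bounding the cancel request with a timeout, roughly (a sketch with the actual RPC faked out):

    import trio

    async def _send_cancel_request() -> None:
        # stand-in for the RPC asking the remote actor to cancel a task
        await trio.sleep_forever()

    async def cancel_remote_task(timeout: float = 0.5) -> bool:
        # give up if the target actor already died and can never ACK
        with trio.move_on_after(timeout) as cs:
            await _send_cancel_request()
        return not cs.cancelled_caught
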
Tyler Goodlet 226312042a Fix type annots 2019-01-23 00:41:45 -05:00
Tyler Goodlet b6cc1e8c22 More pub decorator improvements
- when calling the async gen func provided by the user wrap it in
  `@async_generator.aclosing` to ensure correct teardown at cancel time
- expect the gen to yield a dict with topic keys and data values
- add a `packetizer` function argument to the api allowing a user
  to format the data to be published in whatever way desired
- support using the decorator without the parentheses (using default
  arguments)
- use a `wrapt` "adapter" to override the signature presented to the
  `_actor._invoke` inspection machinery
- handle the default case where `tasks` isn't provided; allow only one
  concurrent publisher task
- store task locks in an actor local variable
- add a comprehensive doc string
2019-01-21 09:21:12 -05:00
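
Put together, a publisher written against the decorator described above looks roughly like this (a sketch; the injected `get_topics` callable and the dict-of-topics yield shape follow the bullet list, but the exact signature is an assumption):

    import trio
    import tractor

    @tractor.msg.pub  # parentheses-free form; defaults to a single publisher task
    async def ticker_feed(get_topics):
        # yield dicts keyed by topic; only the currently subscribed topics
        # (as reported by the injected get_topics()) need to be produced
        while True:
            yield {topic: f'data for {topic}' for topic in get_topics()}
            await trio.sleep(0.1)
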
Tyler Goodlet 97f709cc14 Cancel remote streaming tasks on a local cancel
Use the new `Actor.cancel_task()` api to remotely cancel streaming
tasks spawned by a portal. This guarantees that if an actor is
cancelled all its (remote) portal spawned tasks will be as well.

On portal teardown only cancel all async
generator calls (though we should cancel all RPC requests in general
eventually) and don't close the channel since it may have been passed
in from some other context that wishes to keep it connected. In
`open_portal()` run the message loop shielded so that if the local
task is cancelled, messaging will continue until the internal scope
is cancelled at end of block.
2019-01-21 00:45:54 -05:00
Tyler Goodlet 03e00886da Add `Actor.cancel_task()`
Enable cancelling specific tasks from a peer actor such that when
an actor task or the actor itself is cancelled, remotely spawned tasks
can also be cancelled. In much the same way that you'd expect a node
(task) in the `trio` task tree to cancel any subtasks, actors should
be able to cancel any tasks they spawn in separate processes.

To enable this:
- track rpc tasks in a flat dict keyed by (chan, cid)
- store a `is_complete` event to enable waiting on specific
  tasks to complete
- allow for shielding the msg loop inside an internal cancel scope
  if requested by the caller; there was an issue with `open_portal()`
  where the channel would be torn down because the current task was
  cancelled but we still need messaging to continue until the portal
  block is exited
- throw an error if the arbiter tries to find itself for now
2019-01-21 00:38:07 -05:00
Tyler Goodlet 251ee177fa Make the `Context` a dataclass 2019-01-20 21:47:08 -05:00
Tyler Goodlet 76f7ae5cf4 Log about the loglevel 2019-01-16 17:09:30 -05:00
Tyler Goodlet c58a6ea80f Fix type annots 2019-01-16 16:50:30 -05:00
Tyler Goodlet fbb6af47f8 Add a pub-sub messaging decorator API
Add a draft pub-sub API `@tractor.msg.pub` which allows
for decorating an async generator which can stream topic-keyed
dictionaries for delivery to multiple calling / consuming tasks.
2019-01-16 12:19:01 -05:00
Tyler Goodlet 06c908f285 Wrap remote import-time errors just the same 2019-01-12 17:56:22 -05:00
Tyler Goodlet fffddf88dd Change parent type 2019-01-12 17:55:28 -05:00
Tyler Goodlet 7377598683 Properly respect `rpc_module_paths` in `run_in_actor()` 2019-01-12 17:55:08 -05:00
Tyler Goodlet be20e1488b Fix type annotations 2019-01-12 15:32:41 -05:00
Tyler Goodlet 41f2096e86 Adopt `Context` in the RPC core
Instead of chan/cid, whenever a remote function defines a `ctx` argument
name deliver a `Context` instance to the function. This allows remote
funcs to provide async generator like streaming replies (and maybe more
later).

Additionally,
- load actor modules *after* establishing a connection to the spawning
  parent to avoid crashing before the error can be reported upwards
- fix a bug to do with unpacking and raising local internal actor errors
  from received messages
2019-01-12 15:27:38 -05:00
Tyler Goodlet 87a6165430 Add a `Context` type for task/protocol RPC state
This is loosely based off the nanomsg concept found here:
https://nanomsg.github.io/nng/man/v1.1.0/nng_ctx.5
2019-01-12 14:31:17 -05:00
Tyler Goodlet a9932e6c01 Allow passing error type to `unpack_error()` 2019-01-12 13:26:02 -05:00
Tyler Goodlet 5fab61412c Propagate module import and func lookup errors
RPC module/function lookups should not cause the target actor to crash.
This change instead ships the error back to the calling actor allowing
for the remote actor to continue running depending on the caller's
error handling logic. Adds a new `ModuleNotExposed` error to accommodate.
2019-01-01 16:12:34 -05:00
Tyler Goodlet ef23055d12 Use proper typing syntax 2019-01-01 12:14:57 -05:00
Tyler Goodlet eb6e82f577 Close all portal created async gens on shutdown 2018-12-15 02:20:55 -05:00
Tyler Goodlet db85e13657 Use a fifo lock for IPC 2018-12-15 02:20:19 -05:00
Tyler Goodlet d492236f3a Handle broken channels more resiliently on teardown 2018-12-15 02:19:47 -05:00
Tyler Goodlet 32c7a06e6a Cancel remote async gens when `aclose()` is called 2018-12-10 23:13:25 -05:00
Tyler Goodlet 4dccb44c67 Add support for cancelling remote tasks via a msg 2018-12-10 23:12:46 -05:00
goodboy 26ab45636e
Merge pull request #48 from tgoodlet/loglevel_to_tractor_tests
Support `loglevel` fixture injection
2018-11-30 08:34:52 -05:00
Tyler Goodlet a588047ad4 Stop channel based async gen streams on exit
I'm not sure how this ever worked but when a "fake" async gen
(i.e. function with special `chan`, `cid` kwargs) is completed
we need to signal the end of the stream just like with normal
async gens. Also don't fail when trying to remove tasks that were
never tracked.

Fixes #46
2018-11-30 01:24:59 -05:00
Tyler Goodlet f81e802219 Support `loglevel` fixture injection
For `pytest`, support defining a `loglevel` fixture value which will be
passed into internals when using `@tractor_test`.
2018-11-30 01:11:08 -05:00
Tyler Goodlet 512a2f25a2 Expose `tractor_test` in the same way as `trio` 2018-11-26 11:26:04 -05:00
Tyler Goodlet 0879150399 Move `tractor_test` to new module 2018-11-26 11:20:53 -05:00
Tyler Goodlet a482681f9c Leverage `pytest.raises()` better; fix a bunch of docs 2018-11-22 11:43:04 -05:00
Tyler Goodlet 0a240187c6 Log the exception when unable to ship back rpc errors 2018-11-19 16:52:55 -05:00
Tyler Goodlet 82fcf025cc Fix: MultiError isn't an Exception... 2018-11-19 14:16:09 -05:00
Tyler Goodlet 1bb37dbddf Expose trio.MultiError publicly 2018-11-19 14:15:28 -05:00
Tyler Goodlet 9bb8a062eb mypy fixes 2018-11-19 08:47:42 -05:00
Tyler Goodlet 835d1fa07a Vastly improve error triggered cancellation
At the expense of a bit more complexity in `ActorNursery.wait()`
(which I commented the heck out of fwiw) this adds far superior and
correct cancellation semantics for when a nursery is cancelled due
to (remote) errors in subactors.

This includes:
- `wait()` will now raise a `trio.MultiError` if multiple subactors
  error with the same semantics as in `trio`.
- in `wait()` portals which are paired with `run_in_actor()`
  spawned subactors (versus `start_actor()`) are waited on separately
  and if the nursery **hasn't** been cancelled but there are errors
  those are raised immediately before waiting on `start_actor()`
  subactors which will block indefinitely if they haven't been
  explicitly cancelled.
- if `wait()` does raise when the nursery hasn't yet been cancelled
  it's expected that it will be called again depending on the actor
  supervision strategy (i.e. right now we operate with a one-cancels-all
  strategy, the same as `trio`, so `ActorNursery.__aexit__()` calls
  `cancel()` if any error is raised by `wait()`).

Oh and I added `is_main_process()` helper; can't remember why..
2018-11-19 08:44:19 -05:00
Tyler Goodlet e75b25dc21 Improve error propagation machinery
Use the new custom error types throughout the actor and portal
primitives and set a few new rules:
- internal errors are any error not raised by an rpc task and are
  **not** forwarded to portals but instead are raised directly in
  the msg loop.
- portals always re-raise a "main task" error for every call to
  ``Portal.result()``.
2018-11-19 04:05:07 -05:00
Tyler Goodlet 2f6609ab78 Add custom exceptions with msg (un)packing 2018-11-19 03:51:12 -05:00
Tyler Goodlet 71bb87aa3a Drop deprecated trio error 2018-11-09 01:53:15 -05:00
Tyler Goodlet 12fa5542b1 Oh, mypy... 2018-11-09 01:52:57 -05:00
Tyler Goodlet 2ce8e06619 Some minor doc/comment tweaks 2018-11-09 01:40:12 -05:00
Tyler Goodlet aa8238d5e0 Revert allowing multiple stream handlers; clutters test output 2018-11-09 01:35:51 -05:00
Tyler Goodlet 109b5971ed Don't overload `func` arg 2018-09-21 10:11:27 -04:00
Tyler Goodlet 2973d7f1de Await async funcs properly in `LocalPortal.run()` 2018-09-21 00:31:30 -04:00
Tyler Goodlet 65beb2d84e Top level actor must have a `main()` now 2018-09-14 16:34:13 -04:00
Tyler Goodlet 716a44b6b8 Better document `run_daemon()` 2018-09-14 16:33:45 -04:00
Tyler Goodlet 827a6c6014 Make `rpc_modules` a positional arg to `tractor.run_daemon()` 2018-09-10 22:31:23 -04:00
Tyler Goodlet d808ffd8f3 `Logger.warn()` is deprecated 2018-09-10 15:19:49 -04:00
Tyler Goodlet ee7959cb55 Fix same named actor race
When an actor has already been registered with the arbiter it should
exist in the registry and thus the wait event should have been removed.
Check that the registry indeed holds an event before clearing it.
2018-09-08 09:40:35 -04:00
Tyler Goodlet 6b8393a4d6 Add `tractor.run_daemon()` for running a main rpc daemon 2018-09-08 09:39:53 -04:00
Tyler Goodlet 0ca668453c Running without a main func is a type error 2018-09-05 18:13:23 -04:00
Tyler Goodlet 438a79707f Couple more type tweaks 2018-09-01 14:43:48 -04:00
Tyler Goodlet 086df43b59 Woot! mypy run is clean! 2018-08-31 17:16:24 -04:00
Tyler Goodlet 18c55e2b5f Type ignore `colorlog` 2018-08-26 13:12:59 -04:00
Tyler Goodlet 11cbf9ea55 Use proper `typing` annotations 2018-08-26 13:12:29 -04:00
Tyler Goodlet c3eee1f228 Move "treat_as_gen" detection into `_invoke()` 2018-08-22 11:51:22 -04:00
Tyler Goodlet b0ceb308ba Add type annotations to most functions
This is purely for documentation purposes for now as it should be
obvious a bunch of the signatures aren't using the correct "generics"
syntax (i.e. the use of `(str, int)` instead of `typing.Tuple[str, int]`)
in a bunch of places. We're also not using a type checker yet and besides,
`trio` doesn't really expose a lot of its internal types very well.

2018-08-22 11:50:45 -04:00
Tyler Goodlet 996ad891f4 py3.6 is missing this attr 2018-08-19 16:11:57 -04:00
Tyler Goodlet 328e5bd597 Import our `forkserver.main()` in server cmd
Something changed in 3.7 (likely to do with changes to the core
import system) that requires explicitly importing our version
of `forkserver.main()` in order to guarantee the server runs our
module code. Override `forkserver.ensure_running()`; specifically,
modify the python launch command.
2018-08-19 15:37:01 -04:00
Tyler Goodlet 3202462cd5 Attach remote internal errors to channels
This ensures that internal errors received from a remote actor are
indeed raised even in the `MainProcess` **before** comms tasks are
cancelled. Internal error in this case means any error packet received
on a channel that doesn't have a `cid` header. RPC errors (which **do**
have a `cid` header) are still forwarded to the consuming caller as usual.
2018-08-17 14:49:17 -04:00
Tyler Goodlet 901f99bbec Throw internal errors into the main coroutine
If an internal error is bubbled up from some sub-actor throw that error
into the `MainProcess` "main" async function / coro in order to trigger
nursery teardowns (i.e. cancellations) that need to be done.

I'll likely change this shortly back to where we run a "main task"
inside `actor._async_main()`...
2018-08-16 00:22:16 -04:00
Tyler Goodlet f8111e51cd Maybe wait for actor result(s) after proc join 2018-08-16 00:21:49 -04:00
Tyler Goodlet d4da80c558 Store remote errors on each portal 2018-08-16 00:21:00 -04:00
Tyler Goodlet 73e8aac36c Always allow and enable rpc prior to task start 2018-08-15 01:24:06 -04:00
Tyler Goodlet 09e3a94060 Cancel result waiter once proc terminates 2018-08-15 01:24:06 -04:00
Tyler Goodlet 3f0c644768 Add `tractor.wait_for_actor()` helper
Allows for waiting on another actor (by name) to register with the
arbiter. This makes synchronized actor spawning and consecutive task
coordination easier to accomplish from within sub-actors.

Resolves #31
2018-08-12 23:59:19 -04:00
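
Typical use from another actor (a sketch; assumes the helper is an async context manager yielding a `Portal` once the named actor registers):

    import tractor

    async def consumer() -> None:
        # block until an actor named 'data_feed' shows up in the arbiter's
        # registry, then talk to it through the yielded portal
        async with tractor.wait_for_actor('data_feed') as portal:
            print('found peer:', portal)
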
Tyler Goodlet 1bd5582d8a Register each actor using its unique ID tuple
This allows for registering more than one actor with the same "name"
when you have multiple actors fulfilling the same role. Eventually
we'll need support for looking up all actors registered under a given
"service name" (or whatever we decide to call it).

Also, a fix to the arbiter such that each new instance refers to a
separate `_registry` dict (found an issue with duplicate names during
testing).

Resolves #7
2018-08-07 14:03:01 -04:00
Tyler Goodlet 758fbc6790 Drop `Channel.aiter_recv()`
Internalize the implementation of this and expect client code to
iterate the `Channel` directly.

Resolves #16
2018-08-07 14:03:01 -04:00
Tyler Goodlet bd14cbe082 Port to trio's new resource error 2018-08-07 14:03:01 -04:00
Tyler Goodlet 163f747afb Drop legacy write_unsigned() 2018-08-07 14:02:42 -04:00
Tyler Goodlet e7c7391497 Drop needless tuple unpack 2018-08-04 18:15:24 -04:00
Tyler Goodlet 4fc7edf466 Change class names, use errno constant 2018-08-04 18:15:24 -04:00
Tyler Goodlet 4b875f0ade Be more explicit with naming and stdlib override 2018-08-04 18:15:24 -04:00
Tyler Goodlet 7017f68503 3.6.3 fix - missing older attr 2018-08-04 18:15:24 -04:00
Tyler Goodlet 50517c9488 Manage a `multiprocessing.forkserver` manually
Start a forkserver once in the main (parent-most) process
and pass ipc info (fds) to subprocesses manually such that embedded
calls to `multiprocessing.Process.start()` just work. Note that this
relies on our overridden version of the stdlib's
`multiprocessing.forkserver` module.

Resolves #6
2018-08-04 18:15:24 -04:00
Tyler Goodlet f46d5b2b62 Hackery to override the stdlib's forkserver
The stdlib insists on creating multiple forkservers and semaphore trackers
for each sub-sub-process launched. This isn't ideal since it costs each
`tractor` sub-actor 2 more processes than necessary and is
confusing when viewed as a process tree (eg. via `pstree`).

The majority of the change is simply avoiding the call to
`forkserver.ensure_running()` and `semaphore_tracker.ensure_running()`
in `ForkServer.connect_new_process()` and instead treating the user like
an adult and expecting those calls to be made *once* in the parent most
process (i.e. what `multiprocessing` calls the `MainProcess`).

Really a proper patch should be made against cpython which allows for
similar manual management of the server along with a mechanism to communicate
forkserver and semaphore tracker fd info to sub-processes such that
further calls to `Process.start()` work as expected.

Relates to #6
2018-08-04 18:15:24 -04:00
Tyler Goodlet d6d7fea708 Use plain func for __aiter__() 2018-08-04 18:15:24 -04:00
Tyler Goodlet f7706074a2 Drop needless if check 2018-08-04 18:10:31 -04:00
Tyler Goodlet 1da69b1396 Allow daemonizing top level actor; don't require main func 2018-08-03 09:41:18 -04:00
Tyler Goodlet bb13b79df5 Drop the "main" task via kwarg idea
Stop worrying about a "main task" in each actor and instead add an
additional `ActorNursery.run_in_actor()` method which wraps calls
to create an actor and run a lone RPC task inside it. Note this
adjusts the public API of `ActorNursery.start_actor()` to drop
its `main` kwarg.

The dirty deats of making this possible:
- each spawned RPC task is now tracked with a specific cancel scope such
  that when the actor is cancelled all ongoing responders are cancelled
  before any IPC/channel machinery is closed (turns out that spawning
  new actors from `outlive_main=True` actors was probably borked before
  finally getting this working).
- make each initial RPC response be a packet which describes the
  `functype` (eg. `{'functype': 'asyncfunction'}`) allowing for async
  calls/submissions by client actors (this was required to make
  `run_in_actor()` work - `Portal._submit()` is the new async method).
- hooray we can stop faking "main task" results for daemon actors
- add better handling/raising of internal errors caught in the bowels of
  the `Actor` itself.
- drop the rpc spawning nursery; just use the `Actor._root_nursery`
- only wait on `_no_more_peers` if there are existing peer channels that
  are actually still connected.
- an `ActorNursery.__aexit__()` now implicitly waits on `Portal.result()` on close
  for each `run_in_actor()` spawned actor.
- handle cancelling partial started actors which haven't yet connected
  back to the parent

Resolves #24
2018-08-02 15:24:28 -04:00
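
The new one-task-per-actor helper reads roughly like this (a sketch; the exact argument order of `run_in_actor()` has shifted across versions, early ones also took an actor name first):

    import tractor

    async def cube(x: int) -> int:
        return x ** 3

    async def main() -> None:
        async with tractor.open_nursery() as n:
            # spawn an actor, schedule a single RPC task in it, and have
            # the nursery wait on (and return) its result at block exit
            portal = await n.run_in_actor(cube, x=3)
            assert await portal.result() == 27

    if __name__ == '__main__':
        tractor.run(main)
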
Tyler Goodlet 9571f60a6d Expose channel in public api 2018-07-17 11:57:27 -04:00
Tyler Goodlet 64cbb922dc Reorg everything into private modules 2018-07-14 16:09:05 -04:00
Tyler Goodlet 1f85f71534 Use `async_generator`'s `aclosing()` helper
Take @njsmith's advice and properly close actor invoked async generators
using `async_generator.aclosing()` instead of hacking it (as previous)
with a shielded cancel scope.
2018-07-13 22:18:08 -04:00
Tyler Goodlet 2b7bbf32a1 One more super subtle cancellation fix
See python-trio/trio#455 for the deats...
2018-07-13 22:17:33 -04:00
Tyler Goodlet d9aa6119e1 Set cancelled state in cancel method 2018-07-11 22:24:14 -04:00
Tyler Goodlet 1ade5c5fbb Add one-cancels-all strategy to actor nursery 2018-07-11 22:24:06 -04:00
Tyler Goodlet 25852794a8 Move chan connect helper to ipc mod 2018-07-11 22:20:13 -04:00
Tyler Goodlet 209a6a2096 Add a separate cancel scope for the main task
Cancellation requires that each actor cancel its spawned subactors
before cancelling its own root (nursery's) cancel scope to avoid breaking
channel connections before kill commands (`Actor.cancel()`) have been sent
off to peers. To solve this, ensure each main task is cancelled to
completion first (which will guarantee that all actor nurseries have
completed their cancellation steps) before cancelling the actor's "core"
tasks under the "root" scope.
2018-07-11 22:20:13 -04:00
Tyler Goodlet 49573c9a03 More fixes to do cancellation correctly
Here is a bunch of code tightening to make sure cancellation works even
if recently spawned actors haven't fully started up and the parent is
cancelled.

The fixes include:
- passing the arbiter socket address to each actor
- ensure all spawned actors respect the spawner's log level
- handle process versus final `portal.result()` teardown in multiple
  tasks such that if a proc dies before shipping a result we don't wait
- more detailed debug logging in teardown code paths
- don't store peer connected events in the same `dict` as the peer channels
- if necessary fake main task results on peer channel disconnect
- warn when a `trio.Cancelled` is what causes a nursery to bail
  otherwise error
- store the subactor portal in the nursery for teardown purposes
- add dedicated `Portal.cancel_actor()` which acts as a "hard cancel"
  and never blocks (indefinitely)
- add `Arbiter.unregister_actor()` it's more explicit what's being
  requested
2018-07-11 22:20:13 -04:00
Tyler Goodlet 77e34049b8 More fixes after unit testing
- Allow passing in a program-wide `loglevel`
- Add detailed debug logging particularly to do with channel msg processing
  and connection handling
- Don't daemonize subprocesses for now as it prevents use of
  sub-sub-actors (need to solve #6 first)
- Add a `Portal.close()` which just tells the remote actor to tear down
  the channel (for now)
- Add a message to signal the remote `StopAsyncIteration` from an async
  gen such that the client side terminates properly as well
- Make `Actor.cancel()` cancel the channel server first
- Actors *must* complete the arbiter registration steps before moving
  on with their main tasks and rpc handling
- When delivering rpc responses (using the local per caller queue) use
  the blocking interface (`trio.Queue.put()`) to get backpressure
- Properly detect `partial`-wrapped async generators in `_invoke`
2018-07-11 22:20:13 -04:00
Tyler Goodlet 36fd75e217 Fix some bugs to get tests working
Fix quite a few little bugs:
- async gen func detection in `_invoke()`
- always cancel channel server on main task exit
- wait for remaining channel peers after unsub from arbiter
- return result from main task(s) all the way up to `tractor.run()`

Also add a `Portal.result()` for getting the final result(s) from the
actor's main task and fix up a bunch of docs.
2018-07-11 22:20:13 -04:00
Tyler Goodlet 6163d4e9ea Don't create formatter if no log level set 2018-07-10 17:28:29 -04:00
Tyler Goodlet c85752abd9 Steal piker's logging setup 2018-07-05 19:51:32 -04:00
Tyler Goodlet 8df706e535 Rename package dir to tractor 2018-07-05 19:40:36 -04:00