forked from goodboy/tractor
Commit Graph

1842 Commits (21f633a9005c9ae3dcda6c98dd3893dde65af3fa)

Author SHA1 Message Date
Tyler Goodlet d9aa6119e1 Set cancelled state in cancel method 2018-07-11 22:24:14 -04:00
Tyler Goodlet 1ade5c5fbb Add one-cancels-all strategy to actor nursery 2018-07-11 22:24:06 -04:00
Tyler Goodlet 25852794a8 Move chan connect helper to ipc mod 2018-07-11 22:20:13 -04:00
Tyler Goodlet bb9309bdf5 Add a cancellation strategy test 2018-07-11 22:20:13 -04:00
Tyler Goodlet bb293905b9 Verify expected non-result under cancellation 2018-07-11 22:20:13 -04:00
Tyler Goodlet 209a6a2096 Add a separate cancel scope for the main task
Cancellation requires that each actor cancel its spawned subactors
before cancelling its own root (nursery's) cancel scope to avoid breaking
channel connections before kill commands (`Actor.cancel()`) have been sent
off to peers. To solve this, ensure each main task is cancelled to
completion first (which will guarantee that all actor nurseries have
completed their cancellation steps) before cancelling the actor's "core"
tasks under the "root" scope.
2018-07-11 22:20:13 -04:00
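A minimal trio-only sketch of the cancellation ordering described above (task and variable names invented; this is not tractor's actual code): the main task runs under its own cancel scope and is fully unwound before the "core" tasks under the root scope are torn down.

```python
import trio


async def core_tasks(task_status=trio.TASK_STATUS_IGNORED):
    # stand-in for the actor's "core" tasks (channel server, msg loop)
    with trio.CancelScope() as scope:
        task_status.started(scope)
        await trio.sleep_forever()


async def main_task(scope: trio.CancelScope, done: trio.Event):
    # stand-in for the user's main task; in tractor this is where actor
    # nurseries would finish sending `Actor.cancel()` to peers before unwinding
    try:
        with scope:
            await trio.sleep_forever()
    finally:
        done.set()


async def main():
    main_scope = trio.CancelScope()
    main_done = trio.Event()
    async with trio.open_nursery() as nursery:
        core_scope = await nursery.start(core_tasks)
        nursery.start_soon(main_task, main_scope, main_done)
        await trio.sleep(0.1)

        # cancel the main task first and wait for it to fully unwind ...
        main_scope.cancel()
        await main_done.wait()
        # ... only then cancel the "core" tasks (and their channels)
        core_scope.cancel()


trio.run(main)
```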
Tyler Goodlet 1854471992 Add tests which verify the readme is correct
- steal from `trio` and add a `tractor_test` decorator
- use a random arbiter port to avoid conflicts with locally running
  systems
- add all the (obviously) hilarious readme tests
- add a complex cancellation test which works with
  `trio.move_on_after()`
2018-07-11 22:20:13 -04:00
Tyler Goodlet 49573c9a03 More fixes to do cancellation correctly
Here is a bunch of code tightening to make sure cancellation works even
if recently spawned actors haven't fully started up and the parent is
cancelled.

The fixes include:
- passing the arbiter socket address to each actor
- ensure all spawned actors respect the spawner's log level
- handle process versus final `portal.result()` teardown in multiple
  tasks such that if a proc dies before shipping a result we don't wait
- more detailed debug logging in teardown code paths
- don't store peer connected events in the same `dict` as the peer channels
- if necessary fake main task results on peer channel disconnect
- warn, rather than error, when a `trio.Cancelled` is what causes a
  nursery to bail
- store the subactor portal in the nursery for teardown purposes
- add dedicated `Portal.cancel_actor()` which acts as a "hard cancel"
  and never blocks (indefinitely)
- add `Arbiter.unregister_actor()`; it's more explicit about what's being
  requested
2018-07-11 22:20:13 -04:00
Tyler Goodlet d94be22ef2 Add a "show me the code" test from the readme 2018-07-11 22:20:13 -04:00
Tyler Goodlet 77e34049b8 More fixes after unit testing
- Allow passing in a program-wide `loglevel`
- Add detailed debug logging particularly to do with channel msg processing
  and connection handling
- Don't daemonize subprocesses for now as it prevents use of
  sub-sub-actors (need to solve #6 first)
- Add a `Portal.close()` which just tells the remote actor to tear down
  the channel (for now)
- Add a message to signal the remote `StopAsyncIteration` from an async
  gen such that the client side terminates properly as well
- Make `Actor.cancel()` cancel the channel server first
- Actors *must* complete the arbiter registration steps before moving
  on with their main tasks and rpc handling
- When delivering rpc responses (using the local per-caller queue) use
  the blocking interface (`trio.Queue.put()`) to get backpressure
- Properly detect `partial`-wrapped async generators in `_invoke`
2018-07-11 22:20:13 -04:00
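For the `StopAsyncIteration`-signalling item above, a toy sketch over an in-memory channel (helper names and packet shapes invented, not the real wire format): the producing side sends an explicit 'stop' packet once the async gen is exhausted so the consuming side can end its own iteration cleanly.

```python
import trio


async def counter(n):
    # stand-in for a remote async generator
    for i in range(n):
        yield i


async def remote_side(send_chan, agen):
    async with send_chan:
        async for value in agen:
            await send_chan.send({'yield': value})
        # the async gen is exhausted: tell the other side explicitly
        await send_chan.send({'stop': True})


async def local_side(recv_chan):
    async with recv_chan:
        async for packet in recv_chan:
            if 'stop' in packet:
                return  # end the local iteration cleanly
            print('got', packet['yield'])


async def main():
    send_chan, recv_chan = trio.open_memory_channel(0)
    async with trio.open_nursery() as nursery:
        nursery.start_soon(remote_side, send_chan, counter(3))
        nursery.start_soon(local_side, recv_chan)


trio.run(main)
```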
Tyler Goodlet 10417303aa Get tests working again
Remove all the `piker` stuff and add some further checks including:
- main task result is returned correctly
- remote errors are raised locally
- remote async generator yields values locally
2018-07-11 22:20:13 -04:00
Tyler Goodlet 36fd75e217 Fix some bugs to get tests working
Fix quite a few little bugs:
- async gen func detection in `_invoke()`
- always cancel channel server on main task exit
- wait for remaining channel peers after unsub from arbiter
- return result from main task(s) all the way up to `tractor.run()`

Also add a `Portal.result()` for getting the final result(s) from the
actor's main task and fix up a bunch of docs.
2018-07-11 22:20:13 -04:00
Tyler Goodlet 2cc03965d8 Add trio plugin for testing 2018-07-11 22:20:13 -04:00
Tyler Goodlet 6163d4e9ea Don't create formatter if no log level set 2018-07-10 17:28:29 -04:00
Tyler Goodlet 9f5555ec21 Add trio framework classifier 2018-07-10 17:27:47 -04:00
Tyler Goodlet 0598c6ad58 Add setup script 2018-07-05 20:57:23 -04:00
Tyler Goodlet d3f01c29bf Add test deps 2018-07-05 20:55:38 -04:00
Tyler Goodlet c85752abd9 Steal piker's logging setup 2018-07-05 19:51:32 -04:00
Tyler Goodlet a2980d88c5 Fix import, but tests don't all work yet 2018-07-05 19:49:21 -04:00
Tyler Goodlet 8df706e535 Rename package dir to tractor 2018-07-05 19:40:36 -04:00
Tyler Goodlet f6080522f9 `tractor.run()` is required for testing now 2018-07-05 16:21:55 -04:00
Tyler Goodlet ae9ab81ff3 Don't bother unsetting the squeue; let errors propagate up 2018-07-05 16:21:55 -04:00
Tyler Goodlet b1ad909c54 Only cancel channel spawned rpc tasks when explicitly notified 2018-07-05 16:21:55 -04:00
Tyler Goodlet 56d3f6cffb Add a working arbiter registry system
Every actor now registers (and unregisters) with the arbiter at
startup/teardown. For now the registry is stored in a plain `dict` in
the arbiter's memory. This makes it possible to easily coordinate actors
started as plain Python processes or via `multiprocessing`.

A whole smörgåsbord of changes was required to accomplish this:
- factor handshake steps into a func
- track *every* channel connected to an actor including multiples to the
  same remote peer (may want to optimize this later)
- handle `trio.ClosedStreamError` gracefully in the message loop
- add an `open_portal` asynccontextmanager which handles channel
  creation, handshaking, and spawning a bg task for msg processing
- add a `start_actor()` for starting in-process actors directly
- add working `get_arbiter()` and `find_actor()` public routines
- `_main` now tries an anonymous channel connect to the stated
  arbiter sockaddr and uses that to determine whether to crown itself
  the arbiter
2018-07-05 16:21:55 -04:00
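A rough sketch of such an in-memory registry (a plain `dict` keyed by actor name; method names assumed from the surrounding commits, and the channel layer over which the real `Arbiter` is reached is elided):

```python
from typing import Dict, Optional, Tuple

SockAddr = Tuple[str, int]


class Arbiter:
    """Keep a plain in-memory registry of running actors."""

    def __init__(self) -> None:
        self._registry: Dict[str, SockAddr] = {}

    def register_actor(self, name: str, sockaddr: SockAddr) -> None:
        # called by every actor at startup
        self._registry[name] = sockaddr

    def unregister_actor(self, name: str) -> None:
        # called by every actor at teardown
        self._registry.pop(name, None)

    def find_actor(self, name: str) -> Optional[SockAddr]:
        return self._registry.get(name)


arbiter = Arbiter()
arbiter.register_actor('streamer', ('127.0.0.1', 61616))
assert arbiter.find_actor('streamer') == ('127.0.0.1', 61616)
arbiter.unregister_actor('streamer')
assert arbiter.find_actor('streamer') is None
```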
Tyler Goodlet bf08310224 Add StreamQueue.connected() 2018-07-05 16:21:55 -04:00
Tyler Goodlet 82f22b76e5 Arbiter now supports non-empty statespace 2018-07-05 16:21:55 -04:00
Tyler Goodlet fa6f8185b6 Handle kb interrupt gracefully in sub-actors
Fail gracefully (by "aborting") the same way `trio` does. This avoids
ugly sub-proc tracebacks thrown at the console. Unset the local actor
when `tractor._main` completes. Cancel all tasks for a peer channel on
disconnect.
2018-07-05 16:21:55 -04:00
Tyler Goodlet 0aa49dcbdf Support re-entrant calls to `get_arbiter()`
It gets called more than once when using `tractor.run()` and
then `find_actor()`. Also, allow passing in the log level to each
new spawned subactor.
2018-07-05 16:21:55 -04:00
Tyler Goodlet 597546cf7b Drop console logging - messes with other tests 2018-07-05 16:21:55 -04:00
Tyler Goodlet 97865a192a Add an actor spawning test
Test that the actor nursery API and ``tractor.run`` entrypoint work when
the sub-actor's main task is all that is run (i.e. no rpc requests).
2018-07-05 16:21:55 -04:00
Tyler Goodlet e9422fa001 Fix actor nursery __exit__ handling
When an error is raised inside a nursery block (in the local actor)
cancel all spawned children and ensure the error is properly
unsuppressed.

Also,
- change `invoke_cmd` to `send_cmd` and expect the caller to use
  `result_from_q` (avoids implicit blocking for responses that might
  never arrive)
- `nursery.start()` the channel server block such that we wait for the
  underlying listener to spawn before making outbound connections
- cancel the channel server when an actor's main task completes
  (given that `outlive_main == False`)
- raise subactor errors directly in the local actor's msg loop
- enforce that `treat_as_gen` async functions respond with a caller id
  (`cid`) in each yield packet
2018-07-05 16:21:55 -04:00
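In plain trio terms, the error handling described above looks roughly like this toy example (not tractor's code): an error raised in the nursery block cancels every spawned child and is re-raised rather than suppressed.

```python
import trio


async def child(name: str):
    try:
        await trio.sleep_forever()
    except trio.Cancelled:
        print(f'{name} cancelled')
        raise


async def main():
    async with trio.open_nursery() as nursery:
        nursery.start_soon(child, 'child-1')
        nursery.start_soon(child, 'child-2')
        await trio.sleep(0.1)
        # an error in the block body cancels all spawned children and is
        # re-raised (not suppressed) once they have unwound
        raise RuntimeError('boom')


try:
    trio.run(main)
except BaseException as err:
    print('propagated:', repr(err))
```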
Tyler Goodlet 2c637db5b7 Add reliable subactor lifetime control 2018-07-05 16:21:55 -04:00
Tyler Goodlet 5c70b4824f Add remote actor error handling and parent re-raising
Command requests are sent out and responses are handled in a "message
loop" where each command is associated with a "caller id" and multiple
cmds and results are multiplexed on the same inter-actor channel. When
a cmd result arrives it is pushed into a local queue and delivered to the
appropriate calling actor's task. Errors from a remote actor are always shipped
in an "error" packet back to their spawning-parent actor such that any error
in a subactor is always raised directly in the parent. Based on the
first response to a cmd (either a 'return' or 'yield' packet) the caller
side portal will retrieve values by wrapping the local response queue in
either of an async function or generator as appropriate.
2018-07-05 16:21:55 -04:00
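A bare-bones synchronous sketch of the cid-based multiplexing described above (function names and packet shapes invented): each outbound cmd is tagged with a caller id, and the message loop routes every response packet into a per-cid queue owned by the calling task. A real implementation would use async queues consumed by the caller.

```python
import uuid
from collections import defaultdict
from typing import Any, Dict, List

# one response "queue" per outstanding caller id
_response_queues: Dict[str, List[dict]] = defaultdict(list)


def send_cmd(ns: str, func: str, kwargs: dict) -> str:
    """Tag the cmd with a fresh caller id before shipping it over the channel."""
    cid = str(uuid.uuid4())
    packet = {'cmd': (ns, func, kwargs, cid)}  # channel.send(packet) would go here
    return cid


def deliver(packet: Dict[str, Any]) -> None:
    """Message-loop side: route a response to the queue for its caller id."""
    if 'error' in packet:
        # error packets from a subactor are re-raised in the parent
        raise RuntimeError(packet['error'])
    _response_queues[packet['cid']].append(packet)


cid = send_cmd('builtins', 'print', {'args': ('hello',)})
deliver({'cid': cid, 'return': None})
assert _response_queues[cid] == [{'cid': cid, 'return': None}]
```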
Tyler Goodlet 4eb014aedc Use trace level for packet contents 2018-07-05 16:21:55 -04:00
Tyler Goodlet bed7e240de Add uid,event attrs to `Channel` 2018-07-05 16:21:55 -04:00
Tyler Goodlet 2e9cbec93c Add a basic `tractor.run()` test 2018-07-05 16:21:55 -04:00
Tyler Goodlet f36bd0f188 Uhhh make everything better 2018-07-05 16:21:55 -04:00
Tyler Goodlet 03c57ceece Add an initial `tractor` price streaming test 2018-07-05 16:21:55 -04:00
Tyler Goodlet 88e176ceff Add a very rough, minimal actor model system
I'm calling it `tractor` (as in trio-actor) for now.
Lots of work to do still as per the many comments!

Relates to #27
2018-07-05 16:21:55 -04:00
Tyler Goodlet 01119545fc IPC primitives improvements
- Rename the `Client` to `Channel`
- Add better `__repr__()`
- use laddr, raddr instead of sockaddr, peer
- don't allow re-entrant `Channel.connect()` calls
- Make `Channel` an async iterable
2018-07-05 16:21:55 -04:00
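A bare-bones sketch of the `Channel` surface described above (invented, not tractor's actual class): laddr/raddr attributes, a useful `__repr__()`, and async-iterable receipt of raw data.

```python
import trio


class Channel:
    """Thin wrapper around a stream exposing laddr/raddr and async iteration."""

    def __init__(self, stream: trio.SocketStream):
        self._stream = stream
        self.laddr = stream.socket.getsockname()
        self.raddr = stream.socket.getpeername()

    def __repr__(self) -> str:
        return f'<Channel {self.laddr} -> {self.raddr}>'

    async def recv(self) -> bytes:
        data = await self._stream.receive_some(4096)
        if not data:
            raise StopAsyncIteration
        return data

    def __aiter__(self) -> 'Channel':
        return self

    async def __anext__(self) -> bytes:
        return await self.recv()
```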
Tyler Goodlet 968c385c35 Move ipc types into separate module 2018-07-05 16:21:55 -04:00
goodboy 183c14ba79 Initial commit 2018-07-05 16:01:15 -04:00