Commit Graph

728 Commits (2f7de85ca714654ebee0b5ee2f3e35171f036e40)

Author SHA1 Message Date
Tyler Goodlet 2f7de85ca7 Log error 2021-07-05 09:21:37 -04:00
Tyler Goodlet d55bed0b50 Support asyncio actors with the trio spawner backend 2021-07-05 09:21:37 -04:00
Tyler Goodlet 37693f65f1 Revert removal of `infect_asyncio` in nursery start methods 2021-07-05 09:21:37 -04:00
Tyler Goodlet 50ceceb1d6 Attempt to make mypy happy.. 2021-07-05 09:21:37 -04:00
Tyler Goodlet 53e04a01a9 Add an obnoxious error message on internal failures 2021-07-05 09:21:37 -04:00
Tyler Goodlet c3484f83f0 Wow, fix all the broken async func invoking code..
Clearly this wasn't developed against a task that spawned just an async
func in `asyncio`.. Fix all that and remove a bunch of unnecessary func
layers. Add provisional support for the target receiving the `to_trio`
and `from_trio` channels and for the `@tractor.stream` marker.
2021-07-05 09:21:37 -04:00
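
The commit above mentions the asyncio-side target receiving the `to_trio` and `from_trio` channels. Below is a minimal sketch of that calling pattern; the channel types and names are assumptions for illustration (not tractor's exact internals), and it presumes trio is running as a guest on the same asyncio loop as the target (the "infected asyncio" scenario).

```python
import asyncio
import trio

async def aio_echo_target(
    to_trio: trio.MemorySendChannel,
    from_trio: asyncio.Queue,
) -> None:
    # Hypothetical asyncio-side target func: values pushed by the trio
    # side arrive on `from_trio`; results go back over `to_trio`.
    while True:
        msg = await from_trio.get()
        if msg is None:  # illustrative shutdown sentinel
            break
        # only safe because trio is assumed to be running as a guest
        # on this same event loop / thread
        to_trio.send_nowait(msg)
```
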
Tyler Goodlet f32d3e1464 Drop entrypoints from `Actor` 2021-07-05 09:21:37 -04:00
Tyler Goodlet e77f5dcdfa Move asyncio guest mode entrypoint to `to_asyncio`
The function is useful if you want to run the "main process" under
`asyncio`. Until `trio` core wraps this better we'll keep our own copy
in the interim (there's a new "inside-out-guest" mode almost on
mainline so hang tight).
2021-07-05 09:21:37 -04:00
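
Until `trio` ships a first-class wrapper, the entrypoint described above amounts to starting trio as a guest on an asyncio loop. A self-contained sketch using `trio.lowlevel.start_guest_run` (not tractor's actual `to_asyncio` module) looks roughly like this:

```python
import asyncio
import trio

async def trio_main() -> str:
    # the "main program" we want to run under trio, hosted by asyncio
    await trio.sleep(0.1)
    return 'trio ran as a guest'

async def asyncio_main() -> str:
    loop = asyncio.get_running_loop()
    done_fut: asyncio.Future = loop.create_future()

    def done_callback(trio_outcome) -> None:
        # called once the trio run finishes; hand its outcome back
        done_fut.set_result(trio_outcome)

    trio.lowlevel.start_guest_run(
        trio_main,
        run_sync_soon_threadsafe=loop.call_soon_threadsafe,
        done_callback=done_callback,
    )
    # .unwrap() returns the value or re-raises whatever trio_main raised
    return (await done_fut).unwrap()

if __name__ == '__main__':
    print(asyncio.run(asyncio_main()))
```
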
Tyler Goodlet 4df633f30e Propagate any spawned `asyncio` task error upwards
This should mostly maintain top level SC principles for any task spawned
using `tractor.to_asyncio.run()`. When the `asyncio` task completes make
sure to cancel the pertaining `trio` cancel scope and raise any error
that may have resulted.

Resolves #120
2021-07-05 09:21:37 -04:00
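
A simplified sketch of the propagation idea follows, assuming trio runs as a guest on the current asyncio loop so both frameworks share one thread; this is not tractor's actual `to_asyncio.run()` implementation.

```python
import asyncio
import outcome
import trio

async def run_aio_task(coro_fn, *args):
    # Sketch only: spawn an asyncio task and, when it completes, cancel
    # the trio cancel scope waiting on it and re-raise any error so it
    # propagates up the trio task tree (preserving SC semantics).
    result = None

    with trio.CancelScope() as cs:
        task = asyncio.ensure_future(coro_fn(*args))

        def on_done(t: asyncio.Task) -> None:
            nonlocal result
            # capture either the return value or the raised exception
            result = outcome.capture(t.result)
            # wake the trio side by cancelling its waiting scope
            cs.cancel()

        task.add_done_callback(on_done)
        await trio.sleep_forever()

    # reached only after `on_done` ran; re-raises the asyncio task's
    # error here in the caller's trio context
    return result.unwrap()
```
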
Tyler Goodlet b733443739 Add a @pub kwarg to allow specifying a "startup response message" 2021-07-05 09:21:37 -04:00
Tyler Goodlet 57de0d6b7b Change trace to transport level 2021-07-05 09:16:20 -04:00
Tyler Goodlet 2dd7c064d3 Flip "trace" level to "transport" level logging 2021-07-05 09:15:16 -04:00
Tyler Goodlet 72a40e82cc Add fast fail test using the context api 2021-07-05 09:15:16 -04:00
Tyler Goodlet f2a1064b17 Adjust debug tests to accommodate no more root clobbering
We may get multiple re-entries to the debugger by the `bp_forever` sub-actor
now since the root will incrementally try to cancel it only when the tty
lock is not held.
2021-07-05 09:13:12 -04:00
Tyler Goodlet 20b91e2653 Go back to only logging tbs on no debugger 2021-07-05 08:46:53 -04:00
Tyler Goodlet 29cd0cfc63 Comment hard-kill-sidestep for now since nursery version covers it? 2021-07-05 08:46:53 -04:00
Tyler Goodlet 668d785f74 Go back to only logging crashes if no pdb gets engaged 2021-07-05 08:46:53 -04:00
Tyler Goodlet 4800dfe12e Solve the root-cancels-child-in-tty-lock race
Finally this makes a cancelled root actor nursery not clobber child
tasks which request and lock the root's tty for the debugger repl.

Using an edge-triggered event which is set after all fifo-lock-queued
tasks are complete, we can be sure that no lingering child tasks are
going to get interrupted during pdb use and tty lock acquisition.
Further, even if new tasks do queue up to get the lock, the root will
incrementally send cancel msgs to each sub-actor only once the tty is
not locked by a (set of) child request task(s). Add shielding around all
the critical sections where the child attempts to allocate the lock from
the root such that it won't be disrupted by cancel messages from the
root after the lock-acquire transaction has started.
2021-07-05 08:46:53 -04:00
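
The shielding half of that fix boils down to wrapping the child's lock-acquire critical section in `trio.CancelScope(shield=True)`. The sketch below uses illustrative names (`_debug_lock`, `_no_remote_has_tty`, `run_pdb_repl`), not tractor's actual `_debug` module internals.

```python
import trio

_debug_lock = trio.Lock()          # stand-in for the root's tty lock
_no_remote_has_tty = trio.Event()  # edge-triggered "all lockers done" event

async def run_pdb_repl() -> None:
    ...  # hypothetical: the actual pdb/tty work

async def acquire_tty_then_debug() -> None:
    # Shield the whole acquire-and-use critical section so a cancel
    # request arriving from the root can't interrupt an in-progress
    # pdb session or a half-finished lock acquisition.
    with trio.CancelScope(shield=True):
        async with _debug_lock:
            await run_pdb_repl()

    # signal the root only once no task holds or is queued for the lock
    stats = _debug_lock.statistics()
    if not stats.locked and stats.tasks_waiting == 0:
        _no_remote_has_tty.set()
```
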
Tyler Goodlet b83bf062e7 Distinguish between a local pdb unlock and the tty unlock in root 2021-07-05 08:46:53 -04:00
Tyler Goodlet 988574a456 Fix hard kill in debug mode; only do it when debug lock is empty 2021-07-05 08:46:53 -04:00
Tyler Goodlet 7ecad76adf Move some info logs to runtime level 2021-07-05 08:46:53 -04:00
Tyler Goodlet f75fe17569 Add PDB level and make runtime below info but above debug 2021-07-05 08:46:53 -04:00
Tyler Goodlet 13077cbdf2 Move debugger wait inside OCA nursery 2021-07-05 08:46:53 -04:00
Tyler Goodlet e03d3c9fa8 Don't shield debugger status wait; it causes hangs 2021-07-05 08:46:53 -04:00
Tyler Goodlet c74c2956e4 Catch and delay errors in the root if debugger is active 2021-07-05 08:46:53 -04:00
Tyler Goodlet 2b4cf6157a Don't shield on root cancel; it can cause hangs 2021-07-05 08:46:53 -04:00
Tyler Goodlet 9f1f956902 Don't kill root's immediate children when in debug
If the root calls `trio.Process.kill()` on immediate child proc teardown
when the child is using pdb, we can get stdstreams clobbering that
results in a pdb++ repl where the user can't see what's been typed. Not
killing such children on cancellation / error seems to resolve this
issue whilst still giving reliable termination. For now, keep that
special-case path until it becomes a problem for ensuring zombie
reaps.
2021-07-05 08:46:53 -04:00
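
A tiny sketch of that special case, with `debugger_in_use` as a hypothetical stand-in for however the spawner detects an active debug/tty lock:

```python
import trio

async def maybe_hard_kill(
    proc: trio.Process,
    debugger_in_use: bool,
) -> None:
    # Skip the SIGKILL path while a child may be holding the tty for a
    # pdb session (killing it clobbers stdstreams); rely on graceful
    # termination instead. Illustrative only, not the spawner's code.
    if debugger_in_use:
        await proc.wait()
        return
    proc.kill()
    await proc.wait()
```
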
Tyler Goodlet fcd73568a6 Add debug example that causes pdb stdin clobbering 2021-07-05 08:46:53 -04:00
Tyler Goodlet e1533d35dc Avoid mutate-during-iterate error 2021-07-05 08:46:09 -04:00
Tyler Goodlet 8371621e57 Expect context cancelled when we cancel 2021-07-05 08:46:09 -04:00
Tyler Goodlet 377b8c163c Add pre-stream open error conditions 2021-07-05 08:46:09 -04:00
Tyler Goodlet 6e75913480 De-densify some code 2021-07-05 08:46:09 -04:00
Tyler Goodlet 6f22ee8621 Always shield-cancel the caller on cancel-causing errors, add teardown logging 2021-07-05 08:46:09 -04:00
Tyler Goodlet 17fca76865 First try: pack cancelled tracebacks and ship to caller 2021-07-05 08:45:57 -04:00
Tyler Goodlet 627f1076d6 Add temp warning msg for context cancel call 2021-07-05 08:45:15 -04:00
Tyler Goodlet ced5d42cd4 Add some brief todo notes on idea of shielded breakpoint 2021-07-05 08:45:15 -04:00
Tyler Goodlet 17dc6aaa2d Consider relaying context error via raised-in-scope-nursery task 2021-07-05 08:45:13 -04:00
Tyler Goodlet 288e2b5db1 Set stream "end of channel" after shielded check!
Another face palm that was causing serious issues for code that is using
the `.shielded` feature..

Add a bunch more detailed comments for all this subtlety and hopefully
get it right once and for all. Also aggregated the `trio` errors that
should trigger closure inside `.aclose()`, hopefully that's right too.
2021-07-05 08:44:25 -04:00
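
An illustrative sketch (not tractor's actual stream type) of the two pieces mentioned above: an internal end-of-channel flag checked by `receive()`, plus an `.aclose()` that aggregates the `trio` errors which should simply mean "already closed":

```python
import trio

class StreamSketch:
    # Hypothetical receive-stream wrapper around the msg loop's feeder
    # memory channel; names and structure are assumptions.
    def __init__(self, rx_chan: trio.MemoryReceiveChannel):
        self._rx_chan = rx_chan
        self._eoc = False  # "end of channel" flag

    async def receive(self):
        if self._eoc:
            # stream was marked terminated; stop iteration for callers
            raise trio.EndOfChannel
        return await self._rx_chan.receive()

    async def aclose(self) -> None:
        self._eoc = True  # prevent any further iteration
        try:
            await self._rx_chan.aclose()
        except (trio.ClosedResourceError, trio.BrokenResourceError):
            # the underlying channel was already torn down elsewhere
            pass
```
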
Tyler Goodlet 59c8f72952 Don't clobber msg loop mem chan on rx stream close
Revert this change since it really is poking at internals and doesn't
make a lot of sense. If the context is going to be cancelled then the
msg loop will tear down the feed memory channel when ready, we don't
need to be clobbering it and confusing the runtime machinery lol.
2021-07-05 08:44:25 -04:00
Tyler Goodlet 197d291ba8 Modernize streaming tests 2021-07-05 08:44:25 -04:00
Tyler Goodlet 43ce533dbf Speedup the dynamic pubsub test 2021-07-05 08:44:25 -04:00
Tyler Goodlet f2b1ef3fc9 Add detailed ``@tractor.context`` cancellation/termination tests 2021-07-05 08:44:25 -04:00
Tyler Goodlet 87f1af0d85 Drop trailing comma 2021-07-05 08:44:25 -04:00
Tyler Goodlet 201392a586 Adjustments for non-frozen context dataclass change 2021-07-05 08:44:25 -04:00
Tyler Goodlet 83c4b930dc Wait for debugger lock task context termination 2021-07-05 08:44:25 -04:00
Tyler Goodlet 008314554c Fix exception typing 2021-07-05 08:44:25 -04:00
Tyler Goodlet 0af58522a4 Explicitly formalize context/streaming teardown
Add clear teardown semantics for `Context` such that the remote side
cancellation propagation happens only on error or if client code
explicitly requests it (either by exit flag to `Portal.open_context()`
or by manually calling `Context.cancel()`).  Add `Context.result()`
to wait on and capture the final result from a remote context function;
any lingering msg sequence will be consumed/discarded.

Changes in order to make this possible:
- pass the runtime msg loop's feeder receive channel in to the context
  on the calling (portal opening) side such that a final 'return' msg
  can be waited upon using `Context.result()` which delivers the final
  return value from the callee side `@tractor.context` async function.
- always await a final result from the target context function in
  `Portal.open_context()`'s `__aexit__()` if the context has not
  been (requested to be) cancelled by client code on block exit.
- add an internal `Context._cancel_called` for context "cancel
  requested" tracking (much like `trio`'s cancel scope).
- allow flagging a stream as terminated using an internal
  `._eoc` flag which will mark the stream as stopped for iteration.
- drop `StopAsyncIteration` catching in `.receive()`; it does
  nothing.
2021-07-05 08:44:25 -04:00
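
Roughly how the API described above fits together on both sides of a context; this is a hedged sketch written against current tractor docs, so exact kwarg and method names at this particular revision may differ.

```python
import tractor
import trio

@tractor.context
async def counter(
    ctx: tractor.Context,
    limit: int,
) -> int:
    # callee side: ack startup, stream some values, then return a final
    # result which the caller can await via Context.result()
    await ctx.started(limit)
    async with ctx.open_stream() as stream:
        for i in range(limit):
            await stream.send(i)
    return limit

async def main():
    async with tractor.open_nursery() as n:
        portal = await n.start_actor(
            'streamer',
            enable_modules=[__name__],  # kwarg name assumed; may differ here
        )
        async with portal.open_context(counter, limit=3) as (ctx, first):
            async with ctx.open_stream() as stream:
                async for value in stream:
                    print(value)
            # exiting the block without calling ctx.cancel() awaits the
            # final 'return' msg; Context.result() exposes it explicitly
            print(await ctx.result())
        await portal.cancel_actor()

if __name__ == '__main__':
    trio.run(main)
```
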
Tyler Goodlet f8e2d4007c Specially raise a `ContextCancelled` for a task-context rpc 2021-07-05 08:44:25 -04:00
Tyler Goodlet 7069035f8b Expose streaming components at top level 2021-07-05 08:44:25 -04:00
Tyler Goodlet 79c8b75b5d Add a specially handled `ContextCancelled` error 2021-07-05 08:44:25 -04:00