forked from goodboy/tractor
Commit Graph

711 Commits (fd7a2d378a6834b884adabc591a3cd86fecdd101)

Author SHA1 Message Date
Tyler Goodlet fd7a2d378a Avoid mutate during iterate error 2021-07-02 16:43:55 -04:00
Tyler Goodlet 4bd583786a Expect context cancelled when we cancel 2021-07-02 16:43:55 -04:00
Tyler Goodlet 06ad9c10b6 Add pre-stream open error conditions 2021-07-02 16:43:55 -04:00
Tyler Goodlet 8d7eacdc02 De-densify some code 2021-07-02 16:43:55 -04:00
Tyler Goodlet c2f6c39f8f Always shield cancel the caller on cancel-causing-errors, add teardown logging 2021-07-02 16:43:55 -04:00
Tyler Goodlet a602de02a9 First try: pack cancelled tracebacks and ship to caller 2021-07-02 16:43:53 -04:00
Tyler Goodlet df3ceffc77 Add temp warning msg for context cancel call 2021-07-02 16:42:56 -04:00
Tyler Goodlet 94fd6e5857 Add some brief todo notes on idea of shielded breakpoint 2021-07-02 16:42:56 -04:00
Tyler Goodlet 589d16dd95 Consider relaying context error via raised-in-scope-nursery task 2021-07-02 16:42:56 -04:00
Tyler Goodlet aace4eae5f Change trace to transport level 2021-07-02 16:42:32 -04:00
Tyler Goodlet 62a81a7e73 Flip "trace" level to "transport" level logging 2021-07-02 16:41:11 -04:00
Tyler Goodlet bd71e49f89 Add fast fail test using the context api 2021-07-02 15:21:52 -04:00
Tyler Goodlet 340d1f6182 Adjust debug tests to accommodate no more root clobbering 2021-07-02 15:21:39 -04:00
Tyler Goodlet 3483ed4e6f Go back to only logging tbs on no debugger 2021-07-02 15:21:17 -04:00
Tyler Goodlet 2f5dc0783f Comment hard-kill-sidestep for now since nursery version covers it? 2021-07-02 15:20:44 -04:00
Tyler Goodlet a06e9d2a9e Go back to only logging crashes if no pdb gets engaged 2021-07-02 15:20:16 -04:00
Tyler Goodlet 3b3abe101c Solve the root-cancels-child-in-tty-lock race
Finally this makes a cancelled root actor nursery not clobber child
tasks which request and lock the root's tty for the debugger repl.

Using an edge-triggered event which is set after all fifo-lock-queued
tasks are complete, we can be sure that no lingering child tasks are
going to get interrupted during pdb use and tty lock acquisition.
Further, even if new tasks do queue up to get the lock, the root will
incrementally send cancel msgs to each sub-actor only once the tty is
not locked by a (set of) child request task(s). Add shielding around all
the critical sections where the child attempts to allocate the lock from
the root such that it won't be disrupted from cancel messages from the
root after the acquire lock transaction has started.
2021-07-02 15:13:17 -04:00
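The edge-triggered "all fifo-lock-queued tasks complete" event described in this commit can be sketched roughly as follows. This is a hypothetical stand-in using stdlib `asyncio` (the project itself uses `trio`), and the names `TTYLock`, `no_waiters`, and `demo` are invented for illustration:

```python
import asyncio

class TTYLock:
    """Hypothetical sketch: a fifo-queued tty lock whose ``no_waiters``
    event is only (re)set once every queued child request has drained,
    so the root knows it is safe to start sending cancel msgs."""

    def __init__(self) -> None:
        self._lock = asyncio.Lock()        # wakes waiters in FIFO order
        self._pending = 0                  # queued + active lock requests
        self.no_waiters = asyncio.Event()
        self.no_waiters.set()              # initially unlocked, empty queue

    async def acquire(self) -> None:
        self._pending += 1
        self.no_waiters.clear()
        await self._lock.acquire()

    def release(self) -> None:
        self._lock.release()
        self._pending -= 1
        if self._pending == 0:
            self.no_waiters.set()          # edge trigger: queue fully drained

async def demo() -> list:
    lock = TTYLock()
    order = []

    async def child(name: str) -> None:
        await lock.acquire()               # a child grabbing the root's tty
        try:
            order.append(name)             # "use the pdb repl"
            await asyncio.sleep(0)
        finally:
            lock.release()

    tasks = [asyncio.ensure_future(child(n)) for n in ("a", "b", "c")]
    await asyncio.sleep(0)                 # let the children queue up
    await lock.no_waiters.wait()           # root: don't cancel while locked
    await asyncio.gather(*tasks)
    return order
```

The key point mirrored from the commit: the root parks on `no_waiters` and only proceeds to cancellation once no child holds or awaits the tty lock.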
Tyler Goodlet 018e138461 Distinguish between a local pdb unlock and the tty unlock in root 2021-07-02 14:55:50 -04:00
Tyler Goodlet 84358e7443 Fix hard kill in debug mode; only do it when debug lock is empty 2021-07-02 14:55:20 -04:00
Tyler Goodlet 34234fb4fc Move some info logs to runtime level 2021-07-02 14:54:54 -04:00
Tyler Goodlet ea6c2504c5 Add PDB level and make runtime below info but above debug 2021-07-02 14:54:40 -04:00
Tyler Goodlet b1f13a7002 Move debugger wait inside OCA nursery 2021-07-02 14:54:26 -04:00
Tyler Goodlet 8b19c9ff6e Don't shield debugger status wait; it causes hangs 2021-07-02 14:54:17 -04:00
Tyler Goodlet 5f1efd9eae Catch and delay errors in the root if debugger is active 2021-07-02 14:53:59 -04:00
Tyler Goodlet bd189f75cc Don't shield on root cancel; it can cause hangs 2021-07-02 14:51:20 -04:00
Tyler Goodlet 01208739ff Don't kill root's immediate children when in debug
If the root calls `trio.Process.kill()` on immediate child proc teardown
when the child is using pdb, we can get stdstreams clobbering that
results in a pdb++ repl where the user can't see what's been typed. Not
killing such children on cancellation / error seems to resolve this
issue whilst still giving reliable termination. For now, keep that
special-cased path until it becomes a problem for ensuring zombie
reaps.
2021-07-02 14:49:39 -04:00
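A minimal sketch of the special-cased teardown path this commit describes; `maybe_hard_kill()` and the lock-holder set are invented names for illustration, not tractor's actual API:

```python
from typing import Protocol

class Proc(Protocol):
    """Anything with a hard ``kill()``, e.g. a ``trio.Process``."""
    def kill(self) -> None: ...

def maybe_hard_kill(proc: Proc, uid: str, debug_lock_holders: set) -> bool:
    """Hypothetical sketch: skip the hard ``kill()`` for any immediate
    child that may be holding the debugger/tty lock, since killing it
    clobbers the pdb++ repl's stdstreams; rely on graceful cancellation
    for those children instead.  Returns True iff the proc was killed."""
    if uid in debug_lock_holders:
        return False   # in debug: let the child exit gracefully
    proc.kill()        # normal path: hard kill on teardown
    return True
```

So teardown only hard-kills children that are not interacting with the debugger.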
Tyler Goodlet 6f19fa3107 Add debug example that causes pdb stdin clobbering 2021-07-02 14:48:59 -04:00
Tyler Goodlet a1603709ab Set stream "end of channel" after shielded check!
Another face palm that was causing serious issues for code using
the `.shielded` feature.

Add a bunch more detailed comments for all this subtlety and hopefully
get it right once and for all. Also aggregated the `trio` errors that
should trigger closure inside `.aclose()`, hopefully that's right too.
2021-07-02 11:59:12 -04:00
Tyler Goodlet 78b4eef7ee Don't clobber msg loop mem chan on rx stream close
Revert this change since it really is poking at internals and doesn't
make a lot of sense. If the context is going to be cancelled then the
msg loop will tear down the feed memory channel when ready, we don't
need to be clobbering it and confusing the runtime machinery lol.
2021-07-02 11:59:12 -04:00
Tyler Goodlet 211fb07074 Modernize streaming tests 2021-07-02 11:59:12 -04:00
Tyler Goodlet ae45b5ff1d Speedup the dynamic pubsub test 2021-07-02 11:59:12 -04:00
Tyler Goodlet c542b915d6 Add detailed ``@tractor.context`` cancellation/termination tests 2021-07-02 11:59:12 -04:00
Tyler Goodlet 6bd16749f0 Drop trailing comma 2021-07-02 11:59:12 -04:00
Tyler Goodlet 8f468a8c86 Adjustments for non-frozen context dataclass change 2021-07-02 11:59:12 -04:00
Tyler Goodlet 3fa36f64ac Wait for debugger lock task context termination 2021-07-02 11:59:12 -04:00
Tyler Goodlet be39ff38e4 Fix exception typing 2021-07-02 11:59:12 -04:00
Tyler Goodlet 9cd5d2d7b9 Explicitly formalize context/streaming teardown
Add clear teardown semantics for `Context` such that the remote side
cancellation propagation happens only on error or if client code
explicitly requests it (either by exit flag to `Portal.open_context()`
or by manually calling `Context.cancel()`).  Add `Context.result()`
to wait on and capture the final result from a remote context function;
any lingering msg sequence will be consumed/discarded.

Changes in order to make this possible:
- pass the runtime msg loop's feeder receive channel in to the context
  on the calling (portal opening) side such that a final 'return' msg
  can be waited upon using `Context.result()` which delivers the final
  return value from the callee side `@tractor.context` async function.
- always await a final result from the target context function in
  `Portal.open_context()`'s `__aexit__()` if the context has not
  been (requested to be) cancelled by client code on block exit.
- add an internal `Context._cancel_called` for context "cancel
  requested" tracking (much like `trio`'s cancel scope).
- allow flagging a stream as terminated using an internal
  `._eoc` flag which will mark the stream as stopped for iteration.
- drop `StopAsyncIteration` catching in `.receive()`; it does
  nothing.
2021-07-02 11:59:12 -04:00
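The `Context.result()` / `_cancel_called` semantics listed above might be sketched like so; the msg-dict shapes and the queue stand-in for the msg loop's feeder channel are assumptions for illustration, not tractor's real wire format:

```python
import asyncio

class Context:
    """Hypothetical sketch of the teardown semantics: track a cancel
    *request* separately from actual cancellation (much like a `trio`
    cancel scope), and let ``.result()`` drain any lingering stream
    msgs until the final 'return' msg arrives."""

    def __init__(self) -> None:
        self._recv = asyncio.Queue()   # feeder receive channel stand-in
        self._cancel_called = False    # "cancel requested" flag
        self._eoc = False              # end-of-channel: stream stopped

    async def cancel(self) -> None:
        self._cancel_called = True     # record the request; runtime does the rest

    async def result(self):
        # consume/discard any leftover stream msgs until the callee's
        # final 'return' value shows up
        while True:
            msg = await self._recv.get()
            if 'return' in msg:
                self._eoc = True       # no more msgs after the final result
                return msg['return']
            # else: a lingering 'yield' msg from the stream, drop it

async def demo():
    ctx = Context()
    for msg in ({'yield': 1}, {'yield': 2}, {'return': 'final'}):
        ctx._recv.put_nowait(msg)
    return await ctx.result(), ctx._eoc
```

The design point from the commit is that remote cancellation is opt-in (error, exit flag, or an explicit `Context.cancel()`), while a clean exit simply awaits the final result.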
Tyler Goodlet 4601c88574 Specially raise a `ContextCancelled` for a task-context rpc 2021-07-02 11:59:07 -04:00
Tyler Goodlet a1488a1773 Expose streaming components at top level 2021-07-02 11:58:45 -04:00
Tyler Goodlet e058506a00 Add a specially handled `ContextCancelled` error 2021-07-02 11:58:44 -04:00
Tyler Goodlet 19a23fefa9 Add a multi-task streaming test 2021-07-02 11:58:01 -04:00
Tyler Goodlet 40ad00ce02 Avoid mutate on iterate race 2021-07-02 11:58:01 -04:00
Tyler Goodlet b3caf846fc Only close recv chan if we get a ref 2021-07-02 11:58:01 -04:00
Tyler Goodlet 40cb3585c1 Add error case 2021-07-02 11:58:01 -04:00
Tyler Goodlet 88dbaff11b Support no arg to `Context.started()` like trio 2021-07-02 11:58:01 -04:00
Tyler Goodlet 3e34f0a374 Fix up var naming and typing 2021-07-02 11:58:01 -04:00
Tyler Goodlet 9e7bed646d Only send stop msg if not received from far end 2021-07-02 11:58:01 -04:00
Tyler Goodlet 0b73a4b61e Expose msg stream types at top level 2021-07-02 11:58:01 -04:00
Tyler Goodlet eb237f24cd Add dynamic pubsub test using new bidir stream apis 2021-07-02 11:58:01 -04:00
Tyler Goodlet 83f1e79fdd Use context for remote debugger locking
A context is the natural fit (vs. a receive stream) for locking the root
proc's tty usage via its `.started()` sync point. Simplify the
`_breakpoint()` routine to be a simple async func instead of all the
"returning a coroutine" stuff from before we decided that
`tractor.breakpoint()` must be async. Use the `runtime` level for lock
logging, making it easier to trace.
2021-07-02 11:58:01 -04:00
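The `.started()`-as-sync-point pattern from this commit reads roughly like the following; it is sketched with plain `asyncio` primitives instead of a real `@tractor.context` function, so all the names here are stand-ins:

```python
import asyncio

async def lock_tty(lock: asyncio.Lock,
                   started: asyncio.Event,
                   release: asyncio.Event) -> None:
    """Hypothetical locker task: acquire the root's tty lock, signal the
    requester via a ``started``-style event (the natural sync point a
    context gives you over a bare receive stream), then hold the lock
    until the repl session is done."""
    async with lock:
        started.set()          # like `await ctx.started()`
        await release.wait()   # keep the tty locked for the repl

async def demo() -> bool:
    lock = asyncio.Lock()
    started, release = asyncio.Event(), asyncio.Event()
    task = asyncio.ensure_future(lock_tty(lock, started, release))
    await started.wait()       # requester: repl is now safe to use
    locked_during_repl = lock.locked()
    release.set()              # done debugging, let the lock go
    await task
    return locked_during_repl and not lock.locked()
```

A context's built-in sync point avoids hand-rolling an ack message over a receive stream just to learn the lock was acquired.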