Commit Graph

699 Commits (8371621e57e37324eba6619a9aa553ab44789a72)

Author SHA1 Message Date
Tyler Goodlet 8371621e57 Expect context cancelled when we cancel 2021-07-05 08:46:09 -04:00
Tyler Goodlet 377b8c163c Add pre-stream open error conditions 2021-07-05 08:46:09 -04:00
Tyler Goodlet 6e75913480 De-densify some code 2021-07-05 08:46:09 -04:00
Tyler Goodlet 6f22ee8621 Always shield cancel the caller on cancel-causing-errors, add teardown logging 2021-07-05 08:46:09 -04:00
Tyler Goodlet 17fca76865 First try: pack cancelled tracebacks and ship to caller 2021-07-05 08:45:57 -04:00
Tyler Goodlet 627f1076d6 Add temp warning msg for context cancel call 2021-07-05 08:45:15 -04:00
Tyler Goodlet ced5d42cd4 Add some brief todo notes on idea of shielded breakpoint 2021-07-05 08:45:15 -04:00
Tyler Goodlet 17dc6aaa2d Consider relaying context error via raised-in-scope-nursery task 2021-07-05 08:45:13 -04:00
Tyler Goodlet 288e2b5db1 Set stream "end of channel" after shielded check!
Another face palm that was causing serious issues for code using
the `.shielded` feature.

Add a bunch more detailed comments for all this subtlety and hopefully
get it right once and for all. Also aggregated the `trio` errors that
should trigger closure inside `.aclose()`, hopefully that's right too.
2021-07-05 08:44:25 -04:00
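A minimal sketch of the ordering fix described above (not the actual `tractor` source; the attribute names are assumptions): the "end of channel" flag is only set *after* the shield check, and the `trio` errors that should just mean "already closed" are aggregated in `.aclose()`:

```python
import trio

class ReceiveMsgStream:
    # hypothetical, trimmed-down stand-in for the real stream type
    def __init__(self, rx_chan: trio.MemoryReceiveChannel, shield: bool = False):
        self._rx_chan = rx_chan
        self._shielded = shield
        self._eoc = False  # "end of channel" flag checked by `.receive()`

    async def aclose(self):
        # a shielded stream must survive its consumer's block exit so
        # other tasks can keep iterating it; bail *before* flagging eoc
        if self._shielded:
            return

        # only after the shield check do we mark the stream terminated
        self._eoc = True

        try:
            await self._rx_chan.aclose()
        except (trio.ClosedResourceError, trio.BrokenResourceError):
            # the runtime may have already torn down the feeder channel;
            # treat either error as an acceptable "already closed" case
            pass
```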
Tyler Goodlet 59c8f72952 Don't clobber msg loop mem chan on rx stream close
Revert this change since it really is poking at internals and doesn't
make a lot of sense. If the context is going to be cancelled then the
msg loop will tear down the feed memory channel when ready; we don't
need to be clobbering it and confusing the runtime machinery lol.
2021-07-05 08:44:25 -04:00
Tyler Goodlet 197d291ba8 Modernize streaming tests 2021-07-05 08:44:25 -04:00
Tyler Goodlet 43ce533dbf Speedup the dynamic pubsub test 2021-07-05 08:44:25 -04:00
Tyler Goodlet f2b1ef3fc9 Add detailed ``@tractor.context`` cancellation/termination tests 2021-07-05 08:44:25 -04:00
Tyler Goodlet 87f1af0d85 Drop trailing comma 2021-07-05 08:44:25 -04:00
Tyler Goodlet 201392a586 Adjustments for non-frozen context dataclass change 2021-07-05 08:44:25 -04:00
Tyler Goodlet 83c4b930dc Wait for debugger lock task context termination 2021-07-05 08:44:25 -04:00
Tyler Goodlet 008314554c Fix exception typing 2021-07-05 08:44:25 -04:00
Tyler Goodlet 0af58522a4 Explicitly formalize context/streaming teardown
Add clear teardown semantics for `Context` such that the remote side
cancellation propagation happens only on error or if client code
explicitly requests it (either by exit flag to `Portal.open_context()`
or by manually calling `Context.cancel()`).  Add `Context.result()`
to wait on and capture the final result from a remote context function;
any lingering msg sequence will be consumed/discarded.

Changes in order to make this possible:
- pass the runtime msg loop's feeder receive channel in to the context
  on the calling (portal opening) side such that a final 'return' msg
  can be waited upon using `Context.result()` which delivers the final
  return value from the callee side `@tractor.context` async function.
- always await a final result from the target context function in
  `Portal.open_context()`'s `__aexit__()` if the context has not
  been (requested to be) cancelled by client code on block exit.
- add an internal `Context._cancel_called` for context "cancel
  requested" tracking (much like `trio`'s cancel scope).
- allow flagging a stream as terminated using an internal
  `._eoc` flag which will mark the stream as stopped for iteration.
- drop `StopAsyncIteration` catching in `.receive()`; it does
  nothing.
2021-07-05 08:44:25 -04:00
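A hedged usage sketch of the teardown semantics above; the exact signatures are assumptions pieced together from this commit message rather than taken from the real test suite:

```python
import tractor
import trio

@tractor.context
async def worker(ctx, x: int):
    await ctx.started()   # sync point with the opening (portal) side
    return x * 2          # final 'return' msg captured by `Context.result()`

async def main():
    async with tractor.open_nursery() as n:
        portal = await n.start_actor('worker', enable_modules=[__name__])

        async with portal.open_context(worker, x=21) as (ctx, first):
            # nothing here requests cancellation, so block exit awaits the
            # callee's final result instead of cancelling the remote task
            assert await ctx.result() == 42

        # an explicit `await ctx.cancel()` (or the exit flag) would instead
        # flip the internal `_cancel_called` flag and cancel the far end

        await portal.cancel_actor()

if __name__ == '__main__':
    trio.run(main)
```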
Tyler Goodlet f8e2d4007c Specially raise a `ContextCancelled` for a task-context rpc 2021-07-05 08:44:25 -04:00
Tyler Goodlet 7069035f8b Expose streaming components at top level 2021-07-05 08:44:25 -04:00
Tyler Goodlet 79c8b75b5d Add a specially handled `ContextCancelled` error 2021-07-05 08:44:25 -04:00
Tyler Goodlet b3437dacbe Add a multi-task streaming test 2021-07-05 08:44:25 -04:00
Tyler Goodlet 910df139ad Avoid mutate on iterate race 2021-07-05 08:44:25 -04:00
Tyler Goodlet a4a6df5b5a Only close recv chan if we get a ref 2021-07-05 08:44:25 -04:00
Tyler Goodlet 732b9fe63b Add error case 2021-07-05 08:44:25 -04:00
Tyler Goodlet 8017e55b85 Support no arg to `Context.started()` like trio 2021-07-05 08:44:25 -04:00
Tyler Goodlet 20e73c5ce7 Fix up var naming and typing 2021-07-05 08:44:25 -04:00
Tyler Goodlet 18135b46f0 Only send stop msg if not received from far end 2021-07-05 08:44:25 -04:00
Tyler Goodlet bebe26ce4b Expose msg stream types at top level 2021-07-05 08:44:25 -04:00
Tyler Goodlet ddc6c85d60 Add dynamic pubsub test using new bidir stream apis 2021-07-05 08:44:25 -04:00
Tyler Goodlet be022b8e2b Use context for remote debugger locking
A context is the natural fit (vs. a receive stream) for locking the root
proc's tty usage via its `.started()` sync point. Simplify the
`_breakpoint()` routine to be a simple async func instead of all this
"returning a coroutine" stuff from before we decided that
`tractor.breakpoint()` must be async. Use the `runtime` log level for
lock logging, making it easier to trace.
2021-07-05 08:44:25 -04:00
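Roughly, the shape is something like the following (an illustrative sketch only, not the real `tractor._debug` code; the function name, lock, and messages are made up): the root actor only calls `.started()` once the tty lock is held, then keeps the context open until the child signals it has left the debugger:

```python
import tractor
import trio

_debug_lock = trio.Lock()   # hypothetical root-actor-wide tty lock

@tractor.context
async def lock_tty(ctx):
    async with _debug_lock:
        # `.started()` is the sync point: the child only enters the
        # debugger once this fires, i.e. once the lock is acquired
        await ctx.started('locked')
        async with ctx.open_stream() as stream:
            # hold the lock until the child reports the REPL was exited
            await stream.receive()
```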
Tyler Goodlet d59e9edec8 Be more pedantic with error handling 2021-07-05 08:44:25 -04:00
Tyler Goodlet 68d600d7ee Fix typing 2021-07-05 08:44:25 -04:00
Tyler Goodlet 3f1adc0b6f Parametrize with async for style tests 2021-07-05 08:44:25 -04:00
Tyler Goodlet ebf53157c0 Support passing `shield` at stream contruction 2021-07-05 08:44:25 -04:00
Tyler Goodlet 3625a8cd56 Add basic test set 2021-07-05 08:44:25 -04:00
Tyler Goodlet b706cd9d4d Cancel scope on stream consumer completion 2021-07-05 08:44:25 -04:00
Tyler Goodlet babe62a511 Expose `@context` decorator at top level 2021-07-05 08:44:25 -04:00
Tyler Goodlet 4371a0a898 Add initial bi-directional streaming
This mostly adds the api described in
https://github.com/goodboy/tractor/issues/53#issuecomment-806258798

The first draft summary:
- formalize bidir streaming using the `trio.Channel` style interface
  which we derive as a `MsgStream` type.
- add `Portal.open_context()` which provides a `trio.Nursery.start()`
  remote task invocation style for setting up and tearing down tasks
  contexts in remote actors.
- add a distinct `'started'` message to the ipc protocol to facilitate
  `Context.started()` with a first return value.
- for our `ReceiveMsgStream` type, don't cancel the remote task in
  `.aclose()`; this is now done explicitly by the surrounding `Context`
  usage: `Context.cancel()`.
- streams in either direction still use a `'yield'` message keeping the
  proto mostly symmetric without having to worry about which side is the
  caller / portal opener.
- subtlety: only allow sending a `'stop'` message during a 2-way
  streaming context from `ReceiveMsgStream.aclose()`; a detailed
  explanatory comment is included.

Relates to #53
2021-07-05 08:44:25 -04:00
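A condensed sketch of both sides of the api summarized above; anything beyond the names mentioned in the summary (e.g. `Context.open_stream()`) is an assumption:

```python
import tractor
import trio

@tractor.context
async def echo(ctx):
    await ctx.started('ready')          # the new distinct 'started' msg
    async with ctx.open_stream() as stream:
        async for msg in stream:        # 'yield' msgs from the caller...
            await stream.send(msg)      # ...and 'yield' msgs back; symmetric

async def main():
    async with tractor.open_nursery() as n:
        portal = await n.start_actor('echoer', enable_modules=[__name__])

        # `trio.Nursery.start()`-style remote task invocation
        async with portal.open_context(echo) as (ctx, first):
            assert first == 'ready'
            async with ctx.open_stream() as stream:
                await stream.send('hello')
                assert await stream.receive() == 'hello'

            # closing the stream no longer cancels the remote task;
            # that's now an explicit `Context.cancel()` decision
            await ctx.cancel()

        await portal.cancel_actor()

if __name__ == '__main__':
    trio.run(main)
```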
Tyler Goodlet 55760b3fe0 Only expect further message in non-name-error first case 2021-07-04 12:55:36 -04:00
Tyler Goodlet 6aab16f877 Drop added logging around root cancel 2021-07-04 11:00:08 -04:00
Tyler Goodlet caa70245e0 Try remapping all broken errs wholesale on windows 2021-07-04 10:47:15 -04:00
Tyler Goodlet 9c9309faf8 Handle race for tty by child actors 2021-07-04 10:25:41 -04:00
Tyler Goodlet 3f75732b02 Remap windows specific connection reset error 2021-07-04 10:25:19 -04:00
Tyler Goodlet 1edf5c2f06 Specially remap TCP 104-connection-reset to `TransportClosed`
Since we currently have no real "discovery protocol" between process
trees, the current naive approach is to check via a connect and drop to
see if a TCP server is bound to a particular address during root actor
startup. This was a historical decision and had no real grounding beyond
taking a simple approach to get something working when the project
was first started.

This is obviously problematic from an error handling perspective since
we need to prevent such quick connect-and-drops from cancelling an
"arbiter"'s (registry actor's) channel-msg loop machinery (which would
propagate and cancel the actor).

For now we map this particular TCP error, which gets remapped by `trio`
as a `trio.BrokenResourceError`, to our own internal `TransportClosed`,
which is swallowed by channel message loop processing and indicates
a graceful teardown of the far end actor.
2021-07-03 18:57:54 -04:00
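Something like the following captures the remap being described (a simplified sketch; the real code lives in `tractor`'s channel/transport layer and these names are illustrative):

```python
import errno
import trio

class TransportClosed(Exception):
    "Graceful termination of a channel's underlying transport."

async def recv_remapped(stream: trio.SocketStream) -> bytes:
    try:
        return await stream.receive_some()
    except trio.BrokenResourceError as err:
        src = err.__cause__
        # `trio` wraps the OS-level error; unwrap it and remap a TCP
        # 104-connection-reset to our graceful teardown signal so the
        # msg loop can swallow it instead of cancelling the actor
        if (
            isinstance(src, ConnectionResetError)
            and src.errno == errno.ECONNRESET
        ):
            raise TransportClosed('far end dropped the connection') from err
        raise
```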
Tyler Goodlet a2d400583f Fix tuple type 2021-07-02 18:10:06 -04:00
Tyler Goodlet 85246d2df3 Benign deps reorg 2021-07-02 11:56:14 -04:00
Tyler Goodlet b372f4c92b Handle top level multierror that presents now? 2021-07-02 11:55:16 -04:00
Tyler Goodlet 32b4ae0603 Accept transport closed error during handshake and msg loop 2021-07-02 11:38:24 -04:00
Tyler Goodlet 80e100f818 Add our own "transport closed" signal
This changes some super old (and bad) code from the project's very early
days. For some ridiculous reason I must have thought masking `trio`'s
internal stream / transport errors and a TCP EOF as `StopAsyncIteration`
was somehow a good idea. The reality is you probably
want to know the difference between an unexpected transport error
and a simple EOF lol. This begins to resolve that by adding our own
special `TransportClosed` error to signal the "graceful" termination of
a channel's underlying transport. Oh, and this builds on the `msgspec`
integration which helped shed light on the core issues here B)
2021-07-02 11:36:22 -04:00
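A minimal sketch of the distinction being drawn (illustrative only; `chan` and `handle` are stand-ins): only the explicit `TransportClosed` signal is treated as a clean EOF by the msg loop, while any other transport error still propagates instead of being masked:

```python
import logging

log = logging.getLogger(__name__)

class TransportClosed(Exception):
    "Graceful termination of a channel's underlying transport."

async def msg_loop(chan, handle):
    # `chan` is any async-iterable channel; `handle` processes each msg
    try:
        async for msg in chan:
            await handle(msg)
    except TransportClosed:
        # a clean EOF / graceful close by the far end: log and return
        log.debug('channel closed by the far end')
        return
    # anything else (e.g. `trio.BrokenResourceError`) is a *real*
    # transport failure and propagates, rather than being hidden
    # behind `StopAsyncIteration` like the old code did
```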