This fixes a previously undetected bug where, if an
`.open_channel_from()` spawned task errored, the error would not be
propagated to the `trio` side and instead would fail silently with
a console log error. What was most odd is that it only seemed easy to
trigger when you put a slight task sleep before the error is raised
(:eyeroll:). This patch adds a few things to address this and, in
general, improve inter-task lifetime syncing (a rough sketch of the
idea follows the list below):
- add `LinkedTaskChannel._trio_exited: bool` a flag set from the `trio`
side when the channel block exits.
- add a `wait_on_aio_task: bool` flag to `translate_aio_errors` which
  toggles whether to wait on the `asyncio` task's termination event on
  exit.
- cancel the `asyncio` task if the trio side has ended, when
`._trio_exited == True`.
- always close the `trio` mem channel when the task exits such that
the `asyncio` side can error on any next `.send()` call.
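Here's a rough, `trio`-only sketch of that exit-sync policy; the real
mechanics live in `tractor.to_asyncio` and bridge an actual `asyncio`
task, so the names here (`wait_on_task`, the producer/consumer helpers)
are purely illustrative:

```python
import trio


async def producer(to_consumer: trio.MemorySendChannel, done: trio.Event):
    # stands in for the `asyncio`-side task
    try:
        while True:
            await trio.sleep(0.1)
            # once the consumer closes its end, the next `.send()` errors
            await to_consumer.send('tick')
    except trio.BrokenResourceError:
        pass
    finally:
        done.set()


async def main(wait_on_task: bool = True):
    send, recv = trio.open_memory_channel(1)
    done = trio.Event()

    async with trio.open_nursery() as n:
        n.start_soon(producer, send, done)
        try:
            # stands in for the `trio` side of the linked-task channel
            async with recv:
                print(await recv.receive())
        finally:
            # consumer exited: the channel is always closed (by the
            # `async with` above) and we either wait for the other
            # task's termination event or cancel it outright.
            if wait_on_task:
                with trio.move_on_after(1):
                    await done.wait()
            n.cancel_scope.cancel()


trio.run(main)
```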
Verify that if the `asyncio`-side task cancels (itself) we raise that
`asyncio.CancelledError` on the `trio` side. In the case where `trio`
initiated the cancel, whether or not the `asyncio` side ended up
raising `CancelledError` doesn't really matter to us as long as the far
task did indeed terminate.
The underlying issue is actually that a nested `Context` which was
cancelled was overriding the local error that triggered that secondary
context's cancellation in the first place XD. This test catches that
case.
Relates to https://github.com/pikers/piker/issues/244
This actually catches a lot of bugs to do with stream termination and
``MsgStream.subscribe()`` usage where the underlying stream closes from
the producer side. When this passes, the broadcaster logic will have to
ensure non-lossy fan-out semantics and closure tracking.
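For reference, here's a tiny, `trio`-only illustration of what
"non-lossy fan-out" means in practice: every message reaches every
subscriber (a slow subscriber back-pressures rather than drops), and
producer-side closure ends all subscriptions. This is not `tractor`'s
broadcast implementation, just a sketch of the semantics:

```python
import trio


async def fan_out(source: trio.MemoryReceiveChannel, subs: list):
    async with source:
        async for msg in source:
            for sub in subs:
                await sub.send(msg)  # blocks (not drops) if a sub lags
    # producer side closed: propagate closure to all subscribers
    for sub in subs:
        await sub.aclose()


async def consumer(name: str, rx: trio.MemoryReceiveChannel):
    async with rx:
        async for msg in rx:
            print(name, 'got', msg)


async def main():
    src_tx, src_rx = trio.open_memory_channel(0)
    sub_chans = [trio.open_memory_channel(0) for _ in range(2)]

    async with trio.open_nursery() as n:
        n.start_soon(fan_out, src_rx, [tx for tx, _ in sub_chans])
        for i, (_, rx) in enumerate(sub_chans):
            n.start_soon(consumer, f'sub{i}', rx)

        async with src_tx:
            for i in range(3):
                await src_tx.send(i)


trio.run(main)
```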
If we make it too fast, a nursery with debug mode children can cancel
too fast and cause some test failures. It's likely not a huge deal
anyway since the purpose of this poll/check is for human interaction
and the current delay isn't really that noticeable.
Decrease log levels in the debug module to avoid console noise when in
use. Toss in some more detailed comments around the new debugger lock
points.
It turns out recent improvements have made the debugger too good,
so we need to just terminate the continue loop in this test when
we finally see the "spawn error" crash out, because the
breakpoint-forever case will literally continue forever XD
Currently if the spawn task is waiting on a daemon actor it is likely
blocked in `await proc.wait()`; however, if the actor nursery is
subsequently cancelled, this checkpoint will be abandoned and the hard
proc reaping sequence will execute, which results in a wait of up to
3 seconds before a "hard" system signal is sent to the child. Ideally
such a cancelled-during-daemon-actor-wait condition is instead handled
by first trying to cancel the remote actor using `Portal.cancel_actor()`
(a "graceful" remote cancel request) which should (presuming normal
runtime operation) result in an immediate collection of the process
after normal actor (remotely triggered) runtime cancellation.
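A hedged sketch of that "graceful first, hard signals later" reap
sequence, using a plain (POSIX) subprocess in place of an actor;
`cancel_actor` here stands in for `Portal.cancel_actor()` and the
timeouts are illustrative:

```python
import trio


async def reap_with_grace(proc: trio.Process, cancel_actor) -> None:
    # `cancel_actor` stands in for `Portal.cancel_actor()`: a "graceful"
    # remote cancel request which should normally collect the child fast.
    await cancel_actor()
    with trio.move_on_after(3) as cs:
        await proc.wait()
    if cs.cancelled_caught:
        # graceful request didn't land in time: escalate to hard signals
        proc.terminate()
        with trio.move_on_after(1) as cs:
            await proc.wait()
        if cs.cancelled_caught:
            proc.kill()
            await proc.wait()


async def main():
    proc = await trio.lowlevel.open_process(['sleep', '10'])

    async def cancel_actor():
        # simulate a daemon child that ignores the graceful request
        await trio.sleep(0)

    await reap_with_grace(proc, cancel_actor)


trio.run(main)
```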
The API we've made here is actually closer to `asyncio.gather()` but
with opening async context managers instead of funcs. Use another event
to allow for graceful teardown of children on non-cancellation exits
and add a doc string.
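This is roughly the shape of a "gather for async context managers":
enter all the acms concurrently, hand back their yielded values, then
tear them down together. The sketch below is illustrative only (the
name `gather_contexts` and the event-based teardown are assumptions,
not the actual implementation):

```python
from contextlib import asynccontextmanager

import trio


@asynccontextmanager
async def gather_contexts(mngrs):
    # assumes at least one async context manager is passed in
    mngrs = list(mngrs)
    values = [None] * len(mngrs)
    all_entered = trio.Event()
    teardown = trio.Event()
    entered = 0

    async def enter(i, mngr):
        nonlocal entered
        async with mngr as value:
            values[i] = value
            entered += 1
            if entered == len(mngrs):
                all_entered.set()
            # hold this acm open until the caller's block exits
            await teardown.wait()

    async with trio.open_nursery() as n:
        for i, mngr in enumerate(mngrs):
            n.start_soon(enter, i, mngr)
        await all_entered.wait()
        try:
            yield values
        finally:
            # graceful teardown of children on any exit
            teardown.set()
```

Intended usage would be along the lines of
`async with gather_contexts([acm_a, acm_b]) as (a, b): ...`.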
With the new fixes to the trio spawner we can expect that both root
*and* depth > 1 nursery-owning actors will now not clobber any children
that are in debug (either via breakpoint or through crashing). The
changed tests now include more checks which ensure the 2nd-level
parent-ish actors also bubble up into `pdb` and don't kill any of their
(crashed) children before they're done debugging themselves.
Follow up to previous commit: extend our simple context test set to
include cancellation via kbi in the parent as well as timeout logic and
testing of the parent opening a stream even though the target actor does
not.
Thanks again to https://github.com/adder46/wrath for discovering this
bug.
Not sure we even have a test for this yet. The main issue discovered by
a user project (https://github.com/adder46/wrath) was that a kbi raised
inside a block like this (with both recv-only and send-recv streams)
would not cancel on the first ctrl-c sent from the console and instead
SIGINT had to be repeatedly sent as many times as there are subactors in
the first level tree. This test catches that as well as just verifies
the basic side-by-side functionality.
Add a couple more tests to check that a parent and sub-task stream can
be lagged and recovered (depending on who's slower). Factor some of the
test machinery into a new ctx mngr to make it all happen.
The whole origin was not having an explicit open/close semantic for
streams. We have that now, so this internal mechanic isn't needed and,
further, our streams become more correct by having `.aclose()` be
independent of cancellation.
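For context, the explicit open/close semantics look roughly like this
from the user side. This is a sketch assuming the post-change streaming
API (`Portal.open_stream_from()`); spawn-side parameter names such as
`enable_modules` may differ by version:

```python
import trio
import tractor


async def stream_squares(limit: int):
    # async gen run in the child actor and streamed back over IPC
    for i in range(limit):
        yield i ** 2
        await trio.sleep(0.01)


async def main():
    async with tractor.open_nursery() as an:
        portal = await an.start_actor('streamer', enable_modules=[__name__])

        # the stream is explicitly opened *and* closed by this block;
        # `.aclose()` happens here, independent of any cancellation.
        async with portal.open_stream_from(stream_squares, limit=5) as stream:
            async for sq in stream:
                print(sq)

        await portal.cancel_actor()


if __name__ == '__main__':
    trio.run(main)
```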
We may get multiple re-entries to the debugger by the `bp_forever`
sub-actor now since the root will incrementally try to cancel it only
when the tty lock is not held.
This resolves and completes #69 allowing all RPC invocation APIs to pass
function references directly instead of explicit `str` names for the
target namespace and function (this is still done implicitly
underneath). This brings us closer to `trio`'s task running API as well
as acknowledges that any inter-host RPC system (and API) will likely
need to be implemented on top of local RPC primitives anyway. Even if
this ends up **not** being true we can always go to "function stubs" as
part of our IAC protocol or add a new method to do explicit namespace
calls: `.run_from_module()` or whatever everyone votes on.
Resolves #69
Further, this commit drops `Actor.statespace` from the entire system
since a user can easily get this same functionality using module
level variables. Fix docs to match all these changes (luckily mostly
already done since the docs reference the example scripts).
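As a usage sketch (assuming current API names such as `start_actor()`
and `enable_modules`), passing the function reference directly now
looks something like:

```python
import trio
import tractor


async def add(a: int, b: int) -> int:
    # plain async func invoked remotely by reference, no `str` names
    return a + b


async def main():
    async with tractor.open_nursery() as an:
        portal = await an.start_actor('adder', enable_modules=[__name__])
        result = await portal.run(add, a=1, b=2)
        assert result == 3
        await portal.cancel_actor()


if __name__ == '__main__':
    trio.run(main)
```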
Add a ``tractor._portal.StreamReceiveChannel.shield_channel()`` context
manager which allows for avoiding the closing of an IPC stream's
underlying channel for the purposes of task re-spawning. Sometimes you
might want to cancel a task consuming a stream but not tear down the IPC
between actors (the default). A common use case might be where the
task's "setup" work needs to be redone but you want to keep the
established portal / channel intact despite the task restart.
Includes a test.
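An abstract, `trio`-only sketch of the idea: cancel the consuming
*task* (e.g. to redo its setup) without closing the underlying channel
it reads from. The memory channel here stands in for the shielded IPC
stream; nothing below is the actual `shield_channel()` implementation:

```python
import trio


async def consume(conn: trio.MemoryReceiveChannel, cancel_after: float):
    with trio.move_on_after(cancel_after):
        async for msg in conn:
            print('got', msg)
    # the task ends here but `conn` was *not* closed, so a re-spawned
    # consumer can keep reading from the same channel.


async def main():
    tx, rx = trio.open_memory_channel(0)

    async def produce():
        async with tx:
            for i in range(10):
                await tx.send(i)
                await trio.sleep(0.05)

    async with trio.open_nursery() as n:
        n.start_soon(produce)
        await consume(rx, cancel_after=0.12)  # first run gets cancelled
        await consume(rx, cancel_after=1.0)   # restart resumes same channel


trio.run(main)
```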
This appears to demonstrate the same bug found in #156. It looks like
cancelling a subactor with a child, while that child is running sync code,
can result in the child never getting cancelled due to some strange
condition where the internal nurseries aren't being torn down as
expected when a `trio.Cancelled` is raised.
The real issue is if the root nursery gets cancelled prior to
de-registration with the arbiter. This doesn't seem easy to
reproduce as a side effect of a KBI, however that is how it was
discovered in practice.
There was code from the last de-registration fix PR that I had commented
out (to do with shielding the arbiter dereg steps in
`Actor._async_main()`) because the block didn't seem to make a
difference under infinite streaming tests. Turns out it **for sure** is
needed under certain conditions (likely if the actor's root nursery is
cancelled prior to actor nursery exit).
This was an attempt to simulate the failure mode if you manually close the
stream **before** cancelling the containing **actor**.
More tests to come I guess.
This truly reproduces #141. It turns out the problem only occurs when
we're cancelled in the middle of consuming "infinite streams".
Good news is this tests a lot of edge cases :)
- ease up on first stream test run deadline
- skip streaming tests in CI for mp backend, period
- give up on > 1 depth nested spawning with mp
- completely give up on slow spawning on windows
Verify ctrl-c, as a user would trigger it, properly cancels the actor
tree. This was an issue with `trio-run-in-process` that clearly wasn't
being handled correctly but for sure is now with the plain old
`trio` process spawner.
Resolves #115
Parametrize our docs example test to include all (now fixed) examples
from the `README.rst`. The examples themselves have been fixed/corrected
to run but they haven't yet been updated in the actual docs. Once #99
lands these example scripts will be directly included in our
documentation so there will be no possibility of presenting incorrect
examples to our users! This technically fixes #108 even though the new
examples aren't going to be included directly in our docs until #99
lands.