Previously, a task cancel request had no real response other than the
`None` returned from the `Actor._cancel_task()` method, and that could
get lost if the cancel task was itself cancelled by a runtime cancel
request (i.e. an "actor cancel"). Instead, always check whether the
task's cancel scope is cancelled and, if so, relay a `ContextCancelled`
back to the caller which can then be explicitly handled by the actor
nursery machinery as well as by individual cancel APIs
(`Portal.cancel_actor()`, and maybe later, if we decide to expose it,
the `tractor.Context` on every `Portal.run()` call).
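Roughly, the idea is something like the following sketch (simplified,
hypothetical helper shape and `task_done` event, not the actual `Actor`
internals; assumes `ContextCancelled` is importable from `tractor`):

```python
import trio
from tractor import ContextCancelled  # error type relayed back to the requester


async def _cancel_task(scope: trio.CancelScope, task_done: trio.Event) -> None:
    """Request cancellation of a single rpc task, wait for it to exit,
    then relay a `ContextCancelled` (instead of a silent `None`) if its
    scope registered the cancel.
    """
    scope.cancel()

    # wait (shielded) for the target task to actually unwind
    with trio.CancelScope(shield=True):
        await task_done.wait()

    if scope.cancel_called:
        # gives the caller something concrete to handle instead of `None`
        raise ContextCancelled("task was cancelled by request")
```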
Also,
- fix up a bunch of cancellation-related logging
- add an `Actor.cancel_called` flag much like `trio`'s cancel scope
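For reference, a minimal sketch (a simplified stand-in, not the real
`Actor` class) of what that flag looks like:

```python
class Actor:
    """Simplified stand-in showing only the new `cancel_called` flag,
    which mirrors `trio.CancelScope.cancel_called`.
    """
    def __init__(self) -> None:
        self.cancel_called: bool = False

    async def cancel(self) -> bool:
        # set *before* any teardown so concurrent tasks can observe that
        # a cancel was requested, even if teardown itself is interrupted
        self.cancel_called = True
        # ... cancel rpc tasks, tear down channels, etc.
        return True
```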
The remaining errors all have to do with not getting the exact same
format as before for collected `.run_in_actor()` errors as `MultiError`s.
Not even sure at this point whether the whole
collect-single-task-results-and-bubble approach should be a thing, but
trying to keep the support for now I guess.
There are still issues with a hang in the pub-sub tests, and the one
debugger test has a different outcome due to the root getting the lock
from the breakpoint-forever child too quickly.
- go back to raising portal results-that-are-errors in the spawn task
- go back to shielding the nursery close / proc join event
- report any error on this shielded join and relay it to the nursery
  handler method (which should be customizable in the future for
  alternate strats than OCA) as well as try to collect the ria
  (run-in-actor) result
- drop async (via nursery) ria result collection; just do it sync with
  the soft `proc.wait()` reap immediately after (sketched below), which
  should work presuming that the ipc connection will break on process
  termination anyway, meaning there's no `MultiError` to deal with and
  no cancel scope to manage on the ria reaper task.
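A rough sketch of that reap-then-collect ordering (hypothetical helper
and argument names, not the actual spawn code):

```python
import trio


async def reap_and_collect(
    proc: trio.Process,
    portal,          # the child's `tractor.Portal`
    errors: dict,    # uid -> error, relayed to the nursery's handler
    uid,
):
    # soft reap: wait for the child to exit on its own, shielded so a
    # surrounding cancellation can't interrupt the join
    with trio.CancelScope(shield=True):
        await proc.wait()

    try:
        # sync (same task) run-in-actor result collection right after the
        # reap; the ipc connection breaking on termination means the
        # result (or error) is already settled
        return await portal.result()
    except Exception as err:
        # report and relay any error to the nursery handler method
        errors[uid] = err
```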
As for `Actor.cancel()` requests, do the same for `Actor._cancel_task()`
but use `_invoke()` to ensure correct msg transactions with the caller.
Don't cancel task cancels during a cancel-all-tasks operation, in an
attempt at more determinism.
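The "don't cancel the cancellers" bit looks roughly like this
(hypothetical registry shape, not the actual `Actor` internals):

```python
def cancel_all_rpc_tasks(rpc_tasks: dict, cancel_task_func) -> None:
    """Cancel every registered rpc task *except* tasks that are
    themselves cancel requests, so in-flight cancels can complete
    deterministically and report back to their requesters.
    """
    for (chan, cid), (scope, func) in rpc_tasks.copy().items():
        if func is cancel_task_func:
            continue  # leave cancel-request tasks running
        scope.cancel()
```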
If the root calls `trio.Process.kill()` on immediate child proc teardown
while the child is using pdb, we can get stdstream clobbering that
results in a pdb++ repl where the user can't see what's been typed. Not
killing such children on cancellation / error seems to resolve this
issue whilst still giving reliable termination. For now, keep that
special path coded in until it becomes a problem for ensuring zombie
reaps.
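That special path amounts to something like this (hypothetical names,
not the actual teardown machinery):

```python
import trio


async def hard_kill(proc: trio.Process, child_in_debug: bool) -> None:
    if child_in_debug:
        # don't `kill()`: clobbering the child's stdstreams while pdb++
        # owns the terminal leaves a repl the user can't see typing in;
        # cancellation plus a plain wait still terminates reliably
        await proc.wait()
        return

    proc.kill()
    await proc.wait()
```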
Clearly this wasn't developed against a task that spawned just an async
func in `asyncio`. Fix all that and remove a bunch of unnecessary func
layers. Add provisional support for the target receiving the `to_trio`
and `from_trio` channels and for the `@tractor.stream` marker.
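A provisional example of such a target (assuming, per the historical
implementation, that `to_trio` behaves like a `trio` memory send channel
and `from_trio` like an `asyncio.Queue`; the exact types may differ):

```python
import asyncio


async def aio_echo(to_trio, from_trio) -> None:
    # first value back signals the trio side that we're up
    to_trio.send_nowait('start')

    while True:
        msg = await from_trio.get()   # values sent over from `trio`
        if msg is None:               # sentinel: trio side is done
            break
        to_trio.send_nowait(msg)      # echo back toward `trio`
```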
The function is useful if you want to run the "main process" under
`asyncio`. Until `trio` core wraps this better, we'll keep our own copy
in the interim (there's a new "inside-out-guest" mode almost on
mainline, so hang tight).
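For context, that upstream "guest mode" boils down to something like
this (a minimal sketch of `trio.lowlevel.start_guest_run` hosting a
`trio` run inside an `asyncio` loop; not tractor's wrapper):

```python
import asyncio
import trio


async def trio_main() -> str:
    await trio.sleep(0.1)
    return 'hello from trio'


async def asyncio_main() -> str:
    loop = asyncio.get_running_loop()
    done = loop.create_future()

    # run trio "inside-out" as a guest of the already-running asyncio loop
    trio.lowlevel.start_guest_run(
        trio_main,
        run_sync_soon_threadsafe=loop.call_soon_threadsafe,
        done_callback=done.set_result,  # receives an `outcome.Outcome`
    )
    return (await done).unwrap()  # re-raise any error from the trio side


if __name__ == '__main__':
    print(asyncio.run(asyncio_main()))
```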
This should mostly maintain top-level SC (structured concurrency)
principles for any task spawned using `tractor.to_asyncio.run()`. When
the `asyncio` task completes, make sure to cancel the pertaining `trio`
cancel scope and raise any error that may have resulted.
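That completion handling amounts to roughly this pattern (a hypothetical
helper, not the real `to_asyncio` code; assumes `trio` is running as a
guest of the same `asyncio` loop):

```python
import asyncio
import trio


async def wait_on_aio_task(task: asyncio.Task):
    """Block (in trio) until the asyncio task settles, then surface its
    outcome instead of letting errors disappear.
    """
    scope = trio.CancelScope()
    token = trio.lowlevel.current_trio_token()

    # when the asyncio side finishes, cancel the pertaining trio scope
    # (scheduled back into the trio run, since the callback fires loop-side)
    task.add_done_callback(lambda t: token.run_sync_soon(scope.cancel))

    with scope:
        await trio.sleep_forever()

    # re-raises any error set on the asyncio task, else returns its result
    return task.result()
```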
Resolves #120