Pretty sure this is the final touch to alleviate all our debug lock
headaches! Instead of trying to revert to the "last" handler (as `pdb`
does internally in the stdlib) we always just revert to the handler
`trio` registers during startup. Further, this seems to make it safe to
cancel the root-side locking task when it's detected as stale, but only
if the root actor is in a "no more IPC peers" state.
Deatz:
- always revert to `._debug.Lock._trio_handler` (the `trio`-registered
  version), not some last-used handler, to make sure we get the ctrl-c
  handling we want when not in debug mode (see the restore sketch after
  this list).
- assign the trio handler in both `open_root_actor()` and
  `._runtime._async_main()` to be sure it's applied in subactors as well
  as the root.
- only do debug-lock blocking and root-side locking-task cancels when
  a "no peers" condition is detected in the root actor: i.e. the root
  sees no IPC channels at all, meaning it's impossible for any actor to
  have a sane ongoing lock state for debug mode.
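A rough sketch of the restore side mentioned in the first bullet
(assuming `Lock` is importable from `._debug`; the helper name here is
made up):

```python
import signal
from tractor._debug import Lock  # assumed import path for the `Lock` class

def restore_sigint_handler() -> None:
    # always revert to the handler `trio` registered at startup (stashed
    # on `Lock._trio_handler`, see the next entry), never to whatever
    # handler happened to be active last (the stdlib `pdb` approach).
    if Lock._trio_handler:
        signal.signal(signal.SIGINT, Lock._trio_handler)
```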
We can get the `trio`-installed SIGINT handler during runtime startup
and stash it on a new `Lock._trio_handler` attr. Always at least revert
to this handler to guarantee graceful KeyboardInterrupt handling despite
mucking about with our own handler in debug mode.
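The capture side is roughly just the following (placement per the
bullets above, i.e. in `open_root_actor()` and `._runtime._async_main()`):

```python
import signal
from tractor._debug import Lock  # assumed import path

# since we're already running inside `trio.run()` at this point,
# `getsignal()` hands back the SIGINT handler `trio` installed.
Lock._trio_handler = signal.getsignal(signal.SIGINT)
```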
Turns out this test was being silently ignored due to incorrect sync
opening of our `.open_nursery()` block (with a plain `with` instead of
`async with`) and thus was a noop XD
Instead, this fixes the test to call a `tractor` discovery built-in
without starting the runtime (which is now done implicitly when a user
opens a nursery), which should result in the originally expected
outcome: a `RuntimeError`.
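A minimal sketch of what the fixed test checks (the discovery call and
actor name here are stand-ins, not necessarily what the real test uses):

```python
import pytest
import trio
import tractor

def test_no_runtime():
    async def main():
        # no `open_root_actor()`/`open_nursery()` was entered so the
        # runtime is never started and discovery should blow up
        async with tractor.find_actor('some_actor'):
            pass

    with pytest.raises(RuntimeError):
        trio.run(main)
```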
This is a lingering debugger locking race case we needed to handle:
- child crashes, acquires the TTY lock in the root and attaches to `pdb`
- child IPC goes down such that all channels to the root are broken
/ non-functional.
- root is stuck thinking the child is still in debug even though it
can't be contacted and the child actor machinery hasn't been
cancelled by its parent.
- root gets stuck in a deadlock with the child since it won't send
  a cancel request until the child is finished debugging, but the child
  can't unlock the debugger because IPC is down.
To avoid this scenario add a debug-lock blocking list via
`._debug.Lock._blocked: set[tuple]` which holds the uid of any actor
that the root detects as having no transport channel connections to it
(of which at least one should exist if that sub-actor at some point
acquired the debug lock). The root then checks this list whenever an
actor tries to (re)acquire the lock and blocks the attempt with
a `ContextCancelled`. When a debug condition is tested in
`._runtime._invoke` the context's `._enter_debugger_on_cancel` flag is
checked; it is set to `False` if the actor is on the block list, in
which case the post-mortem entry is skipped.
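Roughly, the root-side gate looks like this sketch (the function shape
and import paths are assumptions, only the `Lock._blocked` check is from
this change):

```python
from tractor._debug import Lock
from tractor._exceptions import ContextCancelled  # assumed import path

async def maybe_acquire_tty_lock(subactor_uid: tuple) -> None:
    if subactor_uid in Lock._blocked:
        # this uid has no transport channels to the root so its lock
        # state can't be sane; reject the (re)acquire attempt.
        raise ContextCancelled(
            f'Debug lock blocked for {subactor_uid}: '
            'no IPC channels to the root actor'
        )
    # ... normal lock acquisition would continue here ...
```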
Further, this adds a root-side locking-task cancel scope, stored on
`Lock._root_local_task_cs_in_debug`, which can be cancelled by the root
runtime when a stale lock is detected after all IPC channels for the
actor have been torn down. NOTE: right now we're NOT doing this since it
seems to cause test failures, likely because it may trigger premature
cancellation, and it maybe needs a bit more experimenting?
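The (currently disabled) stale-lock cancel would look something like the
following sketch; the peer-channel table and helper name are
hypothetical, not real runtime internals:

```python
from tractor._debug import Lock

def maybe_cancel_stale_locker(
    uid: tuple,
    peer_channels: dict,  # hypothetical: uid -> list of live channels
) -> None:
    if not peer_channels.get(uid):
        # no channels left for this actor: block future lock requests
        # and cancel any root-side locking task still waiting on it.
        Lock._blocked.add(uid)
        cs = Lock._root_local_task_cs_in_debug
        if cs is not None:
            cs.cancel()
```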
In the case of a callee-side context cancelling itself it can be handy
to let the caller-side task know (even if only through logging) that the
cancel was due to some known reason. Make `.cancel()` accept such
a message on the callee side and have it included in the
`ContextCancelled` raised by `._runtime._invoke()`.
Also add a `Context._trigger_debugger_on_cancel: bool` flag which can be
set to `False` to keep the debugger's post-mortem crash mode from
engaging on cross-context tasks which cancel themselves for a known
reason (as is needed for blocked tasks in the debug TTY-lock machinery).
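Hedged usage sketch of the callee side; the keyword name for the message
is an assumption, only the "cancel with a known reason" idea is from
this change:

```python
import tractor

@tractor.context
async def worker(ctx: tractor.Context) -> None:
    await ctx.started()
    # optionally opt out of post-mortem entry for this known self-cancel:
    # ctx._trigger_debugger_on_cancel = False
    await ctx.cancel(msg='bailing out early for a known reason')
```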
Turns out the lifetime management of separate nurseries per delegate
manager is tricky; a new nursery can't be naively allocated on cache
misses since it may get closed by some early-terminating task instead of
by the "last using" consumer task. In theory, if we allocate using the
same logic as that used for the last-task-triggers-exit case then this
should work? For now just go back to a single global nursery per
`_Cache`, which still avoids use of the internal actor service nursery.
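For reference, the hazard is just standard `trio` structured
concurrency; this illustrative-only snippet (not the `_Cache` code)
shows why a nursery opened by the first consumer can't be handed off to
later ones:

```python
import trio

async def main():
    async with trio.open_nursery() as n:
        n.start_soon(trio.sleep_forever)
        await trio.sleep(0.1)
        # leaving this block tears down the child task, no matter how
        # many *other* tasks still wanted it running, since the nursery
        # is bound to the task/block that opened it.
        n.cancel_scope.cancel()

trio.run(main)
```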
Instead of sticking all `trionics.maybe_open_context()` tasks inside the
actor's (root) service nursery, open a unique one per manager function
instance (id).
Further, accept a callable for the `key` such that a user can have more
flexible control over the caching logic, and move the
`maybe_open_nursery()` helper out of the portal mod and into this
trionics "managers" module.
Instead of branching logic, create a table `._spawn._methods` which is
used to look up the desired backend framework (in this case still only
one of `multiprocessing` or `trio`) and make the top level `.new_proc()`
do the lookup and any common logic. Use a `typing.Literal` to define the
lookup table's key set.
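A minimal sketch of the table-driven dispatch (hypothetical shapes, not
the actual `._spawn` code):

```python
from typing import Awaitable, Callable, Literal

SpawnMethodKey = Literal['trio', 'multiprocessing']

async def _trio_proc(name: str) -> None: ...
async def _mp_proc(name: str) -> None: ...

_methods: dict[SpawnMethodKey, Callable[..., Awaitable[None]]] = {
    'trio': _trio_proc,
    'multiprocessing': _mp_proc,
}

async def new_proc(method: SpawnMethodKey, name: str) -> None:
    # any backend-agnostic prep lives here...
    await _methods[method](name)  # ...then dispatch to the chosen backend
```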
Repair and ignore a bunch of type-annotation related stuff to do with
`mypy` updates and backend-specific process typing.