forked from goodboy/tractor

Compare commits


35 Commits

Author SHA1 Message Date
Tyler Goodlet d04a754655 Drop old (and deluded) "streaming" cruft 2021-10-27 16:06:44 -04:00
Tyler Goodlet 6e9562ffd8 Facepalm, re-raise captured `asyncio` task error 2021-10-27 16:06:44 -04:00
Tyler Goodlet 9c8a120ccc Handle depth > 1 nursery owners which use debug mode 2021-10-27 16:06:44 -04:00
Tyler Goodlet a0b885e9d7 First draft: `.to_asyncio.open_channel_from()` 2021-10-27 16:06:44 -04:00
Tyler Goodlet b7d5ff79b4 Always cancel the asyncio task? 2021-10-27 16:06:44 -04:00
Tyler Goodlet 60b892cc80 WIP, add back in root shield, print out pdb sigint opts 2021-10-27 16:06:44 -04:00
Tyler Goodlet cbc18f94ec Drop old implementation cruft 2021-10-27 16:06:44 -04:00
Tyler Goodlet e016884ce0 Fix error propagation on asyncio streaming tasks 2021-10-27 16:06:44 -04:00
Tyler Goodlet d42f628597 Drop bad .close() call 2021-10-27 16:06:44 -04:00
Tyler Goodlet a572fde6c6 Proxy asyncio cancelleds as well 2021-10-27 16:06:44 -04:00
Tyler Goodlet 8805cbee3d Don't kill root's immediate children when in debug
If the root calls `trio.Process.kill()` on immediate child proc teardown
when the child is using pdb, we can get stdstreams clobbering that
results in a pdb++ repl where the user can't see what's been typed. Not
killing such children on cancellation / error seems to resolve this
issue whilst still giving reliable termination. For now, keep that
special path coded in until it becomes a problem for ensuring zombie
reaps.
2021-10-27 16:06:44 -04:00
Tyler Goodlet 004722914b WIP redo asyncio async gen streaming 2021-10-27 16:06:44 -04:00
Tyler Goodlet 1e2d43b0d6 Support asyncio actors with the trio spawner backend 2021-10-27 16:06:44 -04:00
Tyler Goodlet 948be9b069 Support sync code breakpointing via built-in
Override `breakpoint()` for sync code making it work
properly with `trio` as per:

https://github.com/python-trio/trio/issues/1155#issuecomment-742964018

Relates to #193
2021-10-27 16:06:44 -04:00
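
For reference, the ``trio``-friendly way to enter the stdlib debugger from sync code, per the linked trio issue comment, is roughly the sketch below; the same ``nosigint`` flag is what ``_mk_pdb()`` sets in the diff further down:

    import pdb

    # per the linked trio issue comment: disable pdb's own SIGINT handler
    # so ctrl-c keeps flowing through trio's normal KeyboardInterrupt
    # machinery instead of being swallowed by the debugger
    pdb.Pdb(nosigint=True).set_trace()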
Tyler Goodlet 2fbc4caacb Support asyncio actors with the trio spawner backend 2021-10-27 16:06:44 -04:00
Tyler Goodlet 2ea020310d Link to SC on wikipedia 2021-10-27 16:06:44 -04:00
Tyler Goodlet 2c52530ef0 Add per actor debug mode toggle 2021-10-27 16:06:44 -04:00
Tyler Goodlet ae682c0f19 Support sync code breakpointing via built-in
Override `breakpoint()` for sync code making it work
properly with `trio` as per:

https://github.com/python-trio/trio/issues/1155#issuecomment-742964018

Relates to #193
2021-10-27 16:06:44 -04:00
Tyler Goodlet cabc41fcfa Pass func refs 2021-10-27 16:06:44 -04:00
Tyler Goodlet 7b149dd3e4 Add initial infected asyncio error propagation test 2021-10-27 16:06:44 -04:00
Tyler Goodlet 2cf98d5e09 Raise any asyncio errors if in trio task on cancel 2021-10-27 16:06:44 -04:00
Tyler Goodlet 81cd21c04b Raise from asyncio error; fixes mypy 2021-10-27 16:06:44 -04:00
Tyler Goodlet bba03654ea Tweak log msg 2021-10-27 16:06:44 -04:00
Tyler Goodlet d337a54fb6 Log error 2021-10-27 16:06:44 -04:00
Tyler Goodlet ac5b3628d2 Support asyncio actors with the trio spawner backend 2021-10-27 16:06:44 -04:00
Tyler Goodlet a6b6241176 Revert removal of `infect_asyncio` in nursery start methods 2021-10-27 16:06:44 -04:00
Tyler Goodlet e54ddac5b9 Attempt to make mypy happy.. 2021-10-27 16:06:44 -04:00
Tyler Goodlet d17774b2b0 Add an obnoxious error message on internal failures 2021-10-27 16:06:44 -04:00
Tyler Goodlet 5c31281557 Wow, fix all the broken async func invoking code..
Clearly this wasn't developed against a task that spawned just an async
func in `asyncio`.. Fix all that and remove a bunch of unnecessary func
layers. Add provisional support for the target receiving the `to_trio`
and `from_trio` channels and for the @tractor.stream marker.
2021-10-27 16:06:44 -04:00
Tyler Goodlet dee6c987a9 Drop entrypoints from `Actor` 2021-10-27 16:06:44 -04:00
Tyler Goodlet d4699b987b Move asyncio guest mode entrypoint to `to_asyncio`
The function is useful if you want to run the "main process" under
`asyncio`. Until `trio` core wraps this better we'll keep our own copy
in the interim (there's a new "inside-out-guest" mode almost on
mainline so hang tight).
2021-10-27 16:06:44 -04:00
Tyler Goodlet 60739ce571 Propagate any spawned `asyncio` task error upwards
This should mostly maintain top level SC principles for any task spawned
using `tractor.to_asyncio.run()`. When the `asyncio` task completes make
sure to cancel the pertaining `trio` cancel scope and raise any error
that may have resulted.

Resolves #120
2021-10-27 16:06:44 -04:00
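
A rough usage sketch of the behavior described above (function names are illustrative; the new test file in the diff below exercises the same path through ``run_task()``):

    import asyncio
    import tractor

    async def aio_boom():
        # runs on the asyncio side of an "infected" actor
        await asyncio.sleep(0.1)
        raise RuntimeError('boom')

    async def trio_caller():
        # the asyncio error should cancel the pertaining trio scope and
        # re-raise here, keeping top level SC semantics intact
        await tractor.to_asyncio.run_task(aio_boom)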
Tyler Goodlet 417cebe827 Add a @pub kwarg to allow specifying a "startup response message" 2021-10-27 16:06:44 -04:00
Tyler Goodlet 490a046f6b Support sync code breakpointing via built-in
Override `breakpoint()` for sync code making it work
properly with `trio` as per:

https://github.com/python-trio/trio/issues/1155#issuecomment-742964018

Relates to #193
2021-10-27 16:06:44 -04:00
Tyler Goodlet 8852dde810 Add `maybe_open_context()` an actor wide task-resource cache 2021-10-27 14:06:27 -04:00
13 changed files with 508 additions and 29 deletions

View File

@@ -423,6 +423,7 @@ channel`_!
 .. _async sandwich: https://trio.readthedocs.io/en/latest/tutorial.html#async-sandwich
 .. _structured concurrent: https://trio.discourse.group/t/concise-definition-of-structured-concurrency/228
 .. _3 axioms: https://www.youtube.com/watch?v=7erJ1DV_Tlo&t=162s
+.. .. _3 axioms: https://en.wikipedia.org/wiki/Actor_model#Fundamental_concepts
 .. _adherance to: https://www.youtube.com/watch?v=7erJ1DV_Tlo&t=1821s
 .. _trio gitter channel: https://gitter.im/python-trio/general
 .. _matrix channel: https://matrix.to/#/!tractor:matrix.org
@@ -431,7 +432,7 @@ channel`_!
 .. _messages: https://en.wikipedia.org/wiki/Message_passing
 .. _trio docs: https://trio.readthedocs.io/en/latest/
 .. _blog post: https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/
-.. _structured concurrency: https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/
+.. _structured concurrency: https://en.wikipedia.org/wiki/Structured_concurrency
 .. _unrequirements: https://en.wikipedia.org/wiki/Actor_model#Direct_communication_and_asynchrony
 .. _async generators: https://www.python.org/dev/peps/pep-0525/
 .. _trio-parallel: https://github.com/richardsheridan/trio-parallel

View File

@@ -0,0 +1,24 @@
+import asyncio
+
+import pytest
+import tractor
+
+
+async def sleep_and_err():
+    await asyncio.sleep(0.1)
+    assert 0
+
+
+async def asyncio_actor():
+    assert tractor.current_actor().is_infected_aio()
+
+    await tractor.to_asyncio.run_task(sleep_and_err)
+
+
+def test_infected_simple_error(arb_addr):
+
+    async def main():
+        async with tractor.open_nursery() as n:
+            await n.run_in_actor(asyncio_actor, infected_asyncio=True)
+
+    with pytest.raises(tractor.RemoteActorError) as excinfo:
+        tractor.run(main, arbiter_addr=arb_addr)

View File

@@ -282,6 +282,9 @@ class Actor:
     _parent_main_data: Dict[str, str]
     _parent_chan_cs: Optional[trio.CancelScope] = None

+    # if started on ``asyncio`` running ``trio`` in guest mode
+    _infected_aio: bool = False
+
     def __init__(
         self,
         name: str,
@@ -1248,6 +1251,9 @@ class Actor:
         log.runtime(f"Handshake with actor {uid}@{chan.raddr} complete")
         return uid

+    def is_infected_aio(self) -> bool:
+        return self._infected_aio
+

 class Arbiter(Actor):
     """A special actor who knows all the other actors and always has

View File

@@ -19,12 +19,15 @@ def parse_ipaddr(arg):
     return (str(host), int(port))

+from ._entry import _trio_main
+

 if __name__ == "__main__":
     parser = argparse.ArgumentParser()
     parser.add_argument("--uid", type=parse_uid)
     parser.add_argument("--loglevel", type=str)
     parser.add_argument("--parent_addr", type=parse_ipaddr)
+    parser.add_argument("--asyncio", action='store_true')
     args = parser.parse_args()

     subactor = Actor(
@@ -36,5 +39,6 @@ if __name__ == "__main__":
     _trio_main(
         subactor,
-        parent_addr=args.parent_addr
+        parent_addr=args.parent_addr,
+        infect_asyncio=args.asyncio,
     )

View File

@@ -2,6 +2,7 @@
 Multi-core debugging for da peeps!
 """
+from __future__ import annotations
 import bdb
 import sys
 from functools import partial
@@ -27,12 +28,14 @@ from ._exceptions import is_multi_cancelled
 try:
     # wtf: only exported when installed in dev mode?
     import pdbpp
+
 except ImportError:
     # pdbpp is installed in regular mode...it monkey patches stuff
     import pdb
+
     assert pdb.xpm, "pdbpp is not installed?"  # type: ignore
     pdbpp = pdb

 log = get_logger(__name__)
@@ -92,6 +95,23 @@ class PdbwTeardown(pdbpp.Pdb):
         _pdb_release_hook()


+def _mk_pdb() -> PdbwTeardown:
+    # XXX: setting these flags on the pdb instance are absolutely
+    # critical to having ctrl-c work in the ``trio`` standard way! The
+    # stdlib's pdb supports entering the current sync frame on a SIGINT,
+    # with ``trio`` we pretty much never want this and if we did we can
+    # handle it in the ``tractor`` task runtime.
+
+    # global pdb
+    pdb = PdbwTeardown()
+    pdb.nosigint = True
+    pdb.allow_kbdint = True
+    opts = (allow_kbdint, nosigint) = pdb.allow_kbdint, pdb.nosigint
+    print(f'`pdbp` was configured with {opts}')
+
+    return pdb
+
+
 # TODO: will be needed whenever we get to true remote debugging.
 # XXX see https://github.com/goodboy/tractor/issues/130
@@ -461,21 +481,6 @@ async def _breakpoint(
     debug_func(actor)


-def _mk_pdb() -> PdbwTeardown:
-    # XXX: setting these flags on the pdb instance are absolutely
-    # critical to having ctrl-c work in the ``trio`` standard way! The
-    # stdlib's pdb supports entering the current sync frame on a SIGINT,
-    # with ``trio`` we pretty much never want this and if we did we can
-    # handle it in the ``tractor`` task runtime.
-    pdb = PdbwTeardown()
-    pdb.allow_kbdint = True
-    pdb.nosigint = True
-    return pdb
-
-
 def _set_trace(actor=None):
     pdb = _mk_pdb()

View File

@@ -9,6 +9,7 @@ import trio  # type: ignore
 from .log import get_console_log, get_logger
 from . import _state
+from .to_asyncio import run_as_asyncio_guest

 log = get_logger(__name__)

@@ -20,6 +21,7 @@ def _mp_main(
     forkserver_info: Tuple[Any, Any, Any, Any, Any],
     start_method: str,
     parent_addr: Tuple[str, int] = None,
+    infect_asyncio: bool = False,
 ) -> None:
     """The routine called *after fork* which invokes a fresh ``trio.run``
     """
@@ -45,6 +47,10 @@ def _mp_main(
         parent_addr=parent_addr
     )
     try:
-        trio.run(trio_main)
+        if infect_asyncio:
+            actor._infected_aio = True
+            run_as_asyncio_guest(trio_main)
+        else:
+            trio.run(trio_main)
     except KeyboardInterrupt:
         pass  # handle it the same way trio does?
@@ -57,6 +63,7 @@ def _trio_main(
     actor: 'Actor',  # type: ignore
     *,
     parent_addr: Tuple[str, int] = None,
+    infect_asyncio: bool = False,
 ) -> None:
     """Entry point for a `trio_run_in_process` subactor.
     """
@@ -66,6 +73,8 @@ def _trio_main(
     log.info(f"Started new trio process for {actor.uid}")

+    log.info(f"Started new trio process for {actor.uid}")
+
     if actor.loglevel is not None:
         log.info(
             f"Setting loglevel for {actor.uid} to {actor.loglevel}")
@@ -83,6 +92,10 @@ def _trio_main(
     )
     try:
-        trio.run(trio_main)
+        if infect_asyncio:
+            actor._infected_aio = True
+            run_as_asyncio_guest(trio_main)
+        else:
+            trio.run(trio_main)
     except KeyboardInterrupt:
         log.warning(f"Actor {actor.uid} received KBI")

View File

@@ -238,7 +238,7 @@ def run(

 def run_daemon(
-    rpc_module_paths: List[str],
+    enable_modules: List[str],
     **kwargs
 ) -> None:
     """Spawn daemon actor which will respond to RPC.
@@ -247,9 +247,9 @@ def run_daemon(
     ``tractor.run(trio.sleep(float('inf')))`` such that the first actor spawned
     is meant to run forever responding to RPC requests.
     """
-    kwargs['rpc_module_paths'] = list(rpc_module_paths)
+    kwargs['enable_modules'] = list(enable_modules)

-    for path in rpc_module_paths:
+    for path in enable_modules:
         importlib.import_module(path)

     return run(partial(trio.sleep, float('inf')), **kwargs)
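
A minimal usage sketch of the renamed parameter (the module path is hypothetical):

    import tractor

    # spawn a root actor that sleeps forever and only answers RPC
    # requests against the listed (hypothetical) module
    tractor.run_daemon(['myproj.rpc_api'])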

View File

@@ -192,6 +192,8 @@ async def new_proc(
     _runtime_vars: Dict[str, Any],  # serialized and sent to _child
     *,
+    infect_asyncio: bool = False,
     task_status: TaskStatus[Portal] = trio.TASK_STATUS_IGNORED
 ) -> None:
@@ -205,7 +207,6 @@ async def new_proc(
     uid = subactor.uid

     if _spawn_method == 'trio':
         spawn_cmd = [
             sys.executable,
             "-m",
@@ -228,6 +229,9 @@ async def new_proc(
             "--loglevel",
             subactor.loglevel
         ]
+        # Tell child to run in guest mode on top of ``asyncio`` loop
+        if infect_asyncio:
+            spawn_cmd.append("--asyncio")

         cancelled_during_spawn: bool = False
         try:
@@ -345,6 +349,7 @@ async def new_proc(
             bind_addr=bind_addr,
             parent_addr=parent_addr,
             _runtime_vars=_runtime_vars,
+            infect_asyncio=infect_asyncio,
             task_status=task_status,
         )
@@ -360,6 +365,7 @@ async def mp_new_proc(
     parent_addr: Tuple[str, int],
     _runtime_vars: Dict[str, Any],  # serialized and sent to _child
     *,
+    infect_asyncio: bool = False,
     task_status: TaskStatus[Portal] = trio.TASK_STATUS_IGNORED
 ) -> None:

View File

@@ -62,6 +62,8 @@ class ActorNursery:
         enable_modules: List[str] = None,
         loglevel: str = None,  # set log level per subactor
         nursery: trio.Nursery = None,
+        infect_asyncio: bool = False,
+        debug_mode: Optional[bool] = None,
     ) -> Portal:

         loglevel = loglevel or self._actor.loglevel or get_loglevel()
@@ -69,6 +71,10 @@ class ActorNursery:
         _rtv = _state._runtime_vars.copy()
         _rtv['_is_root'] = False

+        # allow setting debug policy per actor
+        if debug_mode is not None:
+            _rtv['_debug_mode'] = debug_mode
+
         enable_modules = enable_modules or []

         if rpc_module_paths:
@@ -105,6 +111,7 @@ class ActorNursery:
                 bind_addr,
                 parent_addr,
                 _rtv,  # run time vars
+                infect_asyncio=infect_asyncio,
             )
         )
@@ -117,6 +124,7 @@ class ActorNursery:
         rpc_module_paths: Optional[List[str]] = None,
         enable_modules: List[str] = None,
         loglevel: str = None,  # set log level per subactor
+        infect_asyncio: bool = False,
         **kwargs,  # explicit args to ``fn``
     ) -> Portal:
         """Spawn a new actor, run a lone task, then terminate the actor and
@@ -141,6 +149,7 @@ class ActorNursery:
             loglevel=loglevel,
             # use the run_in_actor nursery
             nursery=self._ria_nursery,
+            infect_asyncio=infect_asyncio,
         )

         # XXX: don't allow stream funcs
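
Taken together with the spawner changes above, the new per-actor kwargs can be passed roughly like this (the actor name is illustrative):

    import tractor

    async def main():
        async with tractor.open_nursery() as n:
            portal = await n.start_actor(
                'aio_child',                 # illustrative name
                enable_modules=[__name__],
                infect_asyncio=True,         # run trio as an asyncio guest
                debug_mode=True,             # per-actor debug toggle
            )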

View File

@@ -121,6 +121,7 @@ def pub(
     wrapped: typing.Callable = None,
     *,
     tasks: Set[str] = set(),
+    send_on_connect: Any = None,
 ):
     """Publisher async generator decorator.
@@ -206,7 +207,7 @@ def pub(
     # handle the decorator not called with () case
     if wrapped is None:
-        return partial(pub, tasks=tasks)
+        return partial(pub, tasks=tasks, send_on_connect=send_on_connect)

     task2lock: Dict[str, trio.StrictFIFOLock] = {}
@@ -249,6 +250,11 @@ def pub(
            try:
                modify_subs(topics2ctxs, topics, ctx)
+
+                # if specified send the startup message back to consumer
+                if send_on_connect is not None:
+                    await ctx.send_yield(send_on_connect)
+
                # block and let existing feed task deliver
                # stream data until it is cancelled in which case
                # the next waiting task will take over and spawn it again
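
A hedged sketch of how the new kwarg might be used; the publisher body follows the general shape documented for ``tractor.msg.pub`` (an async generator taking a ``get_topics`` callback and yielding topic-keyed dicts), but treat the details as illustrative rather than exact:

    import trio
    import tractor

    @tractor.msg.pub(send_on_connect={'status': 'connected'})
    async def stream_ticks(get_topics):
        # each new consumer first receives the ``send_on_connect`` message,
        # then the regular topic-keyed stream below
        while True:
            yield {topic: 42 for topic in get_topics()}
            await trio.sleep(0.1)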

View File

@@ -0,0 +1,276 @@
+'''
+Infection apis for ``asyncio`` loops running ``trio`` using guest mode.
+
+'''
+import asyncio
+from contextlib import asynccontextmanager as acm
+import inspect
+from typing import (
+    Any,
+    Callable,
+    AsyncIterator,
+    Awaitable,
+    Optional,
+)
+
+import trio
+
+from .log import get_logger, get_console_log
+from ._state import current_actor
+
+log = get_logger(__name__)
+
+__all__ = ['run_task', 'run_as_asyncio_guest']
+
+
+def _run_asyncio_task(
+    func: Callable,
+    *,
+    qsize: int = 1,
+    provide_channels: bool = False,
+    **kwargs,
+) -> Any:
+    '''
+    Run an ``asyncio`` async function or generator in a task, return
+    or stream the result back to ``trio``.
+
+    '''
+    if not current_actor().is_infected_aio():
+        raise RuntimeError("`infect_asyncio` mode is not enabled!?")
+
+    # ITC (inter task comms)
+    from_trio = asyncio.Queue(qsize)  # type: ignore
+    to_trio, from_aio = trio.open_memory_channel(qsize)  # type: ignore
+
+    from_aio._err = None
+
+    args = tuple(inspect.getfullargspec(func).args)
+
+    if getattr(func, '_tractor_steam_function', None):
+        # the assumption is that the target async routine accepts the
+        # send channel then it intends to yield more than one return
+        # value otherwise it would just return ;P
+        assert qsize > 1
+
+    if provide_channels:
+        assert 'to_trio' in args
+
+    # allow target func to accept/stream results manually by name
+    if 'to_trio' in args:
+        kwargs['to_trio'] = to_trio
+
+    if 'from_trio' in args:
+        kwargs['from_trio'] = from_trio
+
+    coro = func(**kwargs)
+
+    cancel_scope = trio.CancelScope()
+    aio_task_complete = trio.Event()
+    aio_err: Optional[BaseException] = None
+
+    async def wait_on_coro_final_result(
+        to_trio: trio.MemorySendChannel,
+        coro: Awaitable,
+        aio_task_complete: trio.Event,
+    ) -> None:
+        '''
+        Await ``coro`` and relay result back to ``trio``.
+
+        '''
+        nonlocal aio_err
+        orig = result = id(coro)
+        try:
+            result = await coro
+        except BaseException as err:
+            aio_err = err
+            from_aio._err = aio_err
+            raise
+        finally:
+            aio_task_complete.set()
+            if result != orig and aio_err is None:
+                to_trio.send_nowait(result)
+
+    # start the asyncio task we submitted from trio
+    if inspect.isawaitable(coro):
+        task = asyncio.create_task(
+            wait_on_coro_final_result(to_trio, coro, aio_task_complete)
+        )
+    else:
+        raise TypeError(f"No support for invoking {coro}")
+
+    def cancel_trio(task) -> None:
+        '''
+        Cancel the calling ``trio`` task on error.
+
+        '''
+        nonlocal aio_err
+        try:
+            aio_err = task.exception()
+        except asyncio.CancelledError as cerr:
+            aio_err = cerr
+
+        if aio_err:
+            log.exception(f"asyncio task errored:\n{aio_err}")
+            from_aio._err = aio_err
+            cancel_scope.cancel()
+            from_aio.close()
+
+    task.add_done_callback(cancel_trio)
+
+    return task, from_aio, to_trio, cancel_scope, aio_task_complete
+
+
+async def run_task(
+    func: Callable,
+    *,
+    qsize: int = 2**10,
+    **kwargs,
+) -> Any:
+    """Run an ``asyncio`` async function or generator in a task, return
+    or stream the result back to ``trio``.
+    """
+    # simple async func
+    try:
+        task, from_aio, to_trio, cs, _ = _run_asyncio_task(
+            func,
+            qsize=1,
+            **kwargs,
+        )
+
+        # return single value
+        with cs:
+            # naively expect the mem chan api to do the job
+            # of handling cross-framework cancellations / errors
+            return await from_aio.receive()
+
+        if cs.cancelled_caught:
+            # always raise from any captured asyncio error
+            if from_aio._err:
+                raise from_aio._err
+
+    # Do we need this?
+    except BaseException as err:
+        aio_err = from_aio._err
+
+        if aio_err is not None:
+            # always raise from any captured asyncio error
+            raise err from aio_err
+        else:
+            raise
+
+    finally:
+        if not task.done():
+            task.cancel()
+
+
+# TODO: explicit api for the streaming case where
+# we pull from the mem chan in an async generator?
+# This ends up looking more like our ``Portal.open_stream_from()``
+# NB: code below is untested.
+
+@acm
+async def open_channel_from(
+    target: Callable[[Any, ...], Any],
+    **kwargs,
+) -> AsyncIterator[Any]:
+
+    try:
+        task, from_aio, to_trio, cs, aio_task_complete = _run_asyncio_task(
+            target,
+            qsize=2**8,
+            provide_channels=True,
+            **kwargs,
+        )
+
+        with cs:
+            # sync to "started()" call.
+            first = await from_aio.receive()
+
+            # stream values upward
+            async with from_aio:
+                yield first, from_aio
+                # await aio_task_complete.wait()
+
+    except BaseException as err:
+        aio_err = from_aio._err
+
+        if aio_err is not None:
+            # always raise from any captured asyncio error
+            raise err from aio_err
+        else:
+            raise
+
+    finally:
+        if cs.cancelled_caught:
+            # always raise from any captured asyncio error
+            if from_aio._err:
+                raise from_aio._err
+
+        if not task.done():
+            task.cancel()
+
+
+def run_as_asyncio_guest(
+    trio_main: Callable,
+) -> None:
+    '''
+    Entry for an "infected ``asyncio`` actor".
+
+    Uh, oh. :o
+
+    It looks like your event loop has caught a case of the ``trio``s.
+
+    :()
+
+    Don't worry, we've heard you'll barely notice. You might hallucinate
+    a few more propagating errors and feel like your digestion has
+    slowed but if anything gets too bad your parents will know about
+    it.
+
+    :)
+
+    '''
+    # Disable sigint handling in children? (nawp)
+    # import signal
+    # signal.signal(signal.SIGINT, signal.SIG_IGN)
+
+    get_console_log('runtime')
+
+    async def aio_main(trio_main):
+
+        loop = asyncio.get_running_loop()
+        trio_done_fut = asyncio.Future()
+
+        def trio_done_callback(main_outcome):
+            print(f"trio_main finished: {main_outcome!r}")
+            trio_done_fut.set_result(main_outcome)
+
+        # start the infection: run trio on the asyncio loop in "guest mode"
+        log.info(f"Infecting asyncio process with {trio_main}")
+
+        trio.lowlevel.start_guest_run(
+            trio_main,
+            run_sync_soon_threadsafe=loop.call_soon_threadsafe,
+            done_callback=trio_done_callback,
+        )
+        (await trio_done_fut).unwrap()
+
+    # might as well if it's installed.
+    try:
+        import uvloop
+        loop = uvloop.new_event_loop()
+        asyncio.set_event_loop(loop)
+    except ImportError:
+        pass
+
+    asyncio.run(aio_main(trio_main))
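
A rough consumer-side sketch of the draft ``open_channel_from()`` api above (untested, per the in-code note; it assumes the surrounding actor was started with ``infect_asyncio=True`` and the function names are illustrative):

    import asyncio
    import tractor

    async def aio_counter(to_trio, from_trio):
        # asyncio side: the first send completes the "started" handshake
        to_trio.send_nowait('started')
        for i in range(3):
            await asyncio.sleep(0.1)
            to_trio.send_nowait(i)

    async def trio_consumer():
        async with tractor.to_asyncio.open_channel_from(
            aio_counter,
        ) as (first, chan):
            assert first == 'started'
            async for msg in chan:
                print(f'relayed from asyncio: {msg}')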

View File

@@ -2,8 +2,15 @@
 Sugary patterns for trio + tractor designs.
 '''
-from ._mngrs import gather_contexts
-from ._broadcast import broadcast_receiver, BroadcastReceiver, Lagged
+from ._mngrs import (
+    gather_contexts,
+    maybe_open_context,
+)
+from ._broadcast import (
+    broadcast_receiver,
+    BroadcastReceiver,
+    Lagged,
+)

 __all__ = [
@@ -11,4 +18,5 @@ __all__ = [
     'broadcast_receiver',
     'BroadcastReceiver',
     'Lagged',
+    'maybe_open_context',
 ]

View File

@@ -2,12 +2,25 @@
 Async context manager primitives with hard ``trio``-aware semantics
 '''
-from typing import AsyncContextManager, AsyncGenerator
-from typing import TypeVar, Sequence
 from contextlib import asynccontextmanager as acm
+from typing import (
+    Any,
+    AsyncContextManager,
+    AsyncGenerator,
+    Hashable,
+    Optional,
+    Sequence,
+    TypeVar,
+)

 import trio
+from trio_typing import TaskStatus
+
+from ..log import get_logger
+from .._state import current_actor
+
+log = get_logger(__name__)

 # A regular invariant generic type
 T = TypeVar("T")
@@ -76,3 +89,111 @@ async def gather_contexts(
     # we don't need a try/finally since cancellation will be triggered
     # by the surrounding nursery on error.
     parent_exit.set()
+
+
+# Per actor task caching helpers.
+# Further potential examples of interest:
+# https://gist.github.com/njsmith/cf6fc0a97f53865f2c671659c88c1798#file-cache-py-L8
+
+class cache:
+    '''
+    Globally (process wide) cached, task access to a
+    kept-alive-while-in-use async resource.
+
+    '''
+    lock = trio.Lock()
+    users: int = 0
+    values: dict[Any, Any] = {}
+    resources: dict[
+        int,
+        Optional[tuple[trio.Nursery, trio.Event]]
+    ] = {}
+    no_more_users: Optional[trio.Event] = None
+
+    @classmethod
+    async def run_ctx(
+        cls,
+        mng,
+        key,
+        task_status: TaskStatus[T] = trio.TASK_STATUS_IGNORED,
+    ) -> None:
+        async with mng as value:
+
+            _, no_more_users = cls.resources[id(mng)]
+            cls.values[key] = value
+            task_status.started(value)
+            try:
+                await no_more_users.wait()
+            finally:
+                value = cls.values.pop(key)
+                # discard nursery ref so it won't be re-used (an error)
+                cls.resources.pop(id(mng))
+
+
+@acm
+async def maybe_open_context(
+    key: Hashable,
+    mngr: AsyncContextManager[T],
+) -> (bool, T):
+    '''
+    Maybe open a context manager if there is not already a cached
+    version for the provided ``key``. Return the cached instance on
+    a cache hit.
+
+    '''
+    await cache.lock.acquire()
+
+    ctx_key = id(mngr)
+
+    value = None
+    try:
+        # lock feed acquisition around task racing / ``trio``'s
+        # scheduler protocol
+        value = cache.values[key]
+        log.info(f'Reusing cached resource for {key}')
+        cache.users += 1
+        cache.lock.release()
+        yield True, value
+
+    except KeyError:
+        log.info(f'Allocating new resource for {key}')
+
+        # **critical section** that should prevent other tasks from
+        # checking the cache until complete otherwise the scheduler
+        # may switch and by accident we create more than one feed.
+
+        # TODO: avoid pulling from ``tractor`` internals and
+        # instead offer a "root nursery" in piker actors?
+        service_n = current_actor()._service_n
+
+        # TODO: does this need to be a tractor "root nursery"?
+        ln = cache.resources.get(ctx_key)
+        assert not ln
+
+        ln, _ = cache.resources[ctx_key] = (service_n, trio.Event())
+
+        value = await ln.start(cache.run_ctx, mngr, key)
+        cache.users += 1
+        cache.lock.release()
+
+        yield False, value
+
+    finally:
+        cache.users -= 1
+
+        if cache.lock.locked():
+            cache.lock.release()
+
+        if value is not None:
+            # if no more consumers, teardown the client
+            if cache.users <= 0:
+                log.info(f'De-allocating resource for {key}')
+
+                # terminate mngr nursery
+                entry = cache.resources.get(ctx_key)
+                if entry:
+                    _, no_more_users = entry
+                    no_more_users.set()
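
A usage sketch of the cache above; the managed resource is hypothetical and this must run inside a tractor actor since the allocation path uses the actor's service nursery. The first task to enter allocates the resource, later tasks get the cached value back with ``cache_hit == True``:

    from contextlib import asynccontextmanager as acm

    import trio
    import tractor

    @acm
    async def open_expensive_resource():
        # stand-in for e.g. a network connection or data feed
        await trio.sleep(1)
        try:
            yield {'conn': 'ready'}
        finally:
            pass  # teardown runs once the last user exits

    async def worker():
        async with tractor.trionics.maybe_open_context(
            key='shared-resource',
            mngr=open_expensive_resource(),
        ) as (cache_hit, resource):
            if cache_hit:
                print('reusing the already-allocated resource')
            await trio.sleep(0.5)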