Compare commits

..

14 Commits

Author SHA1 Message Date
Tyler Goodlet 8bcf1ea8c2 Bump "task-manager(-nursery)" naming, add logging
Namely just renaming any `trio.Nursery` instances to `tn`, renaming the
primary `@acm`-API to `.trionics.open_taskman()`, and changing out all
`print()`s for logger instances with 'info' level enabled by the mod-script usage.
2026-03-17 23:45:01 -04:00
Tyler Goodlet f50f0b6e76 Add a new `.trionics._tn` for "task nursery stuff"
I'd like to decouple the new "task-manager-nursery" lowlevel
primitives/abstractions from the higher-level
`TaskManagerNursery`-supporting API(s) and default per-task
supervision-strat, and `._mngr` is already purposed for higher-level
"on-top-of-nursery" patterns as it is.

Deats,
- move `maybe_open_nursery()` into the new mod.
- adjust the pkg-mod's import to the new sub-mod.
- also draft up this idea for an API which stacks `._beg.collapse_eg()`
  onto a nursery with the WIP name `open_loose_tn()` but more than
  likely i'll just discard this idea bc i think the explicit `@acm`
  stacking is more explicit/pythonic/up-front-grokable despite the extra
  LoC.
2026-03-17 23:45:01 -04:00
Tyler Goodlet 0966a36b29 Add `debug_mode: bool` control to task mngr
Allows dynamically importing `pdbp` when enabled and offers a way to
eventually link with `tractor`'s own debug mode flag.
2026-03-17 23:45:01 -04:00
Tyler Goodlet 8e65d06eaf Go all in on "task manager" naming 2026-03-17 23:45:01 -04:00
Tyler Goodlet 975692b575 More refinements and proper typing
- drop unneeded (and commented) internal cs allocating bits.
- bypass all task manager stuff if no generator is provided by the
  caller; i.e. just call `.start_soon()` as normal.
- fix `Generator` typing.
- add some prints around task manager.
- wrap in `TaskOutcome.lowlevel_task: Task`.
2026-03-17 23:45:01 -04:00
Tyler Goodlet 22efb10d84 Ensure user-allocated cancel scope just works!
Turns out the nursery doesn't have to care about allocating a per task
`CancelScope` since the user can just do that in the
`@task_scope_manager` if desired B) So just mask all the nursery cs
allocating with the intention of removal.

Also add a test for per-task-cancellation by starting the crash task as
a `trio.sleep_forever()` but then cancel it via the user allocated cs
and ensure the crash propagates as expected 💥
2026-03-17 23:45:01 -04:00
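The per-task-cancellation shape the test exercises — a user-held handle cancelling exactly one task while siblings run to completion — looks roughly like this in stdlib `asyncio` terms (a `Task` handle playing the role of the user-allocated `trio.CancelScope`; sketch only):

```python
import asyncio


async def main() -> tuple[str, int]:
    # the user holds a per-task handle (cf. a user-allocated
    # trio.CancelScope) and can cancel just that task
    sleeper = asyncio.create_task(asyncio.sleep(3600))
    worker = asyncio.create_task(asyncio.sleep(0, result=42))
    await asyncio.sleep(0)   # let both tasks get scheduled
    sleeper.cancel()         # per-task cancellation via the handle
    try:
        await sleeper
        sleeper_state = 'completed'
    except asyncio.CancelledError:
        sleeper_state = 'cancelled'
    # the sibling task is untouched and still delivers its result
    return sleeper_state, await worker


state, result = asyncio.run(main())
```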
Tyler Goodlet 759fabac9c Facepalm, don't pass in unnecessary cancel scope 2026-03-17 23:45:01 -04:00
Tyler Goodlet 16ddbcb4db Do renaming, implement lowlevel `Outcome` sending
As was listed in the many todos, this changes the `.start_soon()` impl
to instead (manually) `.send()` into the user defined
`@task_scope_manager` an `Outcome` from the spawned task. In this case
the task manager wraps that in a user defined (and renamed)
`TaskOutcome` and delivers that + a containing `trio.CancelScope` to the
`.start_soon()` caller. Here the user defined `TaskOutcome` defines
a `.wait_for_result()` method that can be used to await the task's exit
and handle its underlying returned value or raised error; the
implementation could be different and subject to the user's own whims.

Note that by default, if this were added to `trio`'s core, the
`@task_scope_manager` would either be implemented as a `None`-yielding
single-yield generator or, more likely, be entirely ignored by the
runtime (as in no manual task-outcome collecting, generator calling or
sending done at all) whenever the user does not provide a
`task_scope_manager` to the nursery at open time.
2026-03-17 23:45:01 -04:00
Tyler Goodlet 5860e02efd Alias to `@acm` in broadcaster mod 2026-03-17 23:45:01 -04:00
Tyler Goodlet cb0d1a87f5 Initial prototype for a one-cancels-one style supervisor, nursery thing.. 2026-03-17 23:45:01 -04:00
Tyler Goodlet c833ee69cb Use shorthand nursery var-names per convention in codebase 2026-03-17 23:45:01 -04:00
Tyler Goodlet 5732ee7af1 Better separate service tasks vs. ctxs via methods
Namely splitting the handles for each in 2 separate tables and adding
a `.cancel_service_task()`.

Also,
- move `_open_and_supervise_service_ctx()` to mod level.
- rename `target` -> `ctx_fn` params throughout.
- fill out method doc strings.
2026-03-17 23:45:01 -04:00
Tyler Goodlet 7e96c6413b Mv over `ServiceMngr` from `piker` with mods
Namely distinguishing service "IPC contexts" (opened in a
subactor via a `Portal`) from just local `trio.Task`s started
and managed under the `.service_n` (more or less wrapping in the
interface of a "task-manager" style nursery - aka a one-cancels-one
supervision strat).

API changes from original (`piker`) impl,
- mk `.start_service_task()` do ONLY that, start a task with a wrapping
  cancel-scope and completion event.
  |_ ideally this gets factored-out/re-implemented using the
    task-manager/OCO-style-nursery from GH #363.
- change what was the impl of `.start_service_task()` to `.start_service_ctx()`
  since it more explicitly defines the functionality of entering
  `Portal.open_context()` with a wrapping cs and completion event inside
  a bg task (which syncs the ctx's lifetime with termination of the
  remote actor runtime).
- factor out what was a `.start_service_ctx()` closure to a new
  `_open_and_supervise_service_ctx()` mod-func holding the meat of
  the supervision logic.

`ServiceMngr` API brief,
- use `open_service_mngr()` and `get_service_mngr()` to acquire the
  actor-global singleton.
- `ServiceMngr.start_service()` and `.cancel_service()` which allow for
  straightforward mgmt of "service subactor daemons".
2026-03-17 23:45:01 -04:00
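The open/get singleton acquisition named in the API brief can be sketched stdlib-only; a sync `@contextmanager` stands in for the real `@acm` and the class body is a toy, not the ported `piker` impl:

```python
from contextlib import contextmanager
from typing import Iterator

_mngr: 'ServiceMngr | None' = None  # the actor-global singleton slot


class ServiceMngr:
    def __init__(self) -> None:
        self.services: dict[str, str] = {}

    def start_service(self, name: str) -> None:
        self.services[name] = 'running'

    def cancel_service(self, name: str) -> None:
        self.services[name] = 'cancelled'


@contextmanager
def open_service_mngr() -> Iterator[ServiceMngr]:
    # allocate (or reuse) the process-global instance
    global _mngr
    if _mngr is None:
        _mngr = ServiceMngr()
    yield _mngr


def get_service_mngr() -> ServiceMngr:
    # acquire the already-opened singleton from anywhere
    if _mngr is None:
        raise RuntimeError('`open_service_mngr()` was never entered!')
    return _mngr


with open_service_mngr() as mngr:
    mngr.start_service('doggy')
    assert get_service_mngr() is mngr  # same actor-global instance
```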
Tyler Goodlet 2a80059129 Initial idea-notes dump and @singleton factory idea from `trio`-gitter 2026-03-17 23:45:01 -04:00
62 changed files with 477 additions and 1457 deletions

View File

@ -20,7 +20,7 @@ async def sleep(
async def open_ctx( async def open_ctx(
n: tractor.runtime._supervise.ActorNursery n: tractor._supervise.ActorNursery
): ):
# spawn both actors # spawn both actors

View File

@ -10,7 +10,7 @@ async def main(service_name):
await an.start_actor(service_name) await an.start_actor(service_name)
async with tractor.get_registry() as portal: async with tractor.get_registry() as portal:
print(f"Registrar is listening on {portal.channel}") print(f"Arbiter is listening on {portal.channel}")
async with tractor.wait_for_actor(service_name) as sockaddr: async with tractor.wait_for_actor(service_name) as sockaddr:
print(f"my_service is found at {sockaddr}") print(f"my_service is found at {sockaddr}")

View File

@ -9,8 +9,6 @@ import os
import signal import signal
import platform import platform
import time import time
from pathlib import Path
from typing import Literal
import pytest import pytest
import tractor import tractor
@ -54,76 +52,6 @@ no_macos = pytest.mark.skipif(
) )
def get_cpu_state(
icpu: int = 0,
setting: Literal[
'scaling_governor',
'*_pstate_max_freq',
'scaling_max_freq',
# 'scaling_cur_freq',
] = '*_pstate_max_freq',
) -> tuple[
Path,
str|int,
]|None:
'''
Attempt to read the (first) CPU's setting according
to the set `setting` from under the file-sys,
/sys/devices/system/cpu/cpu0/cpufreq/{setting}
Useful to determine latency headroom for various perf affected
test suites.
'''
try:
# Read governor for core 0 (usually same for all)
setting_path: Path = list(
Path(f'/sys/devices/system/cpu/cpu{icpu}/cpufreq/')
.glob(f'{setting}')
)[0] # <- XXX must be single match!
with open(
setting_path,
'r',
) as f:
return (
setting_path,
f.read().strip(),
)
except (FileNotFoundError, IndexError):
return None
def cpu_scaling_factor() -> float:
'''
Return a latency-headroom multiplier (>= 1.0) reflecting how
much to inflate time-limits when CPU-freq scaling is active on
linux.
When no scaling info is available (non-linux, missing sysfs),
returns 1.0 (i.e. no headroom adjustment needed).
'''
if _non_linux:
return 1.
mx = get_cpu_state()
cur = get_cpu_state(setting='scaling_max_freq')
if mx is None or cur is None:
return 1.
_mx_pth, max_freq = mx
_cur_pth, cur_freq = cur
cpu_scaled: float = int(cur_freq) / int(max_freq)
if cpu_scaled != 1.:
return 1. / (
cpu_scaled * 2 # <- bc likely "dual threaded"
)
return 1.
def pytest_addoption( def pytest_addoption(
parser: pytest.Parser, parser: pytest.Parser,
): ):

View File

@ -126,7 +126,7 @@ def test_shield_pause(
child.pid, child.pid,
signal.SIGINT, signal.SIGINT,
) )
from tractor.runtime._supervise import _shutdown_msg from tractor._supervise import _shutdown_msg
expect( expect(
child, child,
# 'Shutting down actor runtime', # 'Shutting down actor runtime',

View File

@ -8,16 +8,17 @@ from pathlib import Path
import pytest import pytest
import trio import trio
import tractor import tractor
from tractor import Actor from tractor import (
from tractor.runtime import _state Actor,
from tractor.discovery import _addr _state,
_addr,
)
@pytest.fixture @pytest.fixture
def bindspace_dir_str() -> str: def bindspace_dir_str() -> str:
from tractor.runtime._state import get_rt_dir rt_dir: Path = tractor._state.get_rt_dir()
rt_dir: Path = get_rt_dir()
bs_dir: Path = rt_dir / 'doggy' bs_dir: Path = rt_dir / 'doggy'
bs_dir_str: str = str(bs_dir) bs_dir_str: str = str(bs_dir)
assert not bs_dir.is_dir() assert not bs_dir.is_dir()

View File

@ -13,9 +13,9 @@ from tractor import (
Portal, Portal,
ipc, ipc,
msg, msg,
_state,
_addr,
) )
from tractor.runtime import _state
from tractor.discovery import _addr
@tractor.context @tractor.context
async def chk_tpts( async def chk_tpts(

View File

@ -61,7 +61,7 @@ async def maybe_expect_raises(
Async wrapper for ensuring errors propagate from the inner scope. Async wrapper for ensuring errors propagate from the inner scope.
''' '''
if tractor.debug_mode(): if tractor._state.debug_mode():
timeout += 999 timeout += 999
with trio.fail_after(timeout): with trio.fail_after(timeout):

View File

@ -490,7 +490,7 @@ def test_cancel_via_SIGINT(
"""Ensure that a control-C (SIGINT) signal cancels both the parent and """Ensure that a control-C (SIGINT) signal cancels both the parent and
child processes in trionic fashion child processes in trionic fashion
""" """
pid: int = os.getpid() pid = os.getpid()
async def main(): async def main():
with trio.fail_after(2): with trio.fail_after(2):
@ -517,8 +517,6 @@ def test_cancel_via_SIGINT_other_task(
started from a seperate ``trio`` child task. started from a seperate ``trio`` child task.
''' '''
from .conftest import cpu_scaling_factor
pid: int = os.getpid() pid: int = os.getpid()
timeout: float = ( timeout: float = (
4 if _non_linux 4 if _non_linux
@ -527,11 +525,6 @@ def test_cancel_via_SIGINT_other_task(
if _friggin_windows: # smh if _friggin_windows: # smh
timeout += 1 timeout += 1
# add latency headroom for CPU freq scaling (auto-cpufreq et al.)
headroom: float = cpu_scaling_factor()
if headroom != 1.:
timeout *= headroom
async def spawn_and_sleep_forever( async def spawn_and_sleep_forever(
task_status=trio.TASK_STATUS_IGNORED task_status=trio.TASK_STATUS_IGNORED
): ):

View File

@ -10,20 +10,7 @@ from tractor._testing import tractor_test
MESSAGE = 'tractoring at full speed' MESSAGE = 'tractoring at full speed'
def test_empty_mngrs_input_raises( def test_empty_mngrs_input_raises() -> None:
tpt_proto: str,
) -> None:
# TODO, the `open_actor_cluster()` teardown hangs
# intermittently on UDS when `gather_contexts(mngrs=())`
# raises `ValueError` mid-setup; likely a race in the
# actor-nursery cleanup vs UDS socket shutdown. Needs
# a deeper look at `._clustering`/`._supervise` teardown
# paths with the UDS transport.
if tpt_proto == 'uds':
pytest.skip(
'actor-cluster teardown hangs intermittently on UDS'
)
async def main(): async def main():
with trio.fail_after(3): with trio.fail_after(3):
async with ( async with (

View File

@ -26,7 +26,7 @@ from tractor._exceptions import (
StreamOverrun, StreamOverrun,
ContextCancelled, ContextCancelled,
) )
from tractor.runtime._state import current_ipc_ctx from tractor._state import current_ipc_ctx
from tractor._testing import ( from tractor._testing import (
tractor_test, tractor_test,
@ -939,7 +939,7 @@ def test_one_end_stream_not_opened(
''' '''
overrunner, buf_size_increase, entrypoint = overrun_by overrunner, buf_size_increase, entrypoint = overrun_by
from tractor.runtime._runtime import Actor from tractor._runtime import Actor
buf_size = buf_size_increase + Actor.msg_buffer_size buf_size = buf_size_increase + Actor.msg_buffer_size
timeout: float = ( timeout: float = (

View File

@ -1,7 +1,7 @@
''' """
Discovery subsystem via a "registrar" actor scenarios. Discovery subsys.
''' """
import os import os
import signal import signal
import platform import platform
@ -24,7 +24,7 @@ async def test_reg_then_unreg(
reg_addr: tuple, reg_addr: tuple,
): ):
actor = tractor.current_actor() actor = tractor.current_actor()
assert actor.is_registrar assert actor.is_arbiter
assert len(actor._registry) == 1 # only self is registered assert len(actor._registry) == 1 # only self is registered
async with tractor.open_nursery( async with tractor.open_nursery(
@ -35,7 +35,7 @@ async def test_reg_then_unreg(
uid = portal.channel.aid.uid uid = portal.channel.aid.uid
async with tractor.get_registry(reg_addr) as aportal: async with tractor.get_registry(reg_addr) as aportal:
# this local actor should be the registrar # this local actor should be the arbiter
assert actor is aportal.actor assert actor is aportal.actor
async with tractor.wait_for_actor('actor'): async with tractor.wait_for_actor('actor'):
@ -154,7 +154,7 @@ async def unpack_reg(
actor_or_portal: tractor.Portal|tractor.Actor, actor_or_portal: tractor.Portal|tractor.Actor,
): ):
''' '''
Get and unpack a "registry" RPC request from the registrar Get and unpack a "registry" RPC request from the "arbiter" registry
system. system.
''' '''
@ -163,10 +163,7 @@ async def unpack_reg(
else: else:
msg = await actor_or_portal.run_from_ns('self', 'get_registry') msg = await actor_or_portal.run_from_ns('self', 'get_registry')
return { return {tuple(key.split('.')): val for key, val in msg.items()}
tuple(key.split('.')): val
for key, val in msg.items()
}
async def spawn_and_check_registry( async def spawn_and_check_registry(
@ -197,15 +194,15 @@ async def spawn_and_check_registry(
actor = tractor.current_actor() actor = tractor.current_actor()
if remote_arbiter: if remote_arbiter:
assert not actor.is_registrar assert not actor.is_arbiter
if actor.is_registrar: if actor.is_arbiter:
extra = 1 # registrar is local root actor extra = 1 # arbiter is local root actor
get_reg = partial(unpack_reg, actor) get_reg = partial(unpack_reg, actor)
else: else:
get_reg = partial(unpack_reg, portal) get_reg = partial(unpack_reg, portal)
extra = 2 # local root actor + remote registrar extra = 2 # local root actor + remote arbiter
# ensure current actor is registered # ensure current actor is registered
registry: dict = await get_reg() registry: dict = await get_reg()
@ -285,7 +282,7 @@ def test_subactors_unregister_on_cancel(
): ):
''' '''
Verify that cancelling a nursery results in all subactors Verify that cancelling a nursery results in all subactors
deregistering themselves with the registrar. deregistering themselves with the arbiter.
''' '''
with pytest.raises(KeyboardInterrupt): with pytest.raises(KeyboardInterrupt):
@ -314,7 +311,7 @@ def test_subactors_unregister_on_cancel_remote_daemon(
''' '''
Verify that cancelling a nursery results in all subactors Verify that cancelling a nursery results in all subactors
deregistering themselves with a **remote** (not in the local deregistering themselves with a **remote** (not in the local
process tree) registrar. process tree) arbiter.
''' '''
with pytest.raises(KeyboardInterrupt): with pytest.raises(KeyboardInterrupt):
@ -359,24 +356,20 @@ async def close_chans_before_nursery(
try: try:
get_reg = partial(unpack_reg, aportal) get_reg = partial(unpack_reg, aportal)
async with tractor.open_nursery() as an: async with tractor.open_nursery() as tn:
portal1 = await an.start_actor( portal1 = await tn.start_actor(
name='consumer1', name='consumer1', enable_modules=[__name__])
enable_modules=[__name__], portal2 = await tn.start_actor(
) 'consumer2', enable_modules=[__name__])
portal2 = await an.start_actor(
'consumer2',
enable_modules=[__name__],
)
async with ( # TODO: compact this back as was in last commit once
portal1.open_stream_from( # 3.9+, see https://github.com/goodboy/tractor/issues/207
async with portal1.open_stream_from(
stream_forever stream_forever
) as agen1, ) as agen1:
portal2.open_stream_from( async with portal2.open_stream_from(
stream_forever stream_forever
) as agen2, ) as agen2:
):
async with ( async with (
collapse_eg(), collapse_eg(),
trio.open_nursery() as tn, trio.open_nursery() as tn,
@ -387,7 +380,7 @@ async def close_chans_before_nursery(
await streamer(agen2) await streamer(agen2)
finally: finally:
# Kill the root nursery thus resulting in # Kill the root nursery thus resulting in
# normal registrar channel ops to fail during # normal arbiter channel ops to fail during
# teardown. It doesn't seem like this is # teardown. It doesn't seem like this is
# reliably triggered by an external SIGINT. # reliably triggered by an external SIGINT.
# tractor.current_actor()._root_nursery.cancel_scope.cancel() # tractor.current_actor()._root_nursery.cancel_scope.cancel()
@ -399,7 +392,6 @@ async def close_chans_before_nursery(
# also kill off channels cuz why not # also kill off channels cuz why not
await agen1.aclose() await agen1.aclose()
await agen2.aclose() await agen2.aclose()
finally: finally:
with trio.CancelScope(shield=True): with trio.CancelScope(shield=True):
await trio.sleep(1) await trio.sleep(1)
@ -420,7 +412,7 @@ def test_close_channel_explicit(
''' '''
Verify that closing a stream explicitly and killing the actor's Verify that closing a stream explicitly and killing the actor's
"root nursery" **before** the containing nursery tears down also "root nursery" **before** the containing nursery tears down also
results in subactor(s) deregistering from the registrar. results in subactor(s) deregistering from the arbiter.
''' '''
with pytest.raises(KeyboardInterrupt): with pytest.raises(KeyboardInterrupt):
@ -435,7 +427,7 @@ def test_close_channel_explicit(
@pytest.mark.parametrize('use_signal', [False, True]) @pytest.mark.parametrize('use_signal', [False, True])
def test_close_channel_explicit_remote_registrar( def test_close_channel_explicit_remote_arbiter(
daemon: subprocess.Popen, daemon: subprocess.Popen,
start_method: str, start_method: str,
use_signal: bool, use_signal: bool,
@ -444,7 +436,7 @@ def test_close_channel_explicit_remote_registrar(
''' '''
Verify that closing a stream explicitly and killing the actor's Verify that closing a stream explicitly and killing the actor's
"root nursery" **before** the containing nursery tears down also "root nursery" **before** the containing nursery tears down also
results in subactor(s) deregistering from the registrar. results in subactor(s) deregistering from the arbiter.
''' '''
with pytest.raises(KeyboardInterrupt): with pytest.raises(KeyboardInterrupt):
@ -456,65 +448,3 @@ def test_close_channel_explicit_remote_registrar(
remote_arbiter=True, remote_arbiter=True,
), ),
) )
@tractor.context
async def kill_transport(
ctx: tractor.Context,
) -> None:
await ctx.started()
actor: tractor.Actor = tractor.current_actor()
actor.ipc_server.cancel()
await trio.sleep_forever()
# @pytest.mark.parametrize('use_signal', [False, True])
def test_stale_entry_is_deleted(
debug_mode: bool,
daemon: subprocess.Popen,
start_method: str,
reg_addr: tuple,
):
'''
Ensure that when a stale entry is detected in the registrar's
table that the `find_actor()` API takes care of deleting the
stale entry and not delivering a bad portal.
'''
async def main():
name: str = 'transport_fails_actor'
_reg_ptl: tractor.Portal
an: tractor.ActorNursery
async with (
tractor.open_nursery(
debug_mode=debug_mode,
registry_addrs=[reg_addr],
) as an,
tractor.get_registry(reg_addr) as _reg_ptl,
):
ptl: tractor.Portal = await an.start_actor(
name,
enable_modules=[__name__],
)
async with ptl.open_context(
kill_transport,
) as (first, ctx):
async with tractor.find_actor(
name,
registry_addrs=[reg_addr],
) as maybe_portal:
# because the transitive
# `._discovery.maybe_open_portal()` call should
# fail and implicitly call `.delete_addr()`
assert maybe_portal is None
registry: dict = await unpack_reg(_reg_ptl)
assert ptl.chan.aid.uid not in registry
# should fail since we knocked out the IPC tpt XD
await ptl.cancel_actor()
await an.cancel()
trio.run(main)

View File

@ -94,10 +94,8 @@ def run_example_in_subproc(
for f in p[2] for f in p[2]
if ( if (
'__' not in f # ignore any pkg-mods '__' not in f
# ignore any `__pycache__` subdir and f[0] != '_'
and '__pycache__' not in str(p[0])
and f[0] != '_' # ignore any WIP "examplel mods"
and 'debugging' not in p[0] and 'debugging' not in p[0]
and 'integration' not in p[0] and 'integration' not in p[0]
and 'advanced_faults' not in p[0] and 'advanced_faults' not in p[0]
@ -145,19 +143,12 @@ def test_example(
'This test does run just fine "in person" however..' 'This test does run just fine "in person" however..'
) )
from .conftest import cpu_scaling_factor
timeout: float = ( timeout: float = (
60 60
if ci_env and _non_linux if ci_env and _non_linux
else 16 else 16
) )
# add latency headroom for CPU freq scaling (auto-cpufreq et al.)
headroom: float = cpu_scaling_factor()
if headroom != 1.:
timeout *= headroom
with open(ex_file, 'r') as ex: with open(ex_file, 'r') as ex:
code = ex.read() code = ex.read()

View File

@ -26,8 +26,8 @@ from tractor import (
to_asyncio, to_asyncio,
RemoteActorError, RemoteActorError,
ContextCancelled, ContextCancelled,
_state,
) )
from tractor.runtime import _state
from tractor.trionics import BroadcastReceiver from tractor.trionics import BroadcastReceiver
from tractor._testing import expect_ctxc from tractor._testing import expect_ctxc

View File

@ -201,7 +201,7 @@ async def stream_from_peer(
) -> None: ) -> None:
# sanity # sanity
assert tractor.debug_mode() == debug_mode assert tractor._state.debug_mode() == debug_mode
peer: Portal peer: Portal
try: try:
@ -841,7 +841,7 @@ async def serve_subactors(
async with open_nursery() as an: async with open_nursery() as an:
# sanity # sanity
assert tractor.debug_mode() == debug_mode assert tractor._state.debug_mode() == debug_mode
await ctx.started(peer_name) await ctx.started(peer_name)
async with ctx.open_stream() as ipc: async with ctx.open_stream() as ipc:
@ -880,7 +880,7 @@ async def client_req_subactor(
) -> None: ) -> None:
# sanity # sanity
if debug_mode: if debug_mode:
assert tractor.debug_mode() assert tractor._state.debug_mode()
# TODO: other cases to do with sub lifetimes: # TODO: other cases to do with sub lifetimes:
# -[ ] test that we can have the server spawn a sub # -[ ] test that we can have the server spawn a sub

View File

@ -300,43 +300,19 @@ def test_a_quadruple_example(
time_quad_ex: tuple[list[int], float], time_quad_ex: tuple[list[int], float],
ci_env: bool, ci_env: bool,
spawn_backend: str, spawn_backend: str,
test_log: tractor.log.StackLevelAdapter,
): ):
''' '''
This also serves as a "we'd like to be this fast" smoke test This also serves as a kind of "we'd like to be this fast test".
given past empirical eval of this suite.
''' '''
non_linux: bool = (_sys := platform.system()) != 'Linux' non_linux: bool = (_sys := platform.system()) != 'Linux'
this_fast_on_linux: float = 3
this_fast = (
6 if non_linux
else this_fast_on_linux
)
# ^ XXX NOTE,
# i've noticed that tweaking the CPU governor setting
# to not "always" enable "turbo" mode can result in latency
# which causes this limit to be too little. Not sure if it'd
# be worth it to adjust the linux value based on reading the
# CPU conf from the sys?
#
# For ex, see the `auto-cpufreq` docs on such settings,
# https://github.com/AdnanHodzic/auto-cpufreq?tab=readme-ov-file#example-config-file-contents
#
# HENCE this below latency-headroom compensation logic..
from .conftest import cpu_scaling_factor
headroom: float = cpu_scaling_factor()
if headroom != 1.:
this_fast = this_fast_on_linux * headroom
test_log.warning(
f'Adding latency headroom on linux bc CPU scaling,\n'
f'headroom: {headroom}\n'
f'this_fast_on_linux: {this_fast_on_linux} -> {this_fast}\n'
)
results, diff = time_quad_ex results, diff = time_quad_ex
assert results assert results
this_fast = (
6 if non_linux
else 3
)
assert diff < this_fast assert diff < this_fast
@ -377,7 +353,7 @@ def test_not_fast_enough_quad(
assert results is None assert results is None
@tractor_test(timeout=20) @tractor_test
async def test_respawn_consumer_task( async def test_respawn_consumer_task(
reg_addr: tuple, reg_addr: tuple,
spawn_backend: str, spawn_backend: str,

View File

@ -1,5 +1,5 @@
""" """
Registrar and "local" actor api Arbiter and "local" actor api
""" """
import time import time
@ -12,11 +12,11 @@ from tractor._testing import tractor_test
@pytest.mark.trio @pytest.mark.trio
async def test_no_runtime(): async def test_no_runtime():
"""A registrar must be established before any nurseries """An arbitter must be established before any nurseries
can be created. can be created.
(In other words ``tractor.open_root_actor()`` must be (In other words ``tractor.open_root_actor()`` must be engaged at
engaged at some point?) some point?)
""" """
with pytest.raises(RuntimeError) : with pytest.raises(RuntimeError) :
async with tractor.find_actor('doggy'): async with tractor.find_actor('doggy'):
@ -25,9 +25,9 @@ async def test_no_runtime():
@tractor_test @tractor_test
async def test_self_is_registered(reg_addr): async def test_self_is_registered(reg_addr):
"Verify waiting on the registrar to register itself using the standard api." "Verify waiting on the arbiter to register itself using the standard api."
actor = tractor.current_actor() actor = tractor.current_actor()
assert actor.is_registrar assert actor.is_arbiter
with trio.fail_after(0.2): with trio.fail_after(0.2):
async with tractor.wait_for_actor('root') as portal: async with tractor.wait_for_actor('root') as portal:
assert portal.channel.uid[0] == 'root' assert portal.channel.uid[0] == 'root'
@ -35,11 +35,11 @@ async def test_self_is_registered(reg_addr):
@tractor_test @tractor_test
async def test_self_is_registered_localportal(reg_addr): async def test_self_is_registered_localportal(reg_addr):
"Verify waiting on the registrar to register itself using a local portal." "Verify waiting on the arbiter to register itself using a local portal."
actor = tractor.current_actor() actor = tractor.current_actor()
assert actor.is_registrar assert actor.is_arbiter
async with tractor.get_registry(reg_addr) as portal: async with tractor.get_registry(reg_addr) as portal:
assert isinstance(portal, tractor.runtime._portal.LocalPortal) assert isinstance(portal, tractor._portal.LocalPortal)
with trio.fail_after(0.2): with trio.fail_after(0.2):
sockaddr = await portal.run_from_ns( sockaddr = await portal.run_from_ns(
@ -57,8 +57,8 @@ def test_local_actor_async_func(reg_addr):
async with tractor.open_root_actor( async with tractor.open_root_actor(
registry_addrs=[reg_addr], registry_addrs=[reg_addr],
): ):
# registrar is started in-proc if dne # arbiter is started in-proc if dne
assert tractor.current_actor().is_registrar assert tractor.current_actor().is_arbiter
for i in range(10): for i in range(10):
nums.append(i) nums.append(i)

View File

@ -17,11 +17,11 @@ from tractor._testing import (
) )
from tractor import ( from tractor import (
current_actor, current_actor,
_state,
Actor, Actor,
Context, Context,
Portal, Portal,
) )
from tractor.runtime import _state
from .conftest import ( from .conftest import (
sig_prog, sig_prog,
_INT_SIGNAL, _INT_SIGNAL,
@ -30,7 +30,7 @@ from .conftest import (
if TYPE_CHECKING: if TYPE_CHECKING:
from tractor.msg import Aid from tractor.msg import Aid
from tractor.discovery._addr import ( from tractor._addr import (
UnwrappedAddress, UnwrappedAddress,
) )
@ -53,19 +53,19 @@ def test_abort_on_sigint(
@tractor_test @tractor_test
async def test_cancel_remote_registrar( async def test_cancel_remote_arbiter(
daemon: subprocess.Popen, daemon: subprocess.Popen,
reg_addr: UnwrappedAddress, reg_addr: UnwrappedAddress,
): ):
assert not current_actor().is_registrar assert not current_actor().is_arbiter
async with tractor.get_registry(reg_addr) as portal: async with tractor.get_registry(reg_addr) as portal:
await portal.cancel_actor() await portal.cancel_actor()
time.sleep(0.1) time.sleep(0.1)
# the registrar channel server is cancelled but not its main task # the arbiter channel server is cancelled but not its main task
assert daemon.returncode is None assert daemon.returncode is None
# no registrar socket should exist # no arbiter socket should exist
with pytest.raises(OSError): with pytest.raises(OSError):
async with tractor.get_registry(reg_addr) as portal: async with tractor.get_registry(reg_addr) as portal:
pass pass
@ -80,7 +80,7 @@ def test_register_duplicate_name(
registry_addrs=[reg_addr], registry_addrs=[reg_addr],
) as an: ) as an:
assert not current_actor().is_registrar assert not current_actor().is_arbiter
p1 = await an.start_actor('doggy') p1 = await an.start_actor('doggy')
p2 = await an.start_actor('doggy') p2 = await an.start_actor('doggy')
@ -122,7 +122,7 @@ async def get_root_portal(
# connect back to our immediate parent which should also # connect back to our immediate parent which should also
# be the actor-tree's root. # be the actor-tree's root.
from tractor.discovery._discovery import get_root from tractor._discovery import get_root
ptl: Portal ptl: Portal
async with get_root() as ptl: async with get_root() as ptl:
root_aid: Aid = ptl.chan.aid root_aid: Aid = ptl.chan.aid

View File

@ -1,333 +0,0 @@
'''
Verify that externally registered remote actor error
types are correctly relayed, boxed, and re-raised across
IPC actor hops via `reg_err_types()`.
Also ensure that when custom error types are NOT registered
the framework indicates the lookup failure to the user.
'''
import pytest
import trio
import tractor
from tractor import (
Context,
Portal,
RemoteActorError,
)
from tractor._exceptions import (
get_err_type,
reg_err_types,
)
# -- custom app-level errors for testing --
class CustomAppError(Exception):
'''
A hypothetical user-app error that should be
boxed+relayed by `tractor` IPC when registered.
'''
class AnotherAppError(Exception):
'''
A second custom error for multi-type registration.
'''
class UnregisteredAppError(Exception):
'''
A custom error that is intentionally NEVER
registered via `reg_err_types()` so we can
verify the framework's failure indication.
'''
# -- remote-task endpoints --
@tractor.context
async def raise_custom_err(
ctx: Context,
) -> None:
'''
Remote ep that raises a `CustomAppError`
after sync-ing with the caller.
'''
await ctx.started()
raise CustomAppError(
'the app exploded remotely'
)
@tractor.context
async def raise_another_err(
ctx: Context,
) -> None:
'''
Remote ep that raises `AnotherAppError`.
'''
await ctx.started()
raise AnotherAppError(
'another app-level kaboom'
)
@tractor.context
async def raise_unreg_err(
ctx: Context,
) -> None:
'''
Remote ep that raises an `UnregisteredAppError`
which has NOT been `reg_err_types()`-registered.
'''
await ctx.started()
raise UnregisteredAppError(
'this error type is unknown to tractor'
)
# -- unit tests for the type-registry plumbing --
class TestRegErrTypesPlumbing:
'''
Low-level checks on `reg_err_types()` and
`get_err_type()` without requiring IPC.
'''
def test_unregistered_type_returns_none(self):
'''
An unregistered custom error name should yield
`None` from `get_err_type()`.
'''
result = get_err_type('CustomAppError')
assert result is None
def test_register_and_lookup(self):
'''
After `reg_err_types()`, the custom type should
be discoverable via `get_err_type()`.
'''
reg_err_types([CustomAppError])
result = get_err_type('CustomAppError')
assert result is CustomAppError
def test_register_multiple_types(self):
'''
Registering a list of types should make each
one individually resolvable.
'''
reg_err_types([
CustomAppError,
AnotherAppError,
])
assert (
get_err_type('CustomAppError')
is CustomAppError
)
assert (
get_err_type('AnotherAppError')
is AnotherAppError
)
def test_builtin_types_always_resolve(self):
'''
Builtin error types like `RuntimeError` and
`ValueError` should always be found without
any prior registration.
'''
assert (
get_err_type('RuntimeError')
is RuntimeError
)
assert (
get_err_type('ValueError')
is ValueError
)
def test_tractor_native_types_resolve(self):
'''
`tractor`-internal exc types (e.g.
`ContextCancelled`) should always resolve.
'''
assert (
get_err_type('ContextCancelled')
is tractor.ContextCancelled
)
def test_boxed_type_str_without_ipc_msg(self):
'''
When a `RemoteActorError` is constructed
without an IPC msg (and no resolvable type),
`.boxed_type_str` should return `'<unknown>'`.
'''
rae = RemoteActorError('test')
assert rae.boxed_type_str == '<unknown>'
# -- IPC-level integration tests --
def test_registered_custom_err_relayed(
debug_mode: bool,
tpt_proto: str,
):
'''
When a custom error type is registered via
`reg_err_types()` on BOTH sides of an IPC dialog,
the parent should receive a `RemoteActorError`
whose `.boxed_type` matches the original custom
error type.
'''
reg_err_types([CustomAppError])
async def main():
async with tractor.open_nursery(
debug_mode=debug_mode,
enable_transports=[tpt_proto],
) as an:
ptl: Portal = await an.start_actor(
'custom-err-raiser',
enable_modules=[__name__],
)
async with ptl.open_context(
raise_custom_err,
) as (ctx, sent):
assert not sent
try:
await ctx.wait_for_result()
except RemoteActorError as rae:
assert rae.boxed_type is CustomAppError
assert rae.src_type is CustomAppError
assert 'the app exploded remotely' in str(
rae.tb_str
)
raise
with pytest.raises(RemoteActorError) as excinfo:
trio.run(main)
rae = excinfo.value
assert rae.boxed_type is CustomAppError
def test_registered_another_err_relayed(
debug_mode: bool,
tpt_proto: str,
):
'''
Same as above but for a different custom error
type to verify multi-type registration works
end-to-end over IPC.
'''
reg_err_types([AnotherAppError])
async def main():
async with tractor.open_nursery(
debug_mode=debug_mode,
enable_transports=[tpt_proto],
) as an:
ptl: Portal = await an.start_actor(
'another-err-raiser',
enable_modules=[__name__],
)
async with ptl.open_context(
raise_another_err,
) as (ctx, sent):
assert not sent
try:
await ctx.wait_for_result()
except RemoteActorError as rae:
assert (
rae.boxed_type
is AnotherAppError
)
raise
await an.cancel()
with pytest.raises(RemoteActorError) as excinfo:
trio.run(main)
rae = excinfo.value
assert rae.boxed_type is AnotherAppError
def test_unregistered_err_still_relayed(
debug_mode: bool,
tpt_proto: str,
):
'''
Verify that even when a custom error type is NOT registered via
`reg_err_types()`, the remote error is still relayed as
a `RemoteActorError` with all string-level info preserved
(traceback, type name, source actor uid).
The `.boxed_type` will be `None` (type obj can't be resolved) but
`.boxed_type_str` and `.src_type_str` still report the original
type name from the IPC msg.
This documents the expected limitation: without `reg_err_types()`
the `.boxed_type` property can NOT resolve to the original Python
type.
'''
# NOTE: intentionally do NOT call
# `reg_err_types([UnregisteredAppError])`
async def main():
async with tractor.open_nursery(
debug_mode=debug_mode,
enable_transports=[tpt_proto],
) as an:
ptl: Portal = await an.start_actor(
'unreg-err-raiser',
enable_modules=[__name__],
)
async with ptl.open_context(
raise_unreg_err,
) as (ctx, sent):
assert not sent
await ctx.wait_for_result()
await an.cancel()
with pytest.raises(RemoteActorError) as excinfo:
trio.run(main)
rae = excinfo.value
# the error IS relayed even without
# registration; type obj is unresolvable but
# all string-level info is preserved.
assert rae.boxed_type is None # NOT `UnregisteredAppError`
assert rae.src_type is None
# string names survive the IPC round-trip
# via the `Error` msg fields.
assert (
rae.src_type_str
==
'UnregisteredAppError'
)
assert (
rae.boxed_type_str
==
'UnregisteredAppError'
)
# original traceback content is preserved
assert 'this error type is unknown' in rae.tb_str
assert 'UnregisteredAppError' in rae.tb_str

View File

@@ -94,15 +94,15 @@ def test_runtime_vars_unset(
     after the root actor-runtime exits!

     '''
-    assert not tractor.runtime._state._runtime_vars['_debug_mode']
+    assert not tractor._state._runtime_vars['_debug_mode']

     async def main():
-        assert not tractor.runtime._state._runtime_vars['_debug_mode']
+        assert not tractor._state._runtime_vars['_debug_mode']
         async with tractor.open_nursery(
             debug_mode=True,
         ):
-            assert tractor.runtime._state._runtime_vars['_debug_mode']
+            assert tractor._state._runtime_vars['_debug_mode']

         # after runtime closure, should be reverted!
-        assert not tractor.runtime._state._runtime_vars['_debug_mode']
+        assert not tractor._state._runtime_vars['_debug_mode']

     trio.run(main)

View File

@@ -110,7 +110,7 @@ def test_rpc_errors(
     ) as n:

         actor = tractor.current_actor()
-        assert actor.is_registrar
+        assert actor.is_arbiter
         await n.run_in_actor(
             sleep_back_actor,
             actor_name=subactor_requests_to,

View File

@@ -39,7 +39,7 @@ async def spawn(
     ):
         # now runtime exists
         actor: tractor.Actor = tractor.current_actor()
-        assert actor.is_registrar == should_be_root
+        assert actor.is_arbiter == should_be_root

         # spawns subproc here
         portal: tractor.Portal = await an.run_in_actor(
@@ -68,7 +68,7 @@ async def spawn(
             assert result == 10
             return result
     else:
-        assert actor.is_registrar == should_be_root
+        assert actor.is_arbiter == should_be_root
         return 10
@@ -181,7 +181,7 @@ def test_loglevel_propagated_to_subactor(
     async def main():
         async with tractor.open_nursery(
-            name='registrar',
+            name='arbiter',
             start_method=start_method,
             arbiter_addr=reg_addr,
View File

@@ -30,23 +30,21 @@ from ._streaming import (
     MsgStream as MsgStream,
     stream as stream,
 )
-from .discovery._discovery import (
+from ._discovery import (
     get_registry as get_registry,
     find_actor as find_actor,
     wait_for_actor as wait_for_actor,
     query_actor as query_actor,
 )
-from .runtime._supervise import (
+from ._supervise import (
     open_nursery as open_nursery,
     ActorNursery as ActorNursery,
 )
-from .runtime._state import (
-    RuntimeVars as RuntimeVars,
+from ._state import (
     current_actor as current_actor,
-    current_ipc_ctx as current_ipc_ctx,
-    debug_mode as debug_mode,
-    get_runtime_vars as get_runtime_vars,
     is_root_process as is_root_process,
+    current_ipc_ctx as current_ipc_ctx,
+    debug_mode as debug_mode
 )
 from ._exceptions import (
     ContextCancelled as ContextCancelled,
@@ -67,10 +65,6 @@ from ._root import (
     open_root_actor as open_root_actor,
 )
 from .ipc import Channel as Channel
-from .runtime._portal import Portal as Portal
-from .runtime._runtime import Actor as Actor
-from .discovery._registry import (
-    Registrar as Registrar,
-    Arbiter as Arbiter,
-)
+from ._portal import Portal as Portal
+from ._runtime import Actor as Actor

 # from . import hilevel as hilevel

View File

@@ -27,15 +27,15 @@ from trio import (
     SocketListener,
 )

-from ..log import get_logger
-from ..runtime._state import (
+from .log import get_logger
+from ._state import (
     _def_tpt_proto,
 )
-from ..ipc._tcp import TCPAddress
-from ..ipc._uds import UDSAddress
+from .ipc._tcp import TCPAddress
+from .ipc._uds import UDSAddress

 if TYPE_CHECKING:
-    from ..runtime._runtime import Actor
+    from ._runtime import Actor

 log = get_logger()

View File

@@ -22,8 +22,8 @@ import argparse
 from ast import literal_eval

-from .runtime._runtime import Actor
-from .spawn._entry import _trio_main
+from ._runtime import Actor
+from ._entry import _trio_main


 def parse_uid(arg):

View File

@@ -97,7 +97,7 @@ from ._streaming import (
     MsgStream,
     open_stream_from_ctx,
 )
-from .runtime._state import (
+from ._state import (
     current_actor,
     debug_mode,
     _ctxvar_Context,
@@ -107,8 +107,8 @@ from .trionics import (
 )
 # ------ - ------
 if TYPE_CHECKING:
-    from .runtime._portal import Portal
-    from .runtime._runtime import Actor
+    from ._portal import Portal
+    from ._runtime import Actor
     from .ipc._transport import MsgTransport
     from .devx._frame_stack import (
         CallerInfo,

View File

@@ -28,29 +28,29 @@ from typing import (
 from contextlib import asynccontextmanager as acm

 from tractor.log import get_logger
-from ..trionics import (
+from .trionics import (
     gather_contexts,
     collapse_eg,
 )
-from ..ipc import _connect_chan, Channel
+from .ipc import _connect_chan, Channel
 from ._addr import (
     UnwrappedAddress,
     Address,
     wrap_address
 )
-from ..runtime._portal import (
+from ._portal import (
     Portal,
     open_portal,
     LocalPortal,
 )
-from ..runtime._state import (
+from ._state import (
     current_actor,
     _runtime_vars,
     _def_tpt_proto,
 )

 if TYPE_CHECKING:
-    from ..runtime._runtime import Actor
+    from ._runtime import Actor

 log = get_logger()
@@ -60,7 +60,7 @@ log = get_logger()
 async def get_registry(
     addr: UnwrappedAddress|None = None,
 ) -> AsyncGenerator[
-    Portal|LocalPortal|None,
+    Portal | LocalPortal | None,
     None,
 ]:
     '''
@@ -72,8 +72,8 @@ async def get_registry(
     '''
     actor: Actor = current_actor()
     if actor.is_registrar:
-        # we're already the registrar
-        # (likely a re-entrant call from the registrar actor)
+        # we're already the arbiter
+        # (likely a re-entrant call from the arbiter actor)
         yield LocalPortal(
             actor,
             Channel(transport=None)
@@ -153,27 +153,21 @@ async def query_actor(
     regaddr: UnwrappedAddress|None = None,
 ) -> AsyncGenerator[
-    tuple[UnwrappedAddress|None, Portal|LocalPortal|None],
+    UnwrappedAddress|None,
     None,
 ]:
     '''
     Lookup a transport address (by actor name) via querying a registrar
     listening @ `regaddr`.

-    Yields a `tuple` of `(addr, reg_portal)` where,
-    - `addr` is the transport protocol (socket) address or `None` if
-      no entry under that name exists,
-    - `reg_portal` is the `Portal` (or `LocalPortal` when the
-      current actor is the registrar) used for the lookup (or
-      `None` when the peer was found locally via
-      `get_peer_by_name()`).
+    Returns the transport protocol (socket) address or `None` if no
+    entry under that name exists.

     '''
     actor: Actor = current_actor()
     if (
         name == 'registrar'
-        and
-        actor.is_registrar
+        and actor.is_registrar
     ):
         raise RuntimeError(
             'The current actor IS the registry!?'
@@ -181,10 +175,10 @@ async def query_actor(
     maybe_peers: list[Channel]|None = get_peer_by_name(name)
     if maybe_peers:
-        yield maybe_peers[0].raddr, None
+        yield maybe_peers[0].raddr
         return

-    reg_portal: Portal|LocalPortal
+    reg_portal: Portal
     regaddr: Address = wrap_address(regaddr) or actor.reg_addrs[0]
     async with get_registry(regaddr) as reg_portal:
         # TODO: return portals to all available actors - for now
@@ -194,7 +188,8 @@ async def query_actor(
             'find_actor',
             name=name,
         )
-        yield addr, reg_portal
+        yield addr


 @acm
 async def maybe_open_portal(
@@ -209,48 +204,14 @@ async def maybe_open_portal(
     async with query_actor(
         name=name,
         regaddr=addr,
-    ) as (addr, reg_portal):
-        if not addr:
-            yield None
-            return
+    ) as addr:
+        pass

-        try:
-            async with _connect_chan(addr) as chan:
-                async with open_portal(chan) as portal:
-                    yield portal
-
-        # most likely we were unable to connect the
-        # transport and there is likely a stale entry in
-        # the registry actor's table, thus we need to
-        # instruct it to clear that stale entry and then
-        # more silently (pretend there was no reason but
-        # to) indicate that the target actor can't be
-        # contacted at that addr.
-        except OSError:
-            # NOTE: ensure we delete the stale entry
-            # from the registrar actor when available.
-            if reg_portal is not None:
-                uid: tuple[str, str]|None = await reg_portal.run_from_ns(
-                    'self',
-                    'delete_addr',
-                    addr=addr,
-                )
-                if uid:
-                    log.warning(
-                        f'Deleted stale registry entry !\n'
-                        f'addr: {addr!r}\n'
-                        f'uid: {uid!r}\n'
-                    )
-                else:
-                    log.warning(
-                        f'No registry entry found for addr: {addr!r}'
-                    )
-            else:
-                log.warning(
-                    f'Connection to {addr!r} failed'
-                    f' and no registry portal available'
-                    f' to delete stale entry.'
-                )
-            yield None
+    if addr:
+        async with _connect_chan(addr) as chan:
+            async with open_portal(chan) as portal:
+                yield portal
+    else:
+        yield None
@@ -268,10 +229,10 @@ async def find_actor(
     None,
 ]:
     '''
-    Ask the registrar to find actor(s) by name.
+    Ask the arbiter to find actor(s) by name.

-    Returns a connected portal to the last registered
-    matching actor known to the registrar.
+    Returns a connected portal to the last registered matching actor
+    known to the arbiter.

     '''
     # optimization path, use any pre-existing peer channel
@@ -319,7 +280,7 @@ async def find_actor(
     if not any(portals):
         if raise_on_none:
             raise RuntimeError(
-                f'No actor {name!r} found registered @ {registry_addrs!r}'
+                f'No actor "{name}" found registered @ {registry_addrs}'
             )
         yield None
         return

View File

@@ -29,19 +29,19 @@ from typing import (

 import trio  # type: ignore

-from ..log import (
+from .log import (
     get_console_log,
     get_logger,
 )
-from ..runtime import _state
-from ..devx import (
+from . import _state
+from .devx import (
     _frame_stack,
     pformat,
 )
-# from ..msg import pretty_struct
-from ..to_asyncio import run_as_asyncio_guest
-from ..discovery._addr import UnwrappedAddress
-from ..runtime._runtime import (
+# from .msg import pretty_struct
+from .to_asyncio import run_as_asyncio_guest
+from ._addr import UnwrappedAddress
+from ._runtime import (
     async_main,
     Actor,
 )

View File

@@ -43,7 +43,7 @@ from msgspec import (
     ValidationError,
 )

-from tractor.runtime._state import current_actor
+from tractor._state import current_actor
 from tractor.log import get_logger
 from tractor.msg import (
     Error,
@@ -187,31 +187,7 @@ _body_fields: list[str] = list(
 )


-def reg_err_types(
-    exc_types: list[Type[Exception]],
-) -> None:
-    '''
-    Register custom exception types for local lookup.
-
-    Such that error types can be registered by an external
-    `tractor`-use-app code base which are expected to be raised
-    remotely; enables them being re-raised on the receiver side of
-    some inter-actor IPC dialog.
-
-    '''
-    for exc_type in exc_types:
-        log.debug(
-            f'Register custom exception,\n'
-            f'{exc_type!r}\n'
-        )
-        setattr(
-            _this_mod,
-            exc_type.__name__,
-            exc_type,
-        )
-
-
-def get_err_type(type_name: str) -> Type[BaseException]|None:
+def get_err_type(type_name: str) -> BaseException|None:
     '''
     Look up an exception type by name from the set of locally known
     namespaces:
@@ -325,8 +301,7 @@ class RemoteActorError(Exception):
         # also pertains to our long long oustanding issue XD
         # https://github.com/goodboy/tractor/issues/5
         self._boxed_type: BaseException = boxed_type
-        self._src_type: Type[BaseException]|None = None
-        self._src_type_resolved: bool = False
+        self._src_type: BaseException|None = None
         self._ipc_msg: Error|None = ipc_msg
         self._extra_msgdata = extra_msgdata
@@ -435,41 +410,24 @@ class RemoteActorError(Exception):
         return self._ipc_msg.src_type_str

     @property
-    def src_type(self) -> Type[BaseException]|None:
+    def src_type(self) -> str:
         '''
-        Error type raised by original remote faulting
-        actor.
+        Error type raised by original remote faulting actor.

-        When the error has only been relayed a single
-        actor-hop this will be the same as
-        `.boxed_type`.
-
-        If the type can not be resolved locally (i.e.
-        it was not registered via `reg_err_types()`)
-        a warning is logged and `None` is returned;
-        all string-level error info (`.src_type_str`,
-        `.tb_str`, etc.) remains available.
+        When the error has only been relayed a single actor-hop
+        this will be the same as the `.boxed_type`.

         '''
-        if not self._src_type_resolved:
-            self._src_type_resolved = True
-            if self._ipc_msg is None:
-                return None
+        if self._src_type is None:
             self._src_type = get_err_type(
                 self._ipc_msg.src_type_str
             )
             if not self._src_type:
-                log.warning(
-                    f'Failed to lookup src error type via\n'
-                    f'`tractor._exceptions.get_err_type()`:\n'
-                    f'\n'
-                    f'`{self._ipc_msg.src_type_str}`'
-                    f' is not registered!\n'
-                    f'\n'
-                    f'Call `reg_err_types()` to enable'
-                    f' full type reconstruction.\n'
+                raise TypeError(
+                    f'Failed to lookup src error type with '
+                    f'`tractor._exceptions.get_err_type()` :\n'
+                    f'{self.src_type_str}'
                 )

         return self._src_type
@@ -477,30 +435,20 @@ class RemoteActorError(Exception):
     @property
     def boxed_type_str(self) -> str:
         '''
-        String-name of the (last hop's) boxed error
-        type.
-
-        Falls back to the IPC-msg-encoded type-name
-        str when the type can not be resolved locally
-        (e.g. unregistered custom errors).
+        String-name of the (last hop's) boxed error type.

         '''
         # TODO, maybe support also serializing the
-        # `ExceptionGroup.exceptions: list[BaseException]`
-        # set under certain conditions?
+        # `ExceptionGroup.exeptions: list[BaseException]` set under
+        # certain conditions?
         bt: Type[BaseException] = self.boxed_type
         if bt:
             return str(bt.__name__)

-        # fallback to the str name from the IPC msg
-        # when the type obj can't be resolved.
-        if self._ipc_msg:
-            return self._ipc_msg.boxed_type_str
-        return '<unknown>'
+        return ''

     @property
-    def boxed_type(self) -> Type[BaseException]|None:
+    def boxed_type(self) -> Type[BaseException]:
         '''
         Error type boxed by last actor IPC hop.
@@ -729,22 +677,10 @@ class RemoteActorError(Exception):
         failing actor's remote env.

         '''
-        # TODO: better tb insertion and all the fancier
-        # dunder metadata stuff as per `.__context__`
-        # etc. and friends:
+        # TODO: better tb insertion and all the fancier dunder
+        # metadata stuff as per `.__context__` etc. and friends:
         # https://github.com/python-trio/trio/issues/611
-        src_type_ref: Type[BaseException]|None = (
-            self.src_type
-        )
-        if src_type_ref is None:
-            # unresolvable type: fall back to
-            # a `RuntimeError` preserving original
-            # traceback + type name.
-            return RuntimeError(
-                f'{self.src_type_str}: '
-                f'{self.tb_str}'
-            )
+        src_type_ref: Type[BaseException] = self.src_type
         return src_type_ref(self.tb_str)

     # TODO: local recontruction of nested inception for a given
@@ -1273,31 +1209,14 @@ def unpack_error(
     if not isinstance(msg, Error):
         return None

-    # try to lookup a suitable error type from the
-    # local runtime env then use it to construct a
-    # local instance.
+    # try to lookup a suitable error type from the local runtime
+    # env then use it to construct a local instance.
+    # boxed_type_str: str = error_dict['boxed_type_str']
     boxed_type_str: str = msg.boxed_type_str
-    boxed_type: Type[BaseException]|None = get_err_type(
-        boxed_type_str
-    )
+    boxed_type: Type[BaseException] = get_err_type(boxed_type_str)

-    if boxed_type is None:
-        log.warning(
-            f'Failed to resolve remote error type\n'
-            f'`{boxed_type_str}` - boxing as\n'
-            f'`RemoteActorError` with original\n'
-            f'traceback preserved.\n'
-            f'\n'
-            f'Call `reg_err_types()` to enable\n'
-            f'full type reconstruction.\n'
-        )
-
-    # retrieve the error's msg-encoded remote-env
-    # info
-    message: str = (
-        f'remote task raised a '
-        f'{msg.boxed_type_str!r}\n'
-    )
+    # retrieve the error's msg-encoded remotoe-env info
+    message: str = f'remote task raised a {msg.boxed_type_str!r}\n'

     # TODO: do we even really need these checks for RAEs?
     if boxed_type_str in [

View File

@@ -125,7 +125,7 @@ class PatchedForkServer(ForkServer):
             self._forkserver_pid = None

             # XXX only thing that changed!
-            cmd = ('from tractor.spawn._forkserver_override import main; ' +
+            cmd = ('from tractor._forkserver_override import main; ' +
                    'main(%d, %d, %r, **%r)')

             if self._preload_modules:

View File

@@ -39,30 +39,30 @@ import warnings

 import trio

-from ..trionics import (
+from .trionics import (
     maybe_open_nursery,
     collapse_eg,
 )
 from ._state import (
     current_actor,
 )
-from ..ipc import Channel
-from ..log import get_logger
-from ..msg import (
+from .ipc import Channel
+from .log import get_logger
+from .msg import (
     # Error,
     PayloadMsg,
     NamespacePath,
     Return,
 )
-from .._exceptions import (
+from ._exceptions import (
     NoResult,
     TransportClosed,
 )
-from .._context import (
+from ._context import (
     Context,
     open_context_from_portal,
 )
-from .._streaming import (
+from ._streaming import (
     MsgStream,
 )

View File

@@ -37,20 +37,19 @@ import warnings

 import trio

-from .runtime import _runtime
-from .discovery._registry import Registrar
+from . import _runtime
 from .devx import (
     debug,
     _frame_stack,
     pformat as _pformat,
 )
-from .spawn import _spawn
-from .runtime import _state
+from . import _spawn
+from . import _state
 from . import log
 from .ipc import (
     _connect_chan,
 )
-from .discovery._addr import (
+from ._addr import (
     Address,
     UnwrappedAddress,
     default_lo_addrs,
@@ -268,6 +267,7 @@ async def open_root_actor(
     if start_method is not None:
         _spawn.try_set_start_method(start_method)

+    # TODO! remove this ASAP!
     if arbiter_addr is not None:
         warnings.warn(
             '`arbiter_addr` is now deprecated\n'
@@ -400,7 +400,7 @@ async def open_root_actor(
                     'registry socket(s) already bound'
                 )

-            # we were able to connect to a registrar
+            # we were able to connect to an arbiter
             logger.info(
                 f'Registry(s) seem(s) to exist @ {ponged_addrs}'
             )
@@ -453,7 +453,8 @@ async def open_root_actor(
             # https://github.com/goodboy/tractor/pull/348
             # https://github.com/goodboy/tractor/issues/296

-            actor = Registrar(
+            # TODO: rename as `RootActor` or is that even necessary?
+            actor = _runtime.Arbiter(
                 name=name or 'registrar',
                 uuid=mk_uuid(),
                 registry_addrs=registry_addrs,

View File

@@ -43,11 +43,11 @@ from trio import (
     TaskStatus,
 )

-from ..ipc import Channel
-from .._context import (
+from .ipc import Channel
+from ._context import (
     Context,
 )
-from .._exceptions import (
+from ._exceptions import (
     ContextCancelled,
     RemoteActorError,
     ModuleNotExposed,
@@ -56,19 +56,19 @@ from .._exceptions import (
     pack_error,
     unpack_error,
 )
-from ..trionics import (
+from .trionics import (
     collapse_eg,
     is_multi_cancelled,
     maybe_raise_from_masking_exc,
 )
-from ..devx import (
+from .devx import (
     debug,
     add_div,
     pformat as _pformat,
 )
 from . import _state
-from ..log import get_logger
-from ..msg import (
+from .log import get_logger
+from .msg import (
     current_codec,
     MsgCodec,
     PayloadT,

View File

@@ -83,46 +83,46 @@ from tractor.msg import (
     pretty_struct,
     types as msgtypes,
 )
-from ..trionics import (
+from .trionics import (
     collapse_eg,
     maybe_open_nursery,
 )
-from ..ipc import (
+from .ipc import (
     Channel,
     # IPCServer,  # causes cycles atm..
     _server,
 )
-from ..discovery._addr import (
+from ._addr import (
     UnwrappedAddress,
     Address,
     # default_lo_addrs,
     get_address_cls,
     wrap_address,
 )
-from .._context import (
+from ._context import (
     mk_context,
     Context,
 )
-from ..log import get_logger
-from .._exceptions import (
+from .log import get_logger
+from ._exceptions import (
     ContextCancelled,
     InternalError,
     ModuleNotExposed,
     MsgTypeError,
     unpack_error,
 )
-from ..devx import (
+from .devx import (
     debug,
     pformat as _pformat
 )
-from ..discovery._discovery import get_registry
+from ._discovery import get_registry
 from ._portal import Portal
 from . import _state
-from ..spawn import _mp_fixup_main
+from . import _mp_fixup_main
 from . import _rpc

 if TYPE_CHECKING:
-    from ._supervise import ActorNursery  # noqa
+    from ._supervise import ActorNursery
     from trio._channel import MemoryChannelState
@@ -175,21 +175,13 @@ class Actor:
     dialog.

     '''
-    is_registrar: bool = False
-
-    @property
-    def is_arbiter(self) -> bool:
-        '''
-        Deprecated, use `.is_registrar`.
-
-        '''
-        warnings.warn(
-            '`Actor.is_arbiter` is deprecated.\n'
-            'Use `.is_registrar` instead.',
-            DeprecationWarning,
-            stacklevel=2,
-        )
-        return self.is_registrar
+    # ugh, we need to get rid of this and replace with a "registry" sys
+    # https://github.com/goodboy/tractor/issues/216
+    is_arbiter: bool = False
+
+    @property
+    def is_registrar(self) -> bool:
+        return self.is_arbiter

     @property
     def is_root(self) -> bool:
@@ -245,6 +237,7 @@ class Actor:
         registry_addrs: list[Address]|None = None,
         spawn_method: str|None = None,

+        # TODO: remove!
         arbiter_addr: UnwrappedAddress|None = None,

     ) -> None:
@@ -294,8 +287,8 @@ class Actor:
         ]

         # marked by the process spawning backend at startup
-        # will be None for the parent most process started
-        # manually by the user (the "registrar")
+        # will be None for the parent most process started manually
+        # by the user (currently called the "arbiter")
         self._spawn_method: str = spawn_method

         # RPC state
@@ -914,7 +907,7 @@ class Actor:
         # TODO! -[ ] another `Struct` for rtvs..
         rvs: dict[str, Any] = spawnspec._runtime_vars
         if rvs['_debug_mode']:
-            from ..devx import (
+            from .devx import (
                 enable_stack_on_sig,
                 maybe_init_greenback,
             )
@@ -1663,7 +1656,7 @@ async def async_main(
         # TODO, just read direct from ipc_server?
         accept_addrs: list[UnwrappedAddress] = actor.accept_addrs

-        # Register with the registrar if we're told its addr
+        # Register with the arbiter if we're told its addr
         log.runtime(
             f'Registering `{actor.name}` => {pformat(accept_addrs)}\n'
             # ^-TODO-^ we should instead show the maddr here^^
@@ -1887,8 +1880,153 @@ async def async_main(
     log.runtime(teardown_report)


-# Backward compat: class moved to discovery._registry
-from ..discovery._registry import (
-    Registrar as Registrar,
-)
-Arbiter = Registrar
+# TODO: rename to `Registry` and move to `.discovery._registry`!
+class Arbiter(Actor):
+    '''
+    A special registrar (and for now..) `Actor` who can contact all
+    other actors within its immediate process tree and possibly keeps
+    a registry of others meant to be discoverable in a distributed
+    application. Normally the registrar is also the "root actor" and
+    thus always has access to the top-most-level actor (process)
+    nursery.
+
+    By default, the registrar is always initialized when and if no
+    other registrar socket addrs have been specified to runtime
+    init entry-points (such as `open_root_actor()` or
+    `open_nursery()`). Any time a new main process is launched (and
+    thus thus a new root actor created) and, no existing registrar
+    can be contacted at the provided `registry_addr`, then a new
+    one is always created; however, if one can be reached it is
+    used.
+
+    Normally a distributed app requires at least registrar per
+    logical host where for that given "host space" (aka localhost
+    IPC domain of addresses) it is responsible for making all other
+    host (local address) bound actors *discoverable* to external
+    actor trees running on remote hosts.
+
+    '''
+    is_arbiter = True
+
+    # TODO, implement this as a read on there existing a `._state` of
+    # some sort setup by whenever we impl this all as
+    # a `.discovery._registry.open_registry()` API
+    def is_registry(self) -> bool:
+        return self.is_arbiter
+
+    def __init__(
+        self,
+        *args,
+        **kwargs,
+    ) -> None:
+        self._registry: dict[
+            tuple[str, str],
+            UnwrappedAddress,
+        ] = {}
+        self._waiters: dict[
+            str,
+            # either an event to sync to receiving an actor uid (which
+            # is filled in once the actor has sucessfully registered),
+            # or that uid after registry is complete.
+            list[trio.Event | tuple[str, str]]
+        ] = {}
+
+        super().__init__(*args, **kwargs)
+
+    async def find_actor(
+        self,
+        name: str,
+    ) -> UnwrappedAddress|None:
+        for uid, addr in self._registry.items():
+            if name in uid:
+                return addr
+        return None
+
+    async def get_registry(
+        self
+    ) -> dict[str, UnwrappedAddress]:
+        '''
+        Return current name registry.
+
+        This method is async to allow for cross-actor invocation.
+
+        '''
+        # NOTE: requires ``strict_map_key=False`` to the msgpack
+        # unpacker since we have tuples as keys (not this makes the
+        # arbiter suscetible to hashdos):
+        # https://github.com/msgpack/msgpack-python#major-breaking-changes-in-msgpack-10
+        return {
+            '.'.join(key): val
+            for key, val in self._registry.items()
+        }
+
+    async def wait_for_actor(
+        self,
+        name: str,
+    ) -> list[UnwrappedAddress]:
+        '''
+        Wait for a particular actor to register.
+
+        This is a blocking call if no actor by the provided name is currently
+        registered.
+
+        '''
+        addrs: list[UnwrappedAddress] = []
+        addr: UnwrappedAddress
+
+        mailbox_info: str = 'Actor registry contact infos:\n'
+        for uid, addr in self._registry.items():
+            mailbox_info += (
+                f'|_uid: {uid}\n'
+                f'|_addr: {addr}\n\n'
+            )
+            if name == uid[0]:
+                addrs.append(addr)
+
+        if not addrs:
+            waiter = trio.Event()
+            self._waiters.setdefault(name, []).append(waiter)
+            await waiter.wait()
+
+            for uid in self._waiters[name]:
+                if not isinstance(uid, trio.Event):
+                    addrs.append(self._registry[uid])
+
+        log.runtime(mailbox_info)
+        return addrs
+
+    async def register_actor(
+        self,
+        uid: tuple[str, str],
+        addr: UnwrappedAddress
+    ) -> None:
+        uid = name, hash = (str(uid[0]), str(uid[1]))
+        waddr: Address = wrap_address(addr)
+        if not waddr.is_valid:
+            # should never be 0-dynamic-os-alloc
+            await debug.pause()
+
+        self._registry[uid] = addr
+
+        # pop and signal all waiter events
+        events = self._waiters.pop(name, [])
self._waiters.setdefault(name, []).append(uid)
for event in events:
if isinstance(event, trio.Event):
event.set()
async def unregister_actor(
self,
uid: tuple[str, str]
) -> None:
uid = (str(uid[0]), str(uid[1]))
entry: tuple = self._registry.pop(uid, None)
if entry is None:
log.warning(f'Request to de-register {uid} failed?')
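The register-then-wake dance in `wait_for_actor()`/`register_actor()` above can be sketched independently of tractor. A minimal, hypothetical stand-in using stdlib `asyncio` instead of `trio` (names like `MiniRegistry` are illustrative, not tractor APIs): a lookup with no match parks the caller on an `Event` keyed by name, and a later registration sets every parked event.

```python
# Hypothetical sketch of the registrar's "wait for a name" pattern
# using stdlib asyncio in place of trio/tractor.
import asyncio


class MiniRegistry:
    def __init__(self) -> None:
        # uid (name, uuid) -> addr, mirroring `Arbiter._registry`
        self._registry: dict[tuple[str, str], tuple[str, int]] = {}
        self._waiters: dict[str, list[asyncio.Event]] = {}

    def register(self, uid: tuple[str, str], addr: tuple[str, int]) -> None:
        self._registry[uid] = addr
        # wake every task blocked in `wait_for_actor()` on this name
        for ev in self._waiters.pop(uid[0], []):
            ev.set()

    async def wait_for_actor(self, name: str) -> list[tuple[str, int]]:
        addrs = [a for uid, a in self._registry.items() if uid[0] == name]
        if not addrs:
            ev = asyncio.Event()
            self._waiters.setdefault(name, []).append(ev)
            await ev.wait()
            # re-scan now that at least one matching uid registered
            addrs = [a for uid, a in self._registry.items() if uid[0] == name]
        return addrs


async def _demo() -> list[tuple[str, int]]:
    reg = MiniRegistry()
    waiter = asyncio.create_task(reg.wait_for_actor('svc'))
    await asyncio.sleep(0)  # let the waiter park on its Event
    reg.register(('svc', 'uuid-1'), ('127.0.0.1', 1616))
    return await waiter


found = asyncio.run(_demo())
```

Note the same caveat as the real code: a waiter only observes registrations, so a name that never registers blocks forever unless the caller adds its own timeout.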


@@ -34,11 +34,11 @@ from typing import (

 import trio
 from trio import TaskStatus

-from ..devx import (
+from .devx import (
     debug,
     pformat as _pformat
 )
-from tractor.runtime._state import (
+from tractor._state import (
     current_actor,
     is_main_process,
     is_root_process,
@@ -46,10 +46,10 @@ from tractor.runtime._state import (
     _runtime_vars,
 )
 from tractor.log import get_logger
-from tractor.discovery._addr import UnwrappedAddress
-from tractor.runtime._portal import Portal
-from tractor.runtime._runtime import Actor
-from ._entry import _mp_main
+from tractor._addr import UnwrappedAddress
+from tractor._portal import Portal
+from tractor._runtime import Actor
+from tractor._entry import _mp_main
 from tractor._exceptions import ActorFailure
 from tractor.msg import (
     types as msgtypes,
@@ -58,11 +58,11 @@ from tractor.msg import (

 if TYPE_CHECKING:
-    from tractor.ipc import (
+    from ipc import (
         _server,
         Channel,
     )
-    from tractor.runtime._supervise import ActorNursery
+    from ._supervise import ActorNursery

 ProcessType = TypeVar('ProcessType', mp.Process, trio.Process)


@@ -25,7 +25,6 @@ from contextvars import (
 from pathlib import Path
 from typing import (
     Any,
-    Callable,
     Literal,
     TYPE_CHECKING,
 )
@@ -33,14 +32,9 @@ from typing import (

 import platformdirs
 from trio.lowlevel import current_task
-from msgspec import (
-    field,
-    Struct,
-)

 if TYPE_CHECKING:
     from ._runtime import Actor
-    from .._context import Context
+    from ._context import Context


 # default IPC transport protocol settings
@@ -53,70 +47,9 @@ _def_tpt_proto: TransportProtocolKey = 'tcp'
 _current_actor: Actor|None = None  # type: ignore # noqa
 _last_actor_terminated: Actor|None = None

 # TODO: mk this a `msgspec.Struct`!
-# -[x] type out all fields obvi!
+# -[ ] type out all fields obvi!
 # -[ ] (eventually) mk wire-ready for monitoring?
-class RuntimeVars(Struct):
-    '''
-    Actor-(and thus process)-global runtime state.
-
-    This struct is relayed from parent to child during sub-actor
-    spawning and is a singleton instance per process.
-
-    Generally contains,
-    - root-actor indicator.
-    - comms-info: addrs for both (public) process/service-discovery
-      and in-tree contact with other actors.
-    - transport-layer IPC protocol server(s) settings.
-    - debug-mode settings for enabling sync breakpointing and any
-      surrounding REPL-fixture hooking.
-    - infected-`asyncio` via guest-mode toggle(s)/config.
-
-    '''
-    _is_root: bool = False  # bool
-    _root_mailbox: tuple[str, str|int] = (None, None)  # tuple[str|None, str|None]
-    _root_addrs: list[
-        tuple[str, str|int],
-    ] = []  # tuple[str|None, str|None]
-
-    # parent->chld ipc protocol caps
-    _enable_tpts: list[TransportProtocolKey] = field(
-        default_factory=lambda: [_def_tpt_proto],
-    )
-
-    # registrar info
-    _registry_addrs: list[tuple] = []
-
-    # `debug_mode: bool` settings
-    _debug_mode: bool = False  # bool
-    repl_fixture: bool|Callable = False  # |AbstractContextManager[bool]
-    # for `tractor.pause_from_sync()` & `breakpoint()` support
-    use_greenback: bool = False
-
-    # infected-`asyncio`-mode: `trio` running as guest.
-    _is_infected_aio: bool = False
-
-    def __setattr__(
-        self,
-        key,
-        val,
-    ) -> None:
-        breakpoint()
-        super().__setattr__(key, val)
-
-    def update(
-        self,
-        from_dict: dict|Struct,
-    ) -> None:
-        for attr, val in from_dict.items():
-            setattr(
-                self,
-                attr,
-                val,
-            )
-

 _runtime_vars: dict[str, Any] = {
     # root of actor-process tree info
     '_is_root': False,  # bool
@@ -140,23 +73,6 @@ _runtime_vars: dict[str, Any] = {
 }

-def get_runtime_vars(
-    as_dict: bool = True,
-) -> dict:
-    '''
-    Deliver a **copy** of the current `Actor`'s "runtime variables".
-
-    By default, for historical impl reasons, this delivers the `dict`
-    form, but the `RuntimeVars` struct should be utilized as possible
-    for future calls.
-
-    '''
-    if as_dict:
-        return dict(_runtime_vars)
-
-    return RuntimeVars(**_runtime_vars)
-

 def last_actor() -> Actor|None:
     '''
     Try to return last active `Actor` singleton
@@ -182,7 +98,7 @@ def current_actor(
         _current_actor is None
     ):
         msg: str = 'No local actor has been initialized yet?\n'
-        from .._exceptions import NoRuntime
+        from ._exceptions import NoRuntime

         if last := last_actor():
             msg += (
@@ -248,7 +164,7 @@ def current_ipc_ctx(
         not ctx
         and error_on_not_set
     ):
-        from .._exceptions import InternalError
+        from ._exceptions import InternalError
        raise InternalError(
             'No IPC context has been allocated for this task yet?\n'
             f'|_{current_task()}\n'


@@ -55,7 +55,7 @@ from tractor.msg import (
 )

 if TYPE_CHECKING:
-    from .runtime._runtime import Actor
+    from ._runtime import Actor
     from ._context import Context
     from .ipc import Channel


@@ -30,36 +30,36 @@ import warnings

 import trio

-from ..devx import (
+from .devx import (
     debug,
     pformat as _pformat,
 )
-from ..discovery._addr import (
+from ._addr import (
     UnwrappedAddress,
     mk_uuid,
 )
 from ._state import current_actor, is_main_process
-from ..log import get_logger, get_loglevel
+from .log import get_logger, get_loglevel
 from ._runtime import Actor
 from ._portal import Portal
-from ..trionics import (
+from .trionics import (
     is_multi_cancelled,
     collapse_eg,
 )
-from .._exceptions import (
+from ._exceptions import (
     ContextCancelled,
 )
-from .._root import (
+from ._root import (
     open_root_actor,
 )
 from . import _state
-from ..spawn import _spawn
+from . import _spawn

 if TYPE_CHECKING:
     import multiprocessing as mp
-    # from ..ipc._server import IPCServer
-    from ..ipc import IPCServer
+    # from .ipc._server import IPCServer
+    from .ipc import IPCServer

 log = get_logger()


@@ -26,7 +26,9 @@ import random
 from typing import (
     Type,
 )
-from tractor.discovery import _addr
+from tractor import (
+    _addr,
+)


 def get_rando_addr(


@@ -21,27 +21,17 @@ and applications.

 '''
 from functools import (
     partial,
-    wraps,
 )
 import inspect
 import platform
-from typing import (
-    Callable,
-    Type,
-)

 import pytest
 import tractor
 import trio
-import wrapt


-def tractor_test(
-    wrapped: Callable|None = None,
-    *,
-    # @tractor_test(<deco-params>)
-    timeout: float = 30,
-    hide_tb: bool = True,
-):
+def tractor_test(fn):
     '''
     Decorator for async test fns to decorator-wrap them as "native"
     looking sync funcs runnable by `pytest` and auto invoked with
@@ -55,18 +45,8 @@ def tractor_test(
     Basic deco use:
     ---------------

-    @tractor_test(
-        timeout=10,
-    )
-    async def test_whatever(
-        # fixture param declarations
-        loglevel: str,
-        start_method: str,
-        reg_addr: tuple,
-        tpt_proto: str,
-        debug_mode: bool,
-    ):
-        # already inside a root-actor runtime `trio.Task`
+    @tractor_test
+    async def test_whatever():
         await ...

@@ -75,7 +55,7 @@ def tractor_test(
     If any of the following fixtures are requested by the wrapped test
     fn (via normal func-args declaration),

-    - `reg_addr` (a socket addr tuple where registrar is listening)
+    - `reg_addr` (a socket addr tuple where arbiter is listening)
     - `loglevel` (logging level passed to tractor internals)
     - `start_method` (subprocess spawning backend)

@@ -87,69 +67,52 @@ def tractor_test(
     `tractor.open_root_actor()` funcargs.

     '''
-    __tracebackhide__: bool = hide_tb
-
-    # handle the decorator not called with () case.
-    # i.e. in `wrapt` support a deco-with-optional-args,
-    # https://wrapt.readthedocs.io/en/master/decorators.html#decorators-with-optional-arguments
-    if wrapped is None:
-        return wrapt.PartialCallableObjectProxy(
-            tractor_test,
-            timeout=timeout,
-            hide_tb=hide_tb
-        )
-
-    @wrapt.decorator
+    @wraps(fn)
     def wrapper(
-        wrapped: Callable,
-        instance: object|Type|None,
-        args: tuple,
-        kwargs: dict,
-    ):
-        __tracebackhide__: bool = hide_tb
-
-        # NOTE, ensure we inject any test-fn declared fixture names.
-        for kw in [
-            'reg_addr',
-            'loglevel',
-            'start_method',
-            'debug_mode',
-            'tpt_proto',
-            'timeout',
-        ]:
-            if kw in inspect.signature(wrapped).parameters:
-                assert kw in kwargs
-
-        start_method = kwargs.get('start_method')
-        if platform.system() == "Windows":
-            if start_method is None:
-                kwargs['start_method'] = 'trio'
-            elif start_method != 'trio':
-                raise ValueError(
-                    'ONLY the `start_method="trio"` is supported on Windows.'
-                )
-
-        # open a root-actor, passing certain
-        # `tractor`-runtime-settings, then invoke the test-fn body as
-        # the root-most task.
-        #
-        # https://wrapt.readthedocs.io/en/master/decorators.html#processing-function-arguments
-        async def _main(
-            *args,
-
-            # runtime-settings
-            loglevel: str|None = None,
-            reg_addr: tuple|None = None,
-            start_method: str|None = None,
-            debug_mode: bool = False,
-            tpt_proto: str|None = None,
-
-            **kwargs
-        ):
-            __tracebackhide__: bool = hide_tb
-            with trio.fail_after(timeout):
+        *args,
+        loglevel=None,
+        reg_addr=None,
+        start_method: str|None = None,
+        debug_mode: bool = False,
+        tpt_proto: str|None = None,
+        **kwargs,
+    ):
+        # __tracebackhide__ = True
+
+        # NOTE: inject any test func declared fixture
+        # names by manually checking!
+        if 'reg_addr' in inspect.signature(fn).parameters:
+            # injects test suite fixture value to test as well
+            # as `run()`
+            kwargs['reg_addr'] = reg_addr
+
+        if 'loglevel' in inspect.signature(fn).parameters:
+            # allows test suites to define a 'loglevel' fixture
+            # that activates the internal logging
+            kwargs['loglevel'] = loglevel
+
+        if start_method is None:
+            if platform.system() == "Windows":
+                start_method = 'trio'
+
+        if 'start_method' in inspect.signature(fn).parameters:
+            # set of subprocess spawning backends
+            kwargs['start_method'] = start_method
+
+        if 'debug_mode' in inspect.signature(fn).parameters:
+            kwargs['debug_mode'] = debug_mode
+
+        if 'tpt_proto' in inspect.signature(fn).parameters:
+            kwargs['tpt_proto'] = tpt_proto
+
+        if kwargs:
+            # use explicit root actor start
+            async def _main():
                 async with tractor.open_root_actor(
-                    # **kwargs,
                     registry_addrs=[reg_addr] if reg_addr else None,
                     loglevel=loglevel,
                     start_method=start_method,
@@ -158,31 +121,17 @@ def tractor_test(
                     debug_mode=debug_mode,
                 ):
-                    # invoke test-fn body IN THIS task
-                    await wrapped(
-                        *args,
-                        **kwargs,
-                    )
-
-    funcname = wrapped.__name__
-    if not inspect.iscoroutinefunction(wrapped):
-        raise TypeError(
-            f"Test-fn {funcname!r} must be an async-function !!"
-        )
-
-    # invoke runtime via a root task.
-    return trio.run(
-        partial(
-            _main,
-            *args,
-            **kwargs,
-        )
-    )
-
-    return wrapper(
-        wrapped,
-    )
+                    await fn(*args, **kwargs)
+
+            main = _main
+
+        else:
+            # use implicit root actor start
+            main = partial(fn, *args, **kwargs)
+
+        return trio.run(main)
+
+    return wrapper


 def pytest_addoption(
@@ -230,8 +179,7 @@ def pytest_addoption(

 def pytest_configure(config):
     backend = config.option.spawn_backend
-    from tractor.spawn._spawn import try_set_start_method
-    try_set_start_method(backend)
+    tractor._spawn.try_set_start_method(backend)

     # register custom marks to avoid warnings see,
     # https://docs.pytest.org/en/stable/how-to/writing_plugins.html#registering-custom-markers
@@ -277,8 +225,7 @@ def tpt_protos(request) -> list[str]:

     # XXX ensure we support the protocol by name via lookup!
     for proto_key in proto_keys:
-        from tractor.discovery import _addr
-        addr_type = _addr._address_types[proto_key]
+        addr_type = tractor._addr._address_types[proto_key]
         assert addr_type.proto_key == proto_key

     yield proto_keys
@@ -309,7 +256,7 @@ def tpt_proto(
     #     f'tpt-proto={proto_key!r}\n'
     # )
-    from tractor.runtime import _state
+    from tractor import _state
     if _state._def_tpt_proto != proto_key:
         _state._def_tpt_proto = proto_key
         _state._runtime_vars['_enable_tpts'] = [


@@ -45,15 +45,17 @@ from typing import (
 )

 import trio
-from tractor.runtime import _state
-from tractor import log as logmod
+from tractor import (
+    _state,
+    log as logmod,
+)
 from tractor.devx import debug

 log = logmod.get_logger()

 if TYPE_CHECKING:
-    from tractor.spawn._spawn import ProcessType
+    from tractor._spawn import ProcessType
     from tractor import (
         Actor,
         ActorNursery,


@@ -53,8 +53,8 @@ import trio
 from tractor._exceptions import (
     NoRuntime,
 )
-from tractor.runtime import _state
-from tractor.runtime._state import (
+from tractor import _state
+from tractor._state import (
     current_actor,
     debug_mode,
 )
@@ -76,7 +76,7 @@ from ._repl import (

 if TYPE_CHECKING:
     from trio.lowlevel import Task
-    from tractor.runtime._runtime import (
+    from tractor._runtime import (
         Actor,
     )


@@ -25,7 +25,7 @@ from functools import (
 import os

 import pdbp
-from tractor.runtime._state import (
+from tractor._state import (
     is_root_process,
 )


@@ -27,7 +27,7 @@ from typing import (
 )

 import trio
 from tractor.log import get_logger
-from tractor.runtime._state import (
+from tractor._state import (
     current_actor,
     is_root_process,
 )
@@ -44,7 +44,7 @@ if TYPE_CHECKING:
     from tractor.ipc import (
         Channel,
     )
-    from tractor.runtime._runtime import (
+    from tractor._runtime import (
         Actor,
     )


@@ -40,7 +40,7 @@ from trio.lowlevel import (
     Task,
 )
 from tractor._context import Context
-from tractor.runtime._state import (
+from tractor._state import (
     current_actor,
     debug_mode,
     is_root_process,


@@ -55,12 +55,12 @@ import tractor
 from tractor.log import get_logger
 from tractor.to_asyncio import run_trio_task_in_future
 from tractor._context import Context
-from tractor.runtime import _state
+from tractor import _state
 from tractor._exceptions import (
     NoRuntime,
     InternalError,
 )
-from tractor.runtime._state import (
+from tractor._state import (
     current_actor,
     current_ipc_ctx,
     is_root_process,
@@ -87,7 +87,7 @@ from ..pformat import (
 if TYPE_CHECKING:
     from trio.lowlevel import Task
     from threading import Thread
-    from tractor.runtime._runtime import (
+    from tractor._runtime import (
         Actor,
     )
     # from ._post_mortem import BoxedMaybeException


@@ -55,12 +55,12 @@ import tractor
 from tractor.to_asyncio import run_trio_task_in_future
 from tractor.log import get_logger
 from tractor._context import Context
-from tractor.runtime import _state
+from tractor import _state
 from tractor._exceptions import (
     DebugRequestError,
     InternalError,
 )
-from tractor.runtime._state import (
+from tractor._state import (
     current_actor,
     is_root_process,
 )
@@ -71,7 +71,7 @@ if TYPE_CHECKING:
     from tractor.ipc import (
         IPCServer,
     )
-    from tractor.runtime._runtime import (
+    from tractor._runtime import (
         Actor,
     )
     from ._repl import (
@@ -1013,7 +1013,7 @@ async def request_root_stdio_lock(
     DebugStatus.req_task = current_task()
     req_err: BaseException|None = None
     try:
-        from tractor.discovery._discovery import get_root
+        from tractor._discovery import get_root

         # NOTE: we need this to ensure that this task exits
         # BEFORE the REPl instance raises an error like
         # `bdb.BdbQuit` directly, OW you get a trio cs stack


@@ -1,26 +0,0 @@
-# tractor: structured concurrent "actors".
-# Copyright 2018-eternity Tyler Goodlet.
-
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU Affero General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU Affero General Public License for more details.
-
-# You should have received a copy of the GNU Affero General Public License
-# along with this program.  If not, see <https://www.gnu.org/licenses/>.
-
-'''
-Discovery (protocols) API for automatic addressing
-and location management of (service) actors.
-
-NOTE: to avoid circular imports, this ``__init__``
-does NOT eagerly import submodules. Use direct
-module paths like ``tractor.discovery._addr`` or
-``tractor.discovery._discovery`` instead.
-
-'''
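The "no eager submodule imports" note in that `__init__` is the usual cure for import cycles, and is commonly implemented with a module-level `__getattr__` (PEP 562). A hedged sketch of a helper such an `__init__.py` could use; this is not tractor's actual code:

```python
# Sketch: build a PEP 562 module-level `__getattr__` that imports
# `pkg_name.<attr>` lazily on first attribute access, so the package
# itself loads without touching (possibly cyclic) submodules.
import importlib


def lazy_submodule_getattr(pkg_name: str, submodules: set[str]):
    def __getattr__(attr: str):
        if attr in submodules:
            return importlib.import_module(f'{pkg_name}.{attr}')
        raise AttributeError(
            f'module {pkg_name!r} has no attribute {attr!r}'
        )
    return __getattr__


# demo against a stdlib package with a real submodule
getattr_hook = lazy_submodule_getattr('os', {'path'})
ospath = getattr_hook('path')
```

Inside a real `__init__.py` you would write `__getattr__ = lazy_submodule_getattr(__name__, {...})` at module scope.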


@@ -1,253 +0,0 @@
-# tractor: structured concurrent "actors".
-# Copyright 2018-eternity Tyler Goodlet.
-
-# This program is free software: you can redistribute it and/or
-# modify it under the terms of the GNU Affero General Public
-# License as published by the Free Software Foundation, either
-# version 3 of the License, or (at your option) any later
-# version.
-
-# This program is distributed in the hope that it will be
-# useful, but WITHOUT ANY WARRANTY; without even the implied
-# warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
-# PURPOSE. See the GNU Affero General Public License for more
-# details.
-
-# You should have received a copy of the GNU Affero General
-# Public License along with this program. If not, see
-# <https://www.gnu.org/licenses/>.
-
-'''
-Actor-registry for process-tree service discovery.
-
-The `Registrar` is a special `Actor` subtype that serves as
-the process-tree's name-registry, tracking actor
-name-to-address mappings so peers can discover each other.
-
-'''
-from __future__ import annotations
-
-from bidict import bidict
-import trio
-
-from ..runtime._runtime import Actor
-from ._addr import (
-    UnwrappedAddress,
-    Address,
-    wrap_address,
-)
-from ..devx import debug
-from ..log import get_logger
-
-log = get_logger('tractor')
-
-
-class Registrar(Actor):
-    '''
-    A special registrar `Actor` who can contact all other
-    actors within its immediate process tree and keeps
-    a registry of others meant to be discoverable in
-    a distributed application.
-
-    Normally the registrar is also the "root actor" and
-    thus always has access to the top-most-level actor
-    (process) nursery.
-
-    By default, the registrar is always initialized when
-    and if no other registrar socket addrs have been
-    specified to runtime init entry-points (such as
-    `open_root_actor()` or `open_nursery()`). Any time
-    a new main process is launched (and thus a new root
-    actor created) and, no existing registrar can be
-    contacted at the provided `registry_addr`, then
-    a new one is always created; however, if one can be
-    reached it is used.
-
-    Normally a distributed app requires at least one
-    registrar per logical host where for that given
-    "host space" (aka localhost IPC domain of addresses)
-    it is responsible for making all other host (local
-    address) bound actors *discoverable* to external
-    actor trees running on remote hosts.
-
-    '''
-    is_registrar = True
-
-    def is_registry(self) -> bool:
-        return self.is_registrar
-
-    def __init__(
-        self,
-        *args,
-        **kwargs,
-    ) -> None:
-        self._registry: bidict[
-            tuple[str, str],
-            UnwrappedAddress,
-        ] = bidict({})
-        self._waiters: dict[
-            str,
-            # either an event to sync to receiving an
-            # actor uid (which is filled in once the actor
-            # has successfully registered), or that uid
-            # after registry is complete.
-            list[trio.Event|tuple[str, str]]
-        ] = {}
-        super().__init__(*args, **kwargs)
-
-    async def find_actor(
-        self,
-        name: str,
-    ) -> UnwrappedAddress|None:
-        for uid, addr in self._registry.items():
-            if name in uid:
-                return addr
-        return None
-
-    async def get_registry(
-        self
-    ) -> dict[str, UnwrappedAddress]:
-        '''
-        Return current name registry.
-
-        This method is async to allow for cross-actor
-        invocation.
-
-        '''
-        # NOTE: requires ``strict_map_key=False`` to the
-        # msgpack unpacker since we have tuples as keys
-        # (note this makes the registrar susceptible to
-        # hashdos):
-        # https://github.com/msgpack/msgpack-python#major-breaking-changes-in-msgpack-10
-        return {
-            '.'.join(key): val
-            for key, val in self._registry.items()
-        }
-
-    async def wait_for_actor(
-        self,
-        name: str,
-    ) -> list[UnwrappedAddress]:
-        '''
-        Wait for a particular actor to register.
-
-        This is a blocking call if no actor by the
-        provided name is currently registered.
-
-        '''
-        addrs: list[UnwrappedAddress] = []
-        addr: UnwrappedAddress
-
-        mailbox_info: str = (
-            'Actor registry contact infos:\n'
-        )
-        for uid, addr in self._registry.items():
-            mailbox_info += (
-                f'|_uid: {uid}\n'
-                f'|_addr: {addr}\n\n'
-            )
-            if name == uid[0]:
-                addrs.append(addr)
-
-        if not addrs:
-            waiter = trio.Event()
-            self._waiters.setdefault(
-                name, []
-            ).append(waiter)
-            await waiter.wait()
-
-            for uid in self._waiters[name]:
-                if not isinstance(uid, trio.Event):
-                    addrs.append(
-                        self._registry[uid]
-                    )
-
-        log.runtime(mailbox_info)
-        return addrs
-
-    async def register_actor(
-        self,
-        uid: tuple[str, str],
-        addr: UnwrappedAddress
-    ) -> None:
-        uid = name, hash = (
-            str(uid[0]),
-            str(uid[1]),
-        )
-        waddr: Address = wrap_address(addr)
-        if not waddr.is_valid:
-            # should never be 0-dynamic-os-alloc
-            await debug.pause()
-
-        # XXX NOTE, value must also be hashable AND since
-        # `._registry` is a `bidict` values must be unique;
-        # use `.forceput()` to replace any prior (stale)
-        # entries that might map a different uid to the same
-        # addr (e.g. after an unclean shutdown or
-        # actor-restart reusing the same address).
-        self._registry.forceput(uid, tuple(addr))
-
-        # pop and signal all waiter events
-        events = self._waiters.pop(name, [])
-        self._waiters.setdefault(
-            name, []
-        ).append(uid)
-        for event in events:
-            if isinstance(event, trio.Event):
-                event.set()
-
-    async def unregister_actor(
-        self,
-        uid: tuple[str, str]
-    ) -> None:
-        uid = (str(uid[0]), str(uid[1]))
-        entry: tuple = self._registry.pop(
-            uid, None
-        )
-        if entry is None:
-            log.warning(
-                f'Request to de-register'
-                f' {uid!r} failed?'
-            )
-
-    async def delete_addr(
-        self,
-        addr: tuple[str, int|str]|list[str|int],
-    ) -> tuple[str, str]|None:
-        # NOTE: `addr` arrives as a `list` over IPC
-        # (msgpack deserializes tuples -> lists) so
-        # coerce to `tuple` for the bidict hash lookup.
-        uid: tuple[str, str]|None = (
-            self._registry.inverse.pop(
-                tuple(addr),
-                None,
-            )
-        )
-        if uid:
-            report: str = (
-                'Deleting registry-entry for,\n'
-            )
-        else:
-            report: str = (
-                'No registry entry for,\n'
-            )
-
-        log.warning(
-            report
-            +
-            f'{addr!r}@{uid!r}'
-        )
-        return uid
-
-
-# Backward compat alias
-Arbiter = Registrar
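The `Registrar` above leans on `bidict` so an addr can be mapped back to its uid (see `delete_addr()`'s `inverse.pop()`), and its comments call out two invariants: values must be unique, and `forceput()` must evict any stale pairing. A tiny pure-Python stand-in (a hypothetical minimal class, not the real `bidict` library) showing exactly those two behaviors:

```python
# Sketch: a minimal one-to-one map with `forceput()` + `inverse`,
# illustrating why a restarted actor reusing an addr evicts the
# stale uid entry instead of leaving two uids on one addr.
class TinyBidict:
    def __init__(self) -> None:
        self._fwd: dict = {}
        self.inverse: dict = {}  # addr -> uid reverse view

    def forceput(self, key, val) -> None:
        # evict any old pairings that would break one-to-one-ness
        old_val = self._fwd.pop(key, None)
        if old_val is not None:
            self.inverse.pop(old_val, None)
        old_key = self.inverse.pop(val, None)
        if old_key is not None:
            self._fwd.pop(old_key, None)
        self._fwd[key] = val
        self.inverse[val] = key


reg = TinyBidict()
reg.forceput(('svc', 'uuid-1'), ('127.0.0.1', 1616))
# actor restart reusing the same addr: the stale uid entry is evicted
reg.forceput(('svc', 'uuid-2'), ('127.0.0.1', 1616))
```

A plain `dict` would silently keep both uid keys pointing at the same addr, which is precisely the stale-entry case the `register_actor()` comment warns about.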


@@ -146,7 +146,7 @@ _pubtask2lock: dict[str, trio.StrictFIFOLock] = {}

 def pub(
-    wrapped: typing.Callable|None = None,
+    wrapped: typing.Callable | None = None,
     *,
     tasks: set[str] = set(),
 ):
@@ -244,12 +244,8 @@ def pub(
         task2lock[name] = trio.StrictFIFOLock()

     @wrapt.decorator
-    async def wrapper(
-        agen,
-        instance,
-        args,
-        kwargs,
-    ):
+    async def wrapper(agen, instance, args, kwargs):

         # XXX: this is used to extract arguments properly as per the
         # `wrapt` docs
         async def _execute(


@@ -39,7 +39,7 @@ from ._types import (
     transport_from_addr,
     transport_from_stream,
 )
-from tractor.discovery._addr import (
+from tractor._addr import (
     is_wrapped_addr,
     wrap_address,
     Address,


@@ -50,24 +50,26 @@ from ..devx.pformat import (
 from .._exceptions import (
     TransportClosed,
 )
-from ..runtime import _rpc
+from .. import _rpc
 from ..msg import (
     MsgType,
     Struct,
     types as msgtypes,
 )
 from ..trionics import maybe_open_nursery
-from ..runtime import _state
-from .. import log
-from ..discovery._addr import Address
+from .. import (
+    _state,
+    log,
+)
+from .._addr import Address
 from ._chan import Channel
 from ._transport import MsgTransport
 from ._uds import UDSAddress
 from ._tcp import TCPAddress

 if TYPE_CHECKING:
-    from ..runtime._runtime import Actor
-    from ..runtime._supervise import ActorNursery
+    from .._runtime import Actor
+    from .._supervise import ActorNursery


 log = log.get_logger()

@@ -355,7 +357,7 @@ async def handle_stream_from_peer(
     # and `MsgpackStream._inter_packets()` on a read from the
     # stream particularly when the runtime is first starting up
     # inside `open_root_actor()` where there is a check for
-    # a bound listener on the registrar addr. the reset will be
+    # a bound listener on the "arbiter" addr. the reset will be
     # because the handshake was never meant to take place.
     log.runtime(
         con_status
@@ -968,7 +970,7 @@ class Server(Struct):
     in `accept_addrs`.

     '''
-    from ..discovery._addr import (
+    from .._addr import (
         default_lo_addrs,
         wrap_address,
     )


@@ -54,7 +54,7 @@ from tractor.msg import (
 )

 if TYPE_CHECKING:
-    from tractor.discovery._addr import Address
+    from tractor._addr import Address

 log = get_logger()

@@ -225,7 +225,7 @@ class MsgpackTransport(MsgTransport):
             # not sure entirely why we need this but without it we
             # seem to be getting racy failures here on
-            # registrar name subs..
+            # arbiter/registry name subs..
             trio.BrokenResourceError,

         ) as trans_err:


@@ -53,14 +53,14 @@ from tractor.log import get_logger
 from tractor.ipc._transport import (
     MsgpackTransport,
 )
-from tractor.runtime._state import (
+from tractor._state import (
     get_rt_dir,
     current_actor,
     is_root_process,
 )

 if TYPE_CHECKING:
-    from tractor.runtime._runtime import Actor
+    from ._runtime import Actor


 # Platform-specific credential passing constants


@@ -47,7 +47,7 @@ import colorlog  # type: ignore
 # import colored_traceback.auto  # ?TODO, need better config?
 import trio

-from .runtime._state import current_actor
+from ._state import current_actor

 _default_loglevel: str = 'ERROR'


@@ -50,7 +50,7 @@ from tractor._exceptions import (
     _mk_recv_mte,
     pack_error,
 )
-from tractor.runtime._state import (
+from tractor._state import (
     current_ipc_ctx,
 )
 from ._codec import (


@@ -1,26 +0,0 @@
-# tractor: structured concurrent "actors".
-# Copyright 2018-eternity Tyler Goodlet.
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU Affero General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU Affero General Public License for more details.
-# You should have received a copy of the GNU Affero General Public License
-# along with this program.  If not, see <https://www.gnu.org/licenses/>.
-'''
-The actor runtime: core machinery for the
-actor-model implemented on a `trio` task runtime.
-NOTE: to avoid circular imports, this ``__init__``
-does NOT eagerly import submodules. Use direct
-module paths like ``tractor.runtime._state`` or
-``tractor.runtime._runtime`` instead.
-'''
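The deleted `__init__` above documents one approach to dodging circular imports: keep the package `__init__` import-light and have callers use direct module paths. A related, standard alternative is a PEP 562 module-level `__getattr__` that imports submodules only on first attribute access. The following is a generic, self-contained sketch of that pattern (the `lazypkg`/`_state` names are hypothetical, not tractor's actual code):

```python
# Sketch: lazy submodule resolution via PEP 562 `__getattr__`.
# A throwaway package is written to a temp dir so the behavior can be
# observed end-to-end; nothing here is tractor's real layout.
import importlib
import os
import sys
import tempfile
import textwrap


def build_demo_pkg() -> str:
    '''Write a tiny package whose `__init__` imports nothing eagerly.'''
    root = tempfile.mkdtemp()
    pkg = os.path.join(root, 'lazypkg')
    os.mkdir(pkg)
    with open(os.path.join(pkg, '__init__.py'), 'w') as f:
        f.write(textwrap.dedent('''
            import importlib

            def __getattr__(name: str):
                # resolve known submodules lazily, on first access
                if name in ('_state', '_runtime'):
                    return importlib.import_module(f'.{name}', __name__)
                raise AttributeError(name)
        '''))
    with open(os.path.join(pkg, '_state.py'), 'w') as f:
        f.write("current_actor = 'root'\n")
    return root


def demo() -> tuple[bool, str]:
    sys.path.insert(0, build_demo_pkg())
    import lazypkg
    # importing the package does NOT import the submodule..
    eager: bool = 'lazypkg._state' in sys.modules
    # ..but attribute access pulls it in on demand.
    value: str = lazypkg._state.current_actor
    return eager, value
```

Since nothing is imported at package-import time, cycles between the package root and its submodules cannot trigger during startup; they can only surface at first use.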


@@ -1,26 +0,0 @@
-# tractor: structured concurrent "actors".
-# Copyright 2018-eternity Tyler Goodlet.
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU Affero General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU Affero General Public License for more details.
-# You should have received a copy of the GNU Affero General Public License
-# along with this program.  If not, see <https://www.gnu.org/licenses/>.
-'''
-Actor process spawning machinery using
-multiple backends (trio, multiprocessing).
-NOTE: to avoid circular imports, this ``__init__``
-does NOT eagerly import submodules. Use direct
-module paths like ``tractor.spawn._spawn`` or
-``tractor.spawn._entry`` instead.
-'''


@@ -43,7 +43,7 @@ from tractor._exceptions import (
    AsyncioTaskExited,
    AsyncioCancelled,
)
-from tractor.runtime._state import (
+from tractor._state import (
    debug_mode,
    _runtime_vars,
)


@@ -35,7 +35,7 @@ from typing import (
)
import trio
-from tractor.runtime._state import current_actor
+from tractor._state import current_actor
from tractor.log import get_logger
from ._tn import maybe_open_nursery
# from ._beg import collapse_eg


@@ -273,11 +273,11 @@ wheels = [
[[package]]
name = "pygments"
-version = "2.20.0"
+version = "2.19.2"
source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/c3/b2/bc9c9196916376152d655522fdcebac55e66de6603a76a02bca1b6414f6c/pygments-2.20.0.tar.gz", hash = "sha256:6757cd03768053ff99f3039c1a36d6c0aa0b263438fcab17520b30a303a82b5f", size = 4955991, upload-time = "2026-03-29T13:29:33.898Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/b0/77/a5b8c569bf593b0140bde72ea885a803b82086995367bf2037de0159d924/pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887", size = 4968631, upload-time = "2025-06-21T13:39:12.283Z" }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/f4/7e/a72dd26f3b0f4f2bf1dd8923c85f7ceb43172af56d63c7383eb62b332364/pygments-2.20.0-py3-none-any.whl", hash = "sha256:81a9e26dd42fd28a23a2d169d86d7ac03b46e2f8b59ed4698fb4785f946d0176", size = 1231151, upload-time = "2026-03-29T13:29:30.038Z" },
+    { url = "https://files.pythonhosted.org/packages/c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b", size = 1225217, upload-time = "2025-06-21T13:39:07.939Z" },
]
[[package]]