Compare commits

No commits in common. "big_boi_docs" and "main" have entirely different histories.

big_boi_docs ... main
@ -1,38 +0,0 @@
# Docs TODOs

## Auto-sync README code examples with source

The `docs/README.rst` has inline code blocks that
duplicate actual example files (e.g.
`examples/infected_asyncio_echo_server.py`). Every time
the public API changes we have to manually sync both.

Sphinx's `literalinclude` directive can pull code directly
from source files:

```rst
.. literalinclude:: ../examples/infected_asyncio_echo_server.py
   :language: python
   :caption: examples/infected_asyncio_echo_server.py
```

Or to include only a specific function/section:

```rst
.. literalinclude:: ../examples/infected_asyncio_echo_server.py
   :language: python
   :pyobject: aio_echo_server
```

This way the docs always reflect the actual code without
manual syncing.

### Considerations
- `README.rst` is also rendered on GitHub/PyPI which do
  NOT support `literalinclude` - so we'd need a build
  step or a separate `_sphinx_readme.rst` (which already
  exists at `docs/github_readme/_sphinx_readme.rst`).
- Could use a pre-commit hook or CI step to extract code
  from examples into the README for GitHub rendering
  (see the sketch below).
- Another option: `sphinx-autodoc` style approach where
  docstrings from the actual module are pulled in.
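A minimal sketch of that pre-commit/CI extraction idea, assuming a
(not-yet-existing) marker-comment convention in `docs/README.rst`;
the marker names, script location and exit-code convention are all
hypothetical:

```python
#!/usr/bin/env python
# Hypothetical pre-commit/CI helper: splice example files into
# `docs/README.rst` between marker comments so the GitHub/PyPI
# rendering stays in sync without `literalinclude`.
#
# Assumed marker convention (not in the repo yet):
#   .. begin-include: examples/infected_asyncio_echo_server.py
#   .. end-include
import re
from pathlib import Path

README = Path('docs/README.rst')
BLOCK = re.compile(
    r'(?P<start>^\.\. begin-include: (?P<path>\S+)\n)'
    r'.*?'
    r'(?P<end>^\.\. end-include$)',
    re.DOTALL | re.MULTILINE,
)

def splice(match: re.Match) -> str:
    # indent the example source as an rst literal block
    src = Path(match['path']).read_text()
    body = '\n'.join(f'    {line}' for line in src.splitlines())
    return (
        f"{match['start']}"
        f"::\n\n{body}\n\n"
        f"{match['end']}"
    )

def main() -> int:
    old = README.read_text()
    new = BLOCK.sub(splice, old)
    if new != old:
        README.write_text(new)
        return 1  # non-zero so pre-commit flags the rewrite
    return 0

if __name__ == '__main__':
    raise SystemExit(main())
```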
@ -1,36 +0,0 @@
{
  "permissions": {
    "allow": [
      "Bash(date *)",
      "Bash(cp .claude/*)",
      "Bash(git diff *)",
      "Bash(git log *)",
      "Bash(git status)",
      "Bash(git remote:*)",
      "Bash(git stash:*)",
      "Bash(git mv:*)",
      "Bash(git rev-parse:*)",
      "Bash(test:*)",
      "Bash(ls:*)",
      "Bash(grep:*)",
      "Bash(find:*)",
      "Bash(ln:*)",
      "Bash(cat:*)",
      "Bash(mkdir:*)",
      "Bash(gh pr:*)",
      "Bash(gh api:*)",
      "Bash(gh issue:*)",
      "Bash(UV_PROJECT_ENVIRONMENT=py* uv sync:*)",
      "Bash(UV_PROJECT_ENVIRONMENT=py* uv run:*)",
      "Bash(echo EXIT:$?:*)",
      "Write(.claude/*commit_msg*)",
      "Write(.claude/git_commit_msg_LATEST.md)",
      "Skill(run-tests)",
      "Skill(close-wkt)",
      "Skill(open-wkt)",
      "Skill(prompt-io)"
    ],
    "deny": [],
    "ask": []
  }
}
@ -1,225 +0,0 @@
|
||||||
# Commit Message Style Guide for `tractor`
|
|
||||||
|
|
||||||
Analysis based on 500 recent commits from the `tractor` repository.
|
|
||||||
|
|
||||||
## Core Principles
|
|
||||||
|
|
||||||
Write commit messages that are technically precise yet casual in
|
|
||||||
tone. Use abbreviations and informal language while maintaining
|
|
||||||
clarity about what changed and why.
|
|
||||||
|
|
||||||
## Subject Line Format
|
|
||||||
|
|
||||||
### Length and Structure
|
|
||||||
- Target: ~50 chars with a hard-max of 67.
|
|
||||||
- Use backticks around code elements (72.2% of commits)
|
|
||||||
- Rarely use colons (5.2%), except for file prefixes
|
|
||||||
- End with '?' for uncertain changes (rare: 0.8%)
|
|
||||||
- End with '!' for important changes (rare: 2.0%)
|
|
||||||
|
|
||||||
### Opening Verbs (Present Tense)
|
|
||||||
|
|
||||||
Most common verbs from analysis:
|
|
||||||
- `Add` (14.4%) - wholly new features/functionality
|
|
||||||
- `Use` (4.4%) - adopt new approach/tool
|
|
||||||
- `Drop` (3.6%) - remove code/feature
|
|
||||||
- `Fix` (2.4%) - bug fixes
|
|
||||||
- `Move`/`Mv` (3.6%) - relocate code
|
|
||||||
- `Adjust` (2.0%) - minor tweaks
|
|
||||||
- `Update` (1.6%) - enhance existing feature
|
|
||||||
- `Bump` (1.2%) - dependency updates
|
|
||||||
- `Rename` (1.2%) - identifier changes
|
|
||||||
- `Set` (1.2%) - configuration changes
|
|
||||||
- `Handle` (1.0%) - add handling logic
|
|
||||||
- `Raise` (1.0%) - add error raising
|
|
||||||
- `Pass` (0.8%) - pass parameters/values
|
|
||||||
- `Support` (0.8%) - add support for something
|
|
||||||
- `Hide` (1.4%) - make private/internal
|
|
||||||
- `Always` (1.4%) - enforce consistent behavior
|
|
||||||
- `Mk` (1.4%) - make/create (abbreviated)
|
|
||||||
- `Start` (1.0%) - begin implementation
|
|
||||||
|
|
||||||
Other frequent verbs: `More`, `Change`, `Extend`, `Disable`, `Log`,
|
|
||||||
`Enable`, `Ensure`, `Expose`, `Allow`
|
|
||||||
|
|
||||||
### Backtick Usage
|
|
||||||
|
|
||||||
Always use backticks for:
|
|
||||||
- Module names: `trio`, `asyncio`, `msgspec`, `greenback`, `stackscope`
|
|
||||||
- Class names: `Context`, `Actor`, `Address`, `PldRx`, `SpawnSpec`
|
|
||||||
- Method names: `.pause_from_sync()`, `._pause()`, `.cancel()`
|
|
||||||
- Function names: `breakpoint()`, `collapse_eg()`, `open_root_actor()`
|
|
||||||
- Decorators: `@acm`, `@context`
|
|
||||||
- Exceptions: `Cancelled`, `TransportClosed`, `MsgTypeError`
|
|
||||||
- Keywords: `finally`, `None`, `False`
|
|
||||||
- Variable names: `tn`, `debug_mode`
|
|
||||||
- Complex expressions: `trio.Cancelled`, `asyncio.Task`
|
|
||||||
|
|
||||||
Most backticked terms in tractor:
|
|
||||||
`trio`, `asyncio`, `Context`, `.pause_from_sync()`, `tn`,
|
|
||||||
`._pause()`, `breakpoint()`, `collapse_eg()`, `Actor`, `@acm`,
|
|
||||||
`.cancel()`, `Cancelled`, `open_root_actor()`, `greenback`
|
|
||||||
|
|
||||||
### Examples
|
|
||||||
|
|
||||||
Good subject lines:
|
|
||||||
```
|
|
||||||
Add `uds` to `._multiaddr`, tweak typing
|
|
||||||
Drop `DebugStatus.shield` attr, add `.req_finished`
|
|
||||||
Use `stackscope` for all actor-tree rendered "views"
|
|
||||||
Fix `.to_asyncio` inter-task-cancellation!
|
|
||||||
Bump `ruff.toml` to target py313
|
|
||||||
Mv `load_module_from_path()` to new `._code_load` submod
|
|
||||||
Always use `tuple`-cast for singleton parent addrs
|
|
||||||
```
|
|
||||||
|
|
||||||
## Body Format
|
|
||||||
|
|
||||||
### General Structure
|
|
||||||
- 43.2% of commits have no body (simple changes)
|
|
||||||
- Use blank line after subject
|
|
||||||
- Max line length: 67 chars
|
|
||||||
- Use `-` bullets for lists (28.0% of commits)
|
|
||||||
- Rarely use `*` bullets (2.4%)
|
|
||||||
|
|
||||||
### Section Markers
|
|
||||||
|
|
||||||
Use these markers to organize longer commit bodies:
|
|
||||||
- `Also,` (most common: 26 occurrences)
|
|
||||||
- `Other,` (13 occurrences)
|
|
||||||
- `Deats,` (11 occurrences) - for implementation details
|
|
||||||
- `Further,` (7 occurrences)
|
|
||||||
- `TODO,` (3 occurrences)
|
|
||||||
- `Impl details,` (2 occurrences)
|
|
||||||
- `Notes,` (1 occurrence)
|
|
||||||
|
|
||||||
### Common Abbreviations
|
|
||||||
|
|
||||||
Use these freely (sorted by frequency):
|
|
||||||
- `msg` (63) - message
|
|
||||||
- `bg` (37) - background
|
|
||||||
- `ctx` (30) - context
|
|
||||||
- `impl` (27) - implementation
|
|
||||||
- `mod` (26) - module
|
|
||||||
- `obvi` (17) - obviously
|
|
||||||
- `tn` (16) - task nursery
|
|
||||||
- `fn` (15) - function
|
|
||||||
- `vs` (15) - versus
|
|
||||||
- `bc` (14) - because
|
|
||||||
- `var` (14) - variable
|
|
||||||
- `prolly` (9) - probably
|
|
||||||
- `ep` (6) - entry point
|
|
||||||
- `OW` (5) - otherwise
|
|
||||||
- `rn` (4) - right now
|
|
||||||
- `sig` (4) - signal/signature
|
|
||||||
- `deps` (3) - dependencies
|
|
||||||
- `iface` (2) - interface
|
|
||||||
- `subproc` (2) - subprocess
|
|
||||||
- `tho` (2) - though
|
|
||||||
- `ofc` (2) - of course
|
|
||||||
|
|
||||||
### Tone and Style
|
|
||||||
|
|
||||||
- Casual but technical (use `XD` for humor: 23 times)
|
|
||||||
- Use `..` for trailing thoughts (108 occurrences)
|
|
||||||
- Use `Woops,` to acknowledge mistakes (4 subject lines)
|
|
||||||
- Don't be afraid to show personality while being precise
|
|
||||||
|
|
||||||
### Example Bodies
|
|
||||||
|
|
||||||
Simple with bullets:
|
|
||||||
```
|
|
||||||
Add `multiaddr` and bump up some deps
|
|
||||||
|
|
||||||
Since we're planning to use it for (discovery)
|
|
||||||
addressing, allowing replacement of the hacky (pretend)
|
|
||||||
attempt in `tractor._multiaddr` Bp
|
|
||||||
|
|
||||||
Also pin some deps,
|
|
||||||
- make us py312+
|
|
||||||
- use `pdbp` with my frame indexing fix.
|
|
||||||
- mv to latest `xonsh` for fancy cmd/suggestion injections.
|
|
||||||
|
|
||||||
Bump lock file to match obvi!
|
|
||||||
```
|
|
||||||
|
|
||||||
With section markers:
|
|
||||||
```
|
|
||||||
Use `stackscope` for all actor-tree rendered "views"
|
|
||||||
|
|
||||||
Instead of the (much more) limited and hacky `.devx._code`
|
|
||||||
impls, move to using the new `.devx._stackscope` API which
|
|
||||||
wraps the `stackscope` project.
|
|
||||||
|
|
||||||
Deats,
|
|
||||||
- make new `stackscope.extract_stack()` wrapper
|
|
||||||
- port over frame-descing to `_stackscope.pformat_stack()`
|
|
||||||
- move `PdbREPL` to use `stackscope` render approach
|
|
||||||
- update tests for new stack output format
|
|
||||||
|
|
||||||
Also,
|
|
||||||
- tweak log formatting for consistency
|
|
||||||
- add typing hints throughout
|
|
||||||
```
|
|
||||||
|
|
||||||
## Special Patterns
|
|
||||||
|
|
||||||
### WIP Commits
|
|
||||||
Rare (0.2%) - avoid committing WIP if possible
|
|
||||||
|
|
||||||
### Merge Commits
|
|
||||||
Auto-generated (4.4%), don't worry about style
|
|
||||||
|
|
||||||
### File References
|
|
||||||
- Use `module.py` or `.submodule` style
|
|
||||||
- Rarely use `file.py:line` references (0 in analysis)
|
|
||||||
|
|
||||||
### Links
|
|
||||||
- GitHub links used sparingly (3 total)
|
|
||||||
- Prefer code references over external links
|
|
||||||
|
|
||||||
## Footer
|
|
||||||
|
|
||||||
The default footer should credit `claude` (you) for helping generate
|
|
||||||
the commit msg content:
|
|
||||||
|
|
||||||
```
|
|
||||||
(this commit msg was generated in some part by [`claude-code`][claude-code-gh])
|
|
||||||
[claude-code-gh]: https://github.com/anthropics/claude-code
|
|
||||||
```
|
|
||||||
|
|
||||||
Further, if the patch was solely or in part written
|
|
||||||
by `claude`, instead add:
|
|
||||||
|
|
||||||
```
|
|
||||||
(this patch was generated in some part by [`claude-code`][claude-code-gh])
|
|
||||||
[claude-code-gh]: https://github.com/anthropics/claude-code
|
|
||||||
```
|
|
||||||
|
|
||||||
## Summary Checklist
|
|
||||||
|
|
||||||
Before committing, verify:
|
|
||||||
- [ ] Subject line uses present tense verb
|
|
||||||
- [ ] Subject line ~50 chars (hard max 67)
|
|
||||||
- [ ] Code elements wrapped in backticks
|
|
||||||
- [ ] Body lines ≤67 chars
|
|
||||||
- [ ] Abbreviations used where natural
|
|
||||||
- [ ] Casual yet precise tone
|
|
||||||
- [ ] Section markers if body >3 paragraphs
|
|
||||||
- [ ] Technical accuracy maintained
|
|
||||||
|
|
||||||
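The two length rules are the only mechanically checkable items in the
checklist; here's a hedged sketch of a `commit-msg` hook that enforces
just those (the hook script itself is hypothetical, not part of the
repo):

```python
#!/usr/bin/env python
# Hypothetical `.git/hooks/commit-msg` helper enforcing the two
# mechanical rules above: subject <= 67 chars, body lines <= 67
# chars. Everything else on the checklist needs a human.
import sys
from pathlib import Path

MAX = 67

def main(msg_file: str) -> int:
    lines = Path(msg_file).read_text().splitlines()
    # drop comment lines git adds to the message template
    lines = [l for l in lines if not l.startswith('#')]
    if not lines:
        return 0

    problems: list[str] = []
    if len(lines[0]) > MAX:
        problems.append(f'subject is {len(lines[0])} chars (max {MAX})')
    for i, line in enumerate(lines[1:], start=2):
        if len(line) > MAX:
            problems.append(f'line {i} is {len(line)} chars (max {MAX})')

    for p in problems:
        print(f'commit-msg: {p}', file=sys.stderr)
    return 1 if problems else 0

if __name__ == '__main__':
    raise SystemExit(main(sys.argv[1]))
```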
## Analysis Metadata
|
|
||||||
|
|
||||||
```
|
|
||||||
Source: tractor repository
|
|
||||||
Commits analyzed: 500
|
|
||||||
Date range: 2019-2025
|
|
||||||
Analysis date: 2026-02-08
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
(this style guide was generated by [`claude-code`][claude-code-gh]
|
|
||||||
analyzing commit history)
|
|
||||||
|
|
||||||
[claude-code-gh]: https://github.com/anthropics/claude-code
|
|
||||||
|
|
@ -1,231 +0,0 @@
---
name: conc-anal
description: >
  Concurrency analysis for tractor's trio-based
  async primitives. Trace task scheduling across
  checkpoint boundaries, identify race windows in
  shared mutable state, and verify synchronization
  correctness. Invoke on code segments the user
  points at, OR proactively when reviewing/writing
  concurrent cache, lock, or multi-task acm code.
argument-hint: "[file:line-range or function name]"
allowed-tools:
  - Read
  - Grep
  - Glob
  - Task
---

Perform a structured concurrency analysis on the
target code. This skill should be invoked:

- **On demand**: user points at a code segment
  (file:lines, function name, or pastes a snippet)
- **Proactively**: when writing or reviewing code
  that touches shared mutable state across trio
  tasks — especially `_Cache`, locks, events, or
  multi-task `@acm` lifecycle management

## 0. Identify the target

If the user provides a file:line-range or function
name, read that code. If not explicitly provided,
identify the relevant concurrent code from context
(e.g. the current diff, a failing test, or the
function under discussion).

## 1. Inventory shared mutable state

List every piece of state that is accessed by
multiple tasks. For each, note:

- **What**: the variable/dict/attr (e.g.
  `_Cache.values`, `_Cache.resources`,
  `_Cache.users`)
- **Scope**: class-level, module-level, or
  closure-captured
- **Writers**: which tasks/code-paths mutate it
- **Readers**: which tasks/code-paths read it
- **Guarded by**: which lock/event/ordering
  protects it (or "UNGUARDED" if none)

Format as a table:

```
| State               | Writers         | Readers         | Guard          |
|---------------------|-----------------|-----------------|----------------|
| _Cache.values       | run_ctx, moc¹   | moc             | ctx_key lock   |
| _Cache.resources    | run_ctx, moc    | moc, run_ctx    | UNGUARDED      |
```

¹ `moc` = `maybe_open_context`

## 2. Map checkpoint boundaries

For each code path through the target, mark every
**checkpoint** — any `await` expression where trio
can switch to another task. Use line numbers:

```
L325: await lock.acquire() ← CHECKPOINT
L395: await service_tn.start(...) ← CHECKPOINT
L411: lock.release() ← (not a checkpoint, but changes lock state)
L414: yield (False, yielded) ← SUSPEND (caller runs)
L485: no_more_users.set() ← (wakes run_ctx, no switch yet)
```

**Key trio scheduling rules to apply:**
- `Event.set()` makes waiters *ready* but does NOT
  switch immediately
- `lock.release()` is not a checkpoint
- `await sleep(0)` IS a checkpoint
- Code in `finally` blocks CAN have checkpoints
  (unlike asyncio)
- `await` inside `except` blocks can be
  `trio.Cancelled`-masked
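A tiny runnable illustration of the first and third rules above
(`Event.set()` wakes without switching; `sleep(0)` is the checkpoint
that actually lets the waiter run); all names here are illustrative:

```python
import trio
from trio.testing import wait_all_tasks_blocked

async def main():
    evt = trio.Event()
    log: list[str] = []

    async def waiter():
        await evt.wait()
        log.append('waiter ran')

    async with trio.open_nursery() as tn:
        tn.start_soon(waiter)
        await wait_all_tasks_blocked()  # waiter is now parked on the event

        evt.set()
        log.append('after set()')  # no checkpoint since .set(), so the waiter can't have run
        await trio.sleep(0)        # this IS a checkpoint; the waiter becomes schedulable
    # the nursery exit waits for `waiter` to finish

    assert log[0] == 'after set()'  # guaranteed: set() did not switch
    assert 'waiter ran' in log      # waiter completed by nursery exit

trio.run(main)
```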
## 3. Trace concurrent task schedules

Write out the **interleaved execution trace** for
the problematic scenario. Number each step and tag
which task executes it:

```
[Task A] 1. acquires lock
[Task A] 2. cache miss → allocates resources
[Task A] 3. releases lock
[Task A] 4. yields to caller
[Task A] 5. caller exits → finally runs
[Task A] 6. users-- → 0, sets no_more_users
[Task A] 7. pops lock from _Cache.locks
[run_ctx] 8. wakes from no_more_users.wait()
[run_ctx] 9. values.pop(ctx_key)
[run_ctx] 10. acm __aexit__ → CHECKPOINT
[Task B] 11. creates NEW lock (old one popped)
[Task B] 12. acquires immediately
[Task B] 13. values[ctx_key] → KeyError
[Task B] 14. resources[ctx_key] → STILL EXISTS
[Task B] 15. 💥 RuntimeError
```

Identify the **race window**: the range of steps
where state is inconsistent. In the example above,
steps 9–10 are the window (values gone, resources
still alive).

## 4. Classify the bug

Categorize what kind of concurrency issue this is:

- **TOCTOU** (time-of-check-to-time-of-use): state
  changes between a check and the action based on it
- **Stale reference**: a task holds a reference to
  state that another task has invalidated
- **Lifetime mismatch**: a synchronization primitive
  (lock, event) has a shorter lifetime than the
  state it's supposed to protect
- **Missing guard**: shared state is accessed
  without any synchronization
- **Atomicity gap**: two operations that should be
  atomic have a checkpoint between them

## 5. Propose fixes

For each proposed fix, provide:

- **Sketch**: pseudocode or diff showing the change
- **How it closes the window**: which step(s) from
  the trace it eliminates or reorders
- **Tradeoffs**: complexity, perf, new edge cases,
  impact on other code paths
- **Risk**: what could go wrong (deadlocks, new
  races, cancellation issues)

Rate each fix: `[simple|moderate|complex]` impl
effort.

## 6. Output format

Structure the full analysis as:

```markdown
## Concurrency analysis: `<target>`

### Shared state
<table from step 1>

### Checkpoints
<list from step 2>

### Race trace
<interleaved trace from step 3>

### Classification
<bug type from step 4>

### Fixes
<proposals from step 5>
```

## Tractor-specific patterns to watch

These are known problem areas in tractor's
concurrency model. Flag them when encountered:

### `_Cache` lock vs `run_ctx` lifetime

The `_Cache.locks` entry is managed by
`maybe_open_context` callers, but `run_ctx` runs
in `service_tn` — a different task tree. Lock
pop/release in the caller's `finally` does NOT
wait for `run_ctx` to finish tearing down. Any
state that `run_ctx` cleans up in its `finally`
(e.g. `resources.pop()`) is vulnerable to
re-entry races after the lock is popped.

### `values.pop()` → acm `__aexit__` → `resources.pop()` gap

In `_Cache.run_ctx`, the inner `finally` pops
`values`, then the acm's `__aexit__` runs (which
has checkpoints), then the outer `finally` pops
`resources`. This creates a window where `values`
is gone but `resources` still exists — a classic
atomicity gap.
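A stripped-down, tractor-free sketch of that gap, assuming nothing
beyond `trio`; the dict names only mirror the `_Cache` attrs and this
is not the real `run_ctx` code:

```python
import trio

async def main():
    values: dict[str, str] = {'k': 'cached'}
    resources: dict[str, str] = {'k': 'live-resource'}
    in_aexit = trio.Event()

    async def teardown():
        values.pop('k')            # "inner finally"
        in_aexit.set()
        await trio.sleep(1)        # checkpoint, standing in for the acm's `__aexit__`
        resources.pop('k')         # "outer finally"

    async with trio.open_nursery() as tn:
        tn.start_soon(teardown)
        await in_aexit.wait()      # land exactly inside the window
        assert 'k' not in values   # cache entry already gone..
        assert 'k' in resources    # ..but the resource still "exists": the race window
        tn.cancel_scope.cancel()   # no need to wait out the fake teardown

trio.run(main)
```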
### Global vs per-key counters

`_Cache.users` as a single `int` (pre-fix) meant
that users of different `ctx_key`s inflated each
other's counts, preventing teardown when one key's
users hit zero. Always verify that per-key state
(`users`, `locks`) is actually keyed on `ctx_key`
and not on `fid` or some broader key.

### `Event.set()` wakes but doesn't switch

`trio.Event.set()` makes waiting tasks *ready* but
the current task continues executing until its next
checkpoint. Code between `.set()` and the next
`await` runs atomically from the scheduler's
perspective. Use this to your advantage (or watch
for bugs where code assumes the woken task runs
immediately).

### `except` block checkpoint masking

`await` expressions inside `except` handlers can
be masked by `trio.Cancelled`. If a `finally`
block runs from an `except` and contains
`lock.release()`, the release happens — but any
`await` after it in the same `except` may be
swallowed. This is why `maybe_open_context`'s
cache-miss path does `lock.release()` in a
`finally` inside the `except KeyError`.

### Cancellation in `finally`

Unlike asyncio, trio allows checkpoints in
`finally` blocks. This means `finally` cleanup
that does `await` can itself be cancelled (e.g.
by nursery shutdown). Watch for cleanup code that
assumes it will run to completion.
@ -1,241 +0,0 @@
|
||||||
# PR/Patch-Request Description Format Reference
|
|
||||||
|
|
||||||
Canonical structure for `tractor` patch-request
|
|
||||||
descriptions, designed to work across GitHub,
|
|
||||||
Gitea, SourceHut, and GitLab markdown renderers.
|
|
||||||
|
|
||||||
**Line length: wrap at 72 chars** for all prose
|
|
||||||
content (Summary bullets, Motivation paragraphs,
|
|
||||||
Scopes bullets, etc.). Fill lines *to* 72 — don't
|
|
||||||
stop short at 50-65. Only raw URLs in
|
|
||||||
reference-link definitions may exceed this.
|
|
||||||
|
|
||||||
## Template
|
|
||||||
|
|
||||||
```markdown
|
|
||||||
<!-- pr-msg-meta
|
|
||||||
branch: <branch-name>
|
|
||||||
base: <base-branch>
|
|
||||||
submitted:
|
|
||||||
github: ___
|
|
||||||
gitea: ___
|
|
||||||
srht: ___
|
|
||||||
-->
|
|
||||||
|
|
||||||
## <Title: present-tense verb + backticked code>
|
|
||||||
|
|
||||||
### Summary
|
|
||||||
- [<hash>][<hash>] Description of change ending
|
|
||||||
with period.
|
|
||||||
- [<hash>][<hash>] Another change description
|
|
||||||
ending with period.
|
|
||||||
- [<hash>][<hash>] [<hash>][<hash>] Multi-commit
|
|
||||||
change description.
|
|
||||||
|
|
||||||
### Motivation
|
|
||||||
<1-2 paragraphs: problem/limitation first,
|
|
||||||
then solution. Hard-wrap at 72 chars.>
|
|
||||||
|
|
||||||
### Scopes changed
|
|
||||||
- [<hash>][<hash>] `pkg.mod.func()` — what
|
|
||||||
changed.
|
|
||||||
* [<hash>][<hash>] Also adjusts
|
|
||||||
`.related_thing()` in same module.
|
|
||||||
- [<hash>][<hash>] `tests.test_mod` — new/changed
|
|
||||||
test coverage.
|
|
||||||
|
|
||||||
<!--
|
|
||||||
### Cross-references
|
|
||||||
Also submitted as
|
|
||||||
[github-pr][] | [gitea-pr][] | [srht-patch][].
|
|
||||||
|
|
||||||
### Links
|
|
||||||
- [relevant-issue-or-discussion](url)
|
|
||||||
- [design-doc-or-screenshot](url)
|
|
||||||
-->
|
|
||||||
|
|
||||||
(this pr content was generated in some part by
|
|
||||||
[`claude-code`][claude-code-gh])
|
|
||||||
|
|
||||||
[<hash>]: https://<service>/<owner>/<repo>/commit/<hash>
|
|
||||||
[claude-code-gh]: https://github.com/anthropics/claude-code
|
|
||||||
|
|
||||||
<!-- cross-service pr refs (fill after submit):
|
|
||||||
[github-pr]: https://github.com/<owner>/<repo>/pull/___
|
|
||||||
[gitea-pr]: https://<host>/<owner>/<repo>/pulls/___
|
|
||||||
[srht-patch]: https://git.sr.ht/~<owner>/<repo>/patches/___
|
|
||||||
-->
|
|
||||||
```
|
|
||||||
|
|
||||||
## Markdown Reference-Link Strategy
|
|
||||||
|
|
||||||
Use reference-style links for ALL commit hashes
|
|
||||||
and cross-service PR refs to ensure cross-service
|
|
||||||
compatibility:
|
|
||||||
|
|
||||||
**Inline usage** (in bullets):
|
|
||||||
```markdown
|
|
||||||
- [f3726cf9][f3726cf9] Add `reg_err_types()`
|
|
||||||
for custom exc lookup.
|
|
||||||
```
|
|
||||||
|
|
||||||
**Definition** (bottom of document):
|
|
||||||
```markdown
|
|
||||||
[f3726cf9]: https://github.com/goodboy/tractor/commit/f3726cf9
|
|
||||||
```
|
|
||||||
|
|
||||||
### Why reference-style?
|
|
||||||
- Keeps prose readable without long inline URLs.
|
|
||||||
- All URLs in one place — trivially swappable
|
|
||||||
per-service.
|
|
||||||
- Most git services auto-link bare SHAs anyway,
|
|
||||||
but explicit refs guarantee it works in *any*
|
|
||||||
md renderer.
|
|
||||||
- The `[hash][hash]` form is self-documenting —
|
|
||||||
display text matches the ref ID.
|
|
||||||
- Cross-service PR refs use the same mechanism:
|
|
||||||
`[github-pr][]` resolves via a ref-link def
|
|
||||||
at the bottom, trivially fillable post-submit.
|
|
||||||
|
|
||||||
## Cross-Service PR Placeholder Mechanism
|
|
||||||
|
|
||||||
The generated description includes three layers
|
|
||||||
of cross-service support, all using native md
|
|
||||||
reference-links:
|
|
||||||
|
|
||||||
### 1. Metadata comment (top of file)
|
|
||||||
|
|
||||||
```markdown
|
|
||||||
<!-- pr-msg-meta
|
|
||||||
branch: remote_exc_type_registry
|
|
||||||
base: main
|
|
||||||
submitted:
|
|
||||||
github: ___
|
|
||||||
gitea: ___
|
|
||||||
srht: ___
|
|
||||||
-->
|
|
||||||
```
|
|
||||||
|
|
||||||
A YAML-ish HTML comment block. The `___`
|
|
||||||
placeholders get filled with PR/patch numbers
|
|
||||||
after submission. Machine-parseable for tooling
|
|
||||||
(e.g. `gish`) but invisible in rendered md.
|
|
||||||
|
|
||||||
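A hedged sketch of how tooling could read that block; this is
illustrative only, not the actual `gish` parser:

```python
# Hypothetical parser for the `pr-msg-meta` comment block shown
# above; a sketch of how tooling *could* read it into a dict.
import re

META_RE = re.compile(r'<!--\s*pr-msg-meta\n(?P<body>.*?)-->', re.DOTALL)

def parse_pr_meta(md: str) -> dict:
    m = META_RE.search(md)
    if not m:
        return {}
    meta: dict = {}
    section = meta
    for line in m['body'].splitlines():
        if not line.strip():
            continue
        indent = len(line) - len(line.lstrip())
        key, _, val = line.strip().partition(':')
        val = val.strip()
        if indent == 0:
            if val:
                meta[key] = val
                section = meta
            else:
                # nested block like `submitted:`
                section = meta.setdefault(key, {})
        else:
            section[key] = val or None
    return meta

# e.g. parse_pr_meta(open('pr_msg_LATEST.md').read()) →
# {'branch': 'remote_exc_type_registry', 'base': 'main',
#  'submitted': {'github': '___', 'gitea': '___', 'srht': '___'}}
```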
### 2. Cross-references section (in body)
|
|
||||||
|
|
||||||
```markdown
|
|
||||||
<!--
|
|
||||||
### Cross-references
|
|
||||||
Also submitted as
|
|
||||||
[github-pr][] | [gitea-pr][] | [srht-patch][].
|
|
||||||
-->
|
|
||||||
```
|
|
||||||
|
|
||||||
Commented out at generation time. After submitting
|
|
||||||
to multiple services, uncomment and the ref-links
|
|
||||||
resolve via the stubs at the bottom.
|
|
||||||
|
|
||||||
### 3. Ref-link stubs (bottom of file)
|
|
||||||
|
|
||||||
```markdown
|
|
||||||
<!-- cross-service pr refs (fill after submit):
|
|
||||||
[github-pr]: https://github.com/goodboy/tractor/pull/___
|
|
||||||
[gitea-pr]: https://pikers.dev/goodboy/tractor/pulls/___
|
|
||||||
[srht-patch]: https://git.sr.ht/~goodboy/tractor/patches/___
|
|
||||||
-->
|
|
||||||
```
|
|
||||||
|
|
||||||
Commented out with `___` number placeholders.
|
|
||||||
After submission: uncomment, replace `___` with
|
|
||||||
the actual number. Each service-specific copy
|
|
||||||
fills in all services' numbers so any copy can
|
|
||||||
cross-reference the others.
|
|
||||||
|
|
||||||
### Post-submission file layout
|
|
||||||
|
|
||||||
```
|
|
||||||
pr_msg_LATEST.md # latest draft (skill root)
|
|
||||||
msgs/
|
|
||||||
20260325T002027Z_mybranch_pr_msg.md # timestamped
|
|
||||||
github/
|
|
||||||
42_pr_msg.md # github PR #42
|
|
||||||
gitea/
|
|
||||||
17_pr_msg.md # gitea PR #17
|
|
||||||
srht/
|
|
||||||
5_pr_msg.md # srht patch #5
|
|
||||||
```
|
|
||||||
|
|
||||||
Each `<service>/<num>_pr_msg.md` is a copy with:
|
|
||||||
- metadata `submitted:` fields filled in
|
|
||||||
- cross-references section uncommented
|
|
||||||
- ref-link stubs uncommented with real numbers
|
|
||||||
- all services cross-linked in each copy
|
|
||||||
|
|
||||||
This mirrors the `gish` skill's
|
|
||||||
`<backend>/<num>.md` pattern.
|
|
||||||
|
|
||||||
## Commit-Link URL Patterns by Service
|
|
||||||
|
|
||||||
| Service | Pattern |
|
|
||||||
|-----------|-------------------------------------|
|
|
||||||
| GitHub | `https://github.com/<o>/<r>/commit/<h>` |
|
|
||||||
| Gitea | `https://<host>/<o>/<r>/commit/<h>` |
|
|
||||||
| SourceHut | `https://git.sr.ht/~<o>/<r>/commit/<h>` |
|
|
||||||
| GitLab | `https://gitlab.com/<o>/<r>/-/commit/<h>` |
|
|
||||||
|
|
||||||
## PR/Patch URL Patterns by Service
|
|
||||||
|
|
||||||
| Service | Pattern |
|
|
||||||
|-----------|-------------------------------------|
|
|
||||||
| GitHub | `https://github.com/<o>/<r>/pull/<n>` |
|
|
||||||
| Gitea | `https://<host>/<o>/<r>/pulls/<n>` |
|
|
||||||
| SourceHut | `https://git.sr.ht/~<o>/<r>/patches/<n>` |
|
|
||||||
| GitLab | `https://gitlab.com/<o>/<r>/-/merge_requests/<n>` |
|
|
||||||
|
|
||||||
## Scope Naming Convention
|
|
||||||
|
|
||||||
Use Python namespace-resolution syntax for
|
|
||||||
referencing changed code scopes:
|
|
||||||
|
|
||||||
| File path | Scope reference |
|
|
||||||
|---------------------------|-------------------------------|
|
|
||||||
| `tractor/_exceptions.py` | `tractor._exceptions` |
|
|
||||||
| `tractor/_state.py` | `tractor._state` |
|
|
||||||
| `tests/test_foo.py` | `tests.test_foo` |
|
|
||||||
| Function in module | `tractor._exceptions.func()` |
|
|
||||||
| Method on class | `.RemoteActorError.src_type` |
|
|
||||||
| Class | `tractor._exceptions.RAE` |
|
|
||||||
|
|
||||||
Prefix with the package path for top-level refs;
|
|
||||||
use leading-dot shorthand (`.ClassName.method()`)
|
|
||||||
for sub-bullets where the parent module is already
|
|
||||||
established.
|
|
||||||
|
|
||||||
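A small sketch of the same convention as code; the helper name is
made up for illustration:

```python
# Illustrative path -> scope converter matching the table above.
from pathlib import PurePosixPath

def path_to_scope(path: str) -> str:
    p = PurePosixPath(path)
    parts = list(p.parts)
    if p.suffix == '.py':
        parts[-1] = p.stem
    return '.'.join(parts)

assert path_to_scope('tractor/_exceptions.py') == 'tractor._exceptions'
assert path_to_scope('tests/test_foo.py') == 'tests.test_foo'
```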
## Title Conventions
|
|
||||||
|
|
||||||
Same verb vocabulary as commit messages:
|
|
||||||
- `Add` — wholly new feature/API
|
|
||||||
- `Fix` — bug fix
|
|
||||||
- `Drop` — removal
|
|
||||||
- `Use` — adopt new approach
|
|
||||||
- `Move`/`Mv` — relocate code
|
|
||||||
- `Adjust` — minor tweak
|
|
||||||
- `Update` — enhance existing feature
|
|
||||||
- `Support` — add support for something
|
|
||||||
|
|
||||||
Target 50 chars, hard max 70. Always backtick
|
|
||||||
code elements.
|
|
||||||
|
|
||||||
## Tone
|
|
||||||
|
|
||||||
Casual yet technically precise — matching the
|
|
||||||
project's commit-msg style. Terse but every bullet
|
|
||||||
carries signal. Use project abbreviations freely
|
|
||||||
(msg, bg, ctx, impl, mod, obvi, fn, bc, var,
|
|
||||||
prolly, ep, etc.).
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
(this format reference was generated by
|
|
||||||
[`claude-code`][claude-code-gh])
|
|
||||||
[claude-code-gh]: https://github.com/anthropics/claude-code
|
|
||||||
|
|
@ -1,358 +0,0 @@
|
||||||
---
|
|
||||||
name: run-tests
|
|
||||||
description: >
|
|
||||||
Run tractor test suite (or subsets). Use when the user wants
|
|
||||||
to run tests, verify changes, or check for regressions.
|
|
||||||
argument-hint: "[test-path-or-pattern] [--opts]"
|
|
||||||
allowed-tools:
|
|
||||||
- Bash(python -m pytest *)
|
|
||||||
- Bash(python -c *)
|
|
||||||
- Bash(python --version *)
|
|
||||||
- Bash(UV_PROJECT_ENVIRONMENT=py* uv run python *)
|
|
||||||
- Bash(UV_PROJECT_ENVIRONMENT=py* uv run pytest *)
|
|
||||||
- Bash(UV_PROJECT_ENVIRONMENT=py* uv sync *)
|
|
||||||
- Bash(UV_PROJECT_ENVIRONMENT=py* uv pip show *)
|
|
||||||
- Bash(git rev-parse *)
|
|
||||||
- Bash(ls *)
|
|
||||||
- Bash(cat *)
|
|
||||||
- Bash(jq * .pytest_cache/*)
|
|
||||||
- Read
|
|
||||||
- Grep
|
|
||||||
- Glob
|
|
||||||
- Task
|
|
||||||
- AskUserQuestion
|
|
||||||
---
|
|
||||||
|
|
||||||
Run the `tractor` test suite using `pytest`. Follow this
|
|
||||||
process:
|
|
||||||
|
|
||||||
## 1. Parse user intent
|
|
||||||
|
|
||||||
From the user's message and any arguments, determine:
|
|
||||||
|
|
||||||
- **scope**: full suite, specific file(s), specific
|
|
||||||
test(s), or a keyword pattern (`-k`).
|
|
||||||
- **transport**: which IPC transport protocol to test
|
|
||||||
against (default: `tcp`, also: `uds`).
|
|
||||||
- **options**: any extra pytest flags the user wants
|
|
||||||
(e.g. `--ll debug`, `--tpdb`, `-x`, `-v`).
|
|
||||||
|
|
||||||
If the user provides a bare path or pattern as argument,
|
|
||||||
treat it as the test target. Examples:
|
|
||||||
|
|
||||||
- `/run-tests` → full suite
|
|
||||||
- `/run-tests test_local.py` → single file
|
|
||||||
- `/run-tests test_registrar -v` → file + verbose
|
|
||||||
- `/run-tests -k cancel` → keyword filter
|
|
||||||
- `/run-tests tests/ipc/ --tpt-proto uds` → subdir + UDS
|
|
||||||
|
|
||||||
## 2. Construct the pytest command
|
|
||||||
|
|
||||||
Base command:
|
|
||||||
```
|
|
||||||
python -m pytest
|
|
||||||
```
|
|
||||||
|
|
||||||
### Default flags (always include unless user overrides):
|
|
||||||
- `-x` (stop on first failure)
|
|
||||||
- `--tb=short` (concise tracebacks)
|
|
||||||
- `--no-header` (reduce noise)
|
|
||||||
|
|
||||||
### Path resolution:
|
|
||||||
- If the user gives a bare filename like `test_local.py`,
|
|
||||||
resolve it under `tests/`.
|
|
||||||
- If the user gives a subdirectory like `ipc/`, resolve
|
|
||||||
under `tests/ipc/`.
|
|
||||||
- Glob if needed: `tests/**/test_*<pattern>*.py`
|
|
||||||
|
|
||||||
### Key pytest options for this project:
|
|
||||||
|
|
||||||
| Flag | Purpose |
|
|
||||||
|---|---|
|
|
||||||
| `--ll <level>` | Set tractor log level (e.g. `debug`, `info`, `runtime`) |
|
|
||||||
| `--tpdb` / `--debug-mode` | Enable tractor's multi-proc debugger |
|
|
||||||
| `--tpt-proto <key>` | IPC transport: `tcp` (default) or `uds` |
|
|
||||||
| `--spawn-backend <be>` | Spawn method: `trio` (default), `mp_spawn`, `mp_forkserver` |
|
|
||||||
| `-k <expr>` | pytest keyword filter |
|
|
||||||
| `-v` / `-vv` | Verbosity |
|
|
||||||
| `-s` | No output capture (useful with `--tpdb`) |
|
|
||||||
|
|
||||||
### Common combos:
|
|
||||||
```sh
|
|
||||||
# quick smoke test of core modules
|
|
||||||
python -m pytest tests/test_local.py tests/test_rpc.py -x --tb=short --no-header
|
|
||||||
|
|
||||||
# full suite, stop on first failure
|
|
||||||
python -m pytest tests/ -x --tb=short --no-header
|
|
||||||
|
|
||||||
# specific test with debug
|
|
||||||
python -m pytest tests/discovery/test_registrar.py::test_reg_then_unreg -x -s --tpdb --ll debug
|
|
||||||
|
|
||||||
# run with UDS transport
|
|
||||||
python -m pytest tests/ -x --tb=short --no-header --tpt-proto uds
|
|
||||||
|
|
||||||
# keyword filter
|
|
||||||
python -m pytest tests/ -x --tb=short --no-header -k "cancel and not slow"
|
|
||||||
```
|
|
||||||
|
|
||||||
## 3. Pre-flight: venv detection (MANDATORY)
|
|
||||||
|
|
||||||
**Always verify a `uv` venv is active before running
|
|
||||||
`python` or `pytest`.** This project uses
|
|
||||||
`UV_PROJECT_ENVIRONMENT=py<MINOR>` naming (e.g.
|
|
||||||
`py313`) — never `.venv`.
|
|
||||||
|
|
||||||
### Step 1: detect active venv
|
|
||||||
|
|
||||||
Run this check first:
|
|
||||||
|
|
||||||
```sh
|
|
||||||
python -c "
|
|
||||||
import sys, os
|
|
||||||
venv = os.environ.get('VIRTUAL_ENV', '')
|
|
||||||
prefix = sys.prefix
|
|
||||||
print(f'VIRTUAL_ENV={venv}')
|
|
||||||
print(f'sys.prefix={prefix}')
|
|
||||||
print(f'executable={sys.executable}')
|
|
||||||
"
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 2: interpret results
|
|
||||||
|
|
||||||
**Case A — venv is active** (`VIRTUAL_ENV` is set
|
|
||||||
and points to a `py<MINOR>/` dir under the project
|
|
||||||
root or worktree):
|
|
||||||
|
|
||||||
Use bare `python` / `python -m pytest` for all
|
|
||||||
commands. This is the normal, fast path.
|
|
||||||
|
|
||||||
**Case B — no venv active** (`VIRTUAL_ENV` is empty
|
|
||||||
or `sys.prefix` points to a system Python):
|
|
||||||
|
|
||||||
Use `AskUserQuestion` to ask the user:
|
|
||||||
|
|
||||||
> "No uv venv is active. Should I activate one
|
|
||||||
> via `UV_PROJECT_ENVIRONMENT=py<MINOR> uv sync`,
|
|
||||||
> or would you prefer to activate your shell venv
|
|
||||||
> first?"
|
|
||||||
|
|
||||||
Options:
|
|
||||||
1. **"Create/sync venv"** — run
|
|
||||||
`UV_PROJECT_ENVIRONMENT=py<MINOR> uv sync` where
|
|
||||||
`<MINOR>` is detected from `python --version`
|
|
||||||
(e.g. `313` for 3.13). Then use
|
|
||||||
`py<MINOR>/bin/python` for all subsequent
|
|
||||||
commands in this session.
|
|
||||||
2. **"I'll activate it myself"** — stop and let the
|
|
||||||
user `source py<MINOR>/bin/activate` or similar.
|
|
||||||
|
|
||||||
**Case C — inside a git worktree** (`git rev-parse
|
|
||||||
--git-common-dir` differs from `--git-dir`):
|
|
||||||
|
|
||||||
Verify Python resolves from the **worktree's own
|
|
||||||
venv**, not the main repo's:
|
|
||||||
|
|
||||||
```sh
|
|
||||||
python -c "import tractor; print(tractor.__file__)"
|
|
||||||
```
|
|
||||||
|
|
||||||
If the path points outside the worktree, create a
|
|
||||||
worktree-local venv:
|
|
||||||
|
|
||||||
```sh
|
|
||||||
UV_PROJECT_ENVIRONMENT=py<MINOR> uv sync
|
|
||||||
```
|
|
||||||
|
|
||||||
Then use `py<MINOR>/bin/python` for all commands.
|
|
||||||
|
|
||||||
**Why this matters**: without the correct venv,
|
|
||||||
subprocesses spawned by tractor resolve modules
|
|
||||||
from the wrong editable install, causing spurious
|
|
||||||
`AttributeError` / `ModuleNotFoundError`.
|
|
||||||
|
|
||||||
### Fallback: `uv run`
|
|
||||||
|
|
||||||
If the user can't or won't activate a venv, all
|
|
||||||
`python` and `pytest` commands can be prefixed
|
|
||||||
with `UV_PROJECT_ENVIRONMENT=py<MINOR> uv run`:
|
|
||||||
|
|
||||||
```sh
|
|
||||||
# instead of: python -m pytest tests/ -x
|
|
||||||
UV_PROJECT_ENVIRONMENT=py313 uv run pytest tests/ -x
|
|
||||||
|
|
||||||
# instead of: python -c 'import tractor'
|
|
||||||
UV_PROJECT_ENVIRONMENT=py313 uv run python -c 'import tractor'
|
|
||||||
```
|
|
||||||
|
|
||||||
`uv run` auto-discovers the project and venv,
|
|
||||||
but is slower than a pre-activated venv due to
|
|
||||||
lock-file resolution on each invocation. Prefer
|
|
||||||
activating the venv when possible.
|
|
||||||
|
|
||||||
### Step 3: import + collection checks
|
|
||||||
|
|
||||||
After venv is confirmed, always run these
|
|
||||||
(especially after refactors or module moves):
|
|
||||||
|
|
||||||
```sh
|
|
||||||
# 1. package import smoke check
|
|
||||||
python -c 'import tractor; print(tractor)'
|
|
||||||
|
|
||||||
# 2. verify all tests collect (no import errors)
|
|
||||||
python -m pytest tests/ -x -q --co 2>&1 | tail -5
|
|
||||||
```
|
|
||||||
|
|
||||||
If either fails, fix the import error before running
|
|
||||||
any actual tests.
|
|
||||||
|
|
||||||
## 4. Run and report
|
|
||||||
|
|
||||||
- Run the constructed command.
|
|
||||||
- Use a timeout of **600000ms** (10min) for full suite
|
|
||||||
runs, **120000ms** (2min) for single-file runs.
|
|
||||||
- If the suite is large (full `tests/`), consider running
|
|
||||||
in the background and checking output when done.
|
|
||||||
- Use `--lf` (last-failed) to re-run only previously
|
|
||||||
failing tests when iterating on a fix.
|
|
||||||
|
|
||||||
### On failure:
|
|
||||||
- Show the failing test name(s) and short traceback.
|
|
||||||
- If the failure looks related to recent changes, point
|
|
||||||
out the likely cause and suggest a fix.
|
|
||||||
- **Check the known-flaky list** (section 8) before
|
|
||||||
investigating — don't waste time on pre-existing
|
|
||||||
timeout issues.
|
|
||||||
- **NEVER auto-commit fixes.** If you apply a code fix
|
|
||||||
during test iteration, leave it unstaged. Tell the
|
|
||||||
user what changed and suggest they review the
|
|
||||||
worktree state, stage files manually, and use
|
|
||||||
`/commit-msg` (inline or in a separate session) to
|
|
||||||
generate the commit message. The human drives all
|
|
||||||
`git add` and `git commit` operations.
|
|
||||||
|
|
||||||
### On success:
|
|
||||||
- Report the pass/fail/skip counts concisely.
|
|
||||||
|
|
||||||
## 5. Test directory layout (reference)
|
|
||||||
|
|
||||||
```
|
|
||||||
tests/
|
|
||||||
├── conftest.py # root fixtures, daemon, signals
|
|
||||||
├── devx/ # debugger/tooling tests
|
|
||||||
├── ipc/ # transport protocol tests
|
|
||||||
├── msg/ # messaging layer tests
|
|
||||||
├── discovery/ # discovery subsystem tests
|
|
||||||
│ ├── test_multiaddr.py # multiaddr construction
|
|
||||||
│ └── test_registrar.py # registry/discovery protocol
|
|
||||||
├── test_local.py # registrar + local actor basics
|
|
||||||
├── test_rpc.py # RPC error handling
|
|
||||||
├── test_spawning.py # subprocess spawning
|
|
||||||
├── test_multi_program.py # multi-process tree tests
|
|
||||||
├── test_cancellation.py # cancellation semantics
|
|
||||||
├── test_context_stream_semantics.py # ctx streaming
|
|
||||||
├── test_inter_peer_cancellation.py # peer cancel
|
|
||||||
├── test_infected_asyncio.py # trio-in-asyncio
|
|
||||||
└── ...
|
|
||||||
```
|
|
||||||
|
|
||||||
## 6. Change-type → test mapping
|
|
||||||
|
|
||||||
After modifying specific modules, run the corresponding
|
|
||||||
test subset first for fast feedback:
|
|
||||||
|
|
||||||
| Changed module(s) | Run these tests first |
|
|
||||||
|---|---|
|
|
||||||
| `runtime/_runtime.py`, `runtime/_state.py` | `test_local.py test_rpc.py test_spawning.py test_root_runtime.py` |
|
|
||||||
| `discovery/` (`_registry`, `_discovery`, `_addr`) | `tests/discovery/ test_multi_program.py test_local.py` |
|
|
||||||
| `_context.py`, `_streaming.py` | `test_context_stream_semantics.py test_advanced_streaming.py` |
|
|
||||||
| `ipc/` (`_chan`, `_server`, `_transport`) | `tests/ipc/ test_2way.py` |
|
|
||||||
| `runtime/_portal.py`, `runtime/_rpc.py` | `test_rpc.py test_cancellation.py` |
|
|
||||||
| `spawn/` (`_spawn`, `_entry`) | `test_spawning.py test_multi_program.py` |
|
|
||||||
| `devx/debug/` | `tests/devx/test_debugger.py` (slow!) |
|
|
||||||
| `to_asyncio.py` | `test_infected_asyncio.py test_root_infect_asyncio.py` |
|
|
||||||
| `msg/` | `tests/msg/` |
|
|
||||||
| `_exceptions.py` | `test_remote_exc_relay.py test_inter_peer_cancellation.py` |
|
|
||||||
| `runtime/_supervise.py` | `test_cancellation.py test_spawning.py` |
|
|
||||||
|
|
||||||
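If you want to apply the table mechanically, a hypothetical helper
(the dict contents are copied from a few table rows, the rest are
elided; the function itself is illustrative, not an existing
project util):

```python
# Sketch: map a changed path fragment to a first-pass pytest command.
CHANGE_TO_TESTS: dict[str, str] = {
    'runtime/_runtime.py': 'tests/test_local.py tests/test_rpc.py tests/test_spawning.py tests/test_root_runtime.py',
    '_context.py': 'tests/test_context_stream_semantics.py tests/test_advanced_streaming.py',
    'to_asyncio.py': 'tests/test_infected_asyncio.py tests/test_root_infect_asyncio.py',
    '_exceptions.py': 'tests/test_remote_exc_relay.py tests/test_inter_peer_cancellation.py',
    'msg/': 'tests/msg/',
    # ... remaining rows elided, see the table above
}

def first_pass_cmd(changed_path: str) -> str | None:
    for frag, targets in CHANGE_TO_TESTS.items():
        if frag in changed_path:
            return f'python -m pytest {targets} -x --tb=short --no-header'
    return None  # no match: fall back to the full suite
```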
## 7. Quick-check shortcuts
|
|
||||||
|
|
||||||
### After refactors (fastest first-pass):
|
|
||||||
```sh
|
|
||||||
# import + collect check
|
|
||||||
python -c 'import tractor' && python -m pytest tests/ -x -q --co 2>&1 | tail -3
|
|
||||||
|
|
||||||
# core subset (~10s)
|
|
||||||
python -m pytest tests/test_local.py tests/test_rpc.py tests/test_spawning.py tests/discovery/test_registrar.py -x --tb=short --no-header
|
|
||||||
```
|
|
||||||
|
|
||||||
### Inspect last failures (without re-running):
|
|
||||||
|
|
||||||
When the user asks "what failed?", "show failures",
|
|
||||||
or wants to check the last-failed set before
|
|
||||||
re-running — read the pytest cache directly. This
|
|
||||||
is instant and avoids test collection overhead.
|
|
||||||
|
|
||||||
```sh
|
|
||||||
python -c "
|
|
||||||
import json, pathlib, sys
|
|
||||||
p = pathlib.Path('.pytest_cache/v/cache/lastfailed')
|
|
||||||
if not p.exists():
|
|
||||||
print('No lastfailed cache found.'); sys.exit()
|
|
||||||
data = json.loads(p.read_text())
|
|
||||||
# filter to real test node IDs (ignore junk
|
|
||||||
# entries that can accumulate from system paths)
|
|
||||||
tests = sorted(k for k in data if k.startswith('tests/'))
|
|
||||||
if not tests:
|
|
||||||
print('No failures recorded.')
|
|
||||||
else:
|
|
||||||
print(f'{len(tests)} last-failed test(s):')
|
|
||||||
for t in tests:
|
|
||||||
print(f' {t}')
|
|
||||||
"
|
|
||||||
```
|
|
||||||
|
|
||||||
**Why not `--cache-show` or `--co --lf`?**
|
|
||||||
|
|
||||||
- `pytest --cache-show 'cache/lastfailed'` works
|
|
||||||
but dumps raw dict repr including junk entries
|
|
||||||
(stale system paths that leak into the cache).
|
|
||||||
- `pytest --co --lf` actually *collects* tests which
|
|
||||||
triggers import resolution and is slow (~0.5s+).
|
|
||||||
Worse, when cached node IDs don't exactly match
|
|
||||||
current parametrize IDs (e.g. param names changed
|
|
||||||
between runs), pytest falls back to collecting
|
|
||||||
the *entire file*, giving false positives.
|
|
||||||
- Reading the JSON directly is instant, filterable
|
|
||||||
to `tests/`-prefixed entries, and shows exactly
|
|
||||||
what pytest recorded — no interpretation.
|
|
||||||
|
|
||||||
**After inspecting**, re-run the failures:
|
|
||||||
```sh
|
|
||||||
python -m pytest --lf -x --tb=short --no-header
|
|
||||||
```
|
|
||||||
|
|
||||||
### Full suite in background:
|
|
||||||
When core tests pass and you want full coverage while
|
|
||||||
continuing other work, run in background:
|
|
||||||
```sh
|
|
||||||
python -m pytest tests/ -x --tb=short --no-header -q
|
|
||||||
```
|
|
||||||
(use `run_in_background=true` on the Bash tool)
|
|
||||||
|
|
||||||
## 8. Known flaky tests
|
|
||||||
|
|
||||||
These tests have **pre-existing** timing/environment
|
|
||||||
sensitivity. If they fail with `TooSlowError` or
|
|
||||||
pexpect `TIMEOUT`, they are almost certainly NOT caused
|
|
||||||
by your changes — note them and move on.
|
|
||||||
|
|
||||||
| Test | Typical error | Notes |
|
|
||||||
|---|---|---|
|
|
||||||
| `devx/test_debugger.py::test_multi_nested_subactors_error_through_nurseries` | pexpect TIMEOUT | Debugger pexpect timing |
|
|
||||||
| `test_cancellation.py::test_cancel_via_SIGINT_other_task` | TooSlowError | Signal handling race |
|
|
||||||
| `test_inter_peer_cancellation.py::test_peer_spawns_and_cancels_service_subactor` | TooSlowError | Async timing (both param variants) |
|
|
||||||
| `test_docs_examples.py::test_example[we_are_processes.py]` | `assert None == 0` | `__main__` missing `__file__` in subproc |
|
|
||||||
|
|
||||||
**Rule of thumb**: if a test fails with `TooSlowError`,
|
|
||||||
`trio.TooSlowError`, or `pexpect.TIMEOUT` and you didn't
|
|
||||||
touch the relevant code path, it's flaky — skip it.
|
|
||||||
|
|
@ -1,18 +1,10 @@
|
||||||
name: CI
|
name: CI
|
||||||
|
|
||||||
# NOTE distilled from,
|
|
||||||
# https://github.com/orgs/community/discussions/26276
|
|
||||||
on:
|
on:
|
||||||
# any time a new update to 'main'
|
# any time someone pushes a new branch to origin
|
||||||
push:
|
push:
|
||||||
branches:
|
|
||||||
- main
|
|
||||||
|
|
||||||
# for on all (forked) PRs to repo
|
# Allows you to run this workflow manually from the Actions tab
|
||||||
# NOTE, use a draft PR if you just want CI triggered..
|
|
||||||
pull_request:
|
|
||||||
|
|
||||||
# to run workflow manually from the "Actions" tab
|
|
||||||
workflow_dispatch:
|
workflow_dispatch:
|
||||||
|
|
||||||
jobs:
|
jobs:
|
||||||
|
|
@ -82,44 +74,24 @@ jobs:
|
||||||
# run: mypy tractor/ --ignore-missing-imports --show-traceback
|
# run: mypy tractor/ --ignore-missing-imports --show-traceback
|
||||||
|
|
||||||
|
|
||||||
testing:
|
testing-linux:
|
||||||
name: '${{ matrix.os }} Python${{ matrix.python-version }} spawn_backend=${{ matrix.spawn_backend }} tpt_proto=${{ matrix.tpt_proto }}'
|
name: '${{ matrix.os }} Python ${{ matrix.python }} - ${{ matrix.spawn_backend }}'
|
||||||
timeout-minutes: 16
|
timeout-minutes: 10
|
||||||
runs-on: ${{ matrix.os }}
|
runs-on: ${{ matrix.os }}
|
||||||
|
|
||||||
strategy:
|
strategy:
|
||||||
fail-fast: false
|
fail-fast: false
|
||||||
matrix:
|
matrix:
|
||||||
os: [
|
os: [ubuntu-latest]
|
||||||
ubuntu-latest,
|
python-version: ['3.13']
|
||||||
macos-latest,
|
|
||||||
]
|
|
||||||
python-version: [
|
|
||||||
'3.13',
|
|
||||||
# '3.14',
|
|
||||||
]
|
|
||||||
spawn_backend: [
|
spawn_backend: [
|
||||||
'trio',
|
'trio',
|
||||||
# 'mp_spawn',
|
# 'mp_spawn',
|
||||||
# 'mp_forkserver',
|
# 'mp_forkserver',
|
||||||
# ?TODO^ is it worth it to get these running again?
|
|
||||||
#
|
|
||||||
# - [ ] next-gen backends, on 3.13+
|
|
||||||
# https://github.com/goodboy/tractor/issues/379
|
|
||||||
# 'subinterpreter',
|
|
||||||
# 'subint',
|
|
||||||
]
|
]
|
||||||
tpt_proto: [
|
|
||||||
'tcp',
|
|
||||||
'uds',
|
|
||||||
]
|
|
||||||
# https://github.com/orgs/community/discussions/26253#discussioncomment-3250989
|
|
||||||
exclude:
|
|
||||||
# don't do UDS run on macOS (for now)
|
|
||||||
- os: macos-latest
|
|
||||||
tpt_proto: 'uds'
|
|
||||||
|
|
||||||
steps:
|
steps:
|
||||||
|
|
||||||
- uses: actions/checkout@v4
|
- uses: actions/checkout@v4
|
||||||
|
|
||||||
- name: 'Install uv + py-${{ matrix.python-version }}'
|
- name: 'Install uv + py-${{ matrix.python-version }}'
|
||||||
|
|
@ -146,11 +118,7 @@ jobs:
|
||||||
run: uv tree
|
run: uv tree
|
||||||
|
|
||||||
- name: Run tests
|
- name: Run tests
|
||||||
run: >
|
run: uv run pytest tests/ --spawn-backend=${{ matrix.spawn_backend }} -rsx
|
||||||
uv run
|
|
||||||
pytest tests/ -rsx
|
|
||||||
--spawn-backend=${{ matrix.spawn_backend }}
|
|
||||||
--tpt-proto=${{ matrix.tpt_proto }}
|
|
||||||
|
|
||||||
# XXX legacy NOTE XXX
|
# XXX legacy NOTE XXX
|
||||||
#
|
#
|
||||||
|
|
|
||||||
|
|
@ -102,69 +102,3 @@ venv.bak/
|
||||||
|
|
||||||
# mypy
|
# mypy
|
||||||
.mypy_cache/
|
.mypy_cache/
|
||||||
|
|
||||||
# all files under
|
|
||||||
.git/
|
|
||||||
|
|
||||||
# require very explicit staging for anything we **really**
|
|
||||||
# want put/kept in repo.
|
|
||||||
notes_to_self/
|
|
||||||
snippets/
|
|
||||||
|
|
||||||
# ------- AI shiz -------
|
|
||||||
# `ai.skillz` symlinks,
|
|
||||||
# (machine-local, deploy via deploy-skill.sh)
|
|
||||||
.claude/skills/py-codestyle
|
|
||||||
.claude/skills/close-wkt
|
|
||||||
.claude/skills/plan-io
|
|
||||||
.claude/skills/prompt-io
|
|
||||||
.claude/skills/resolve-conflicts
|
|
||||||
.claude/skills/inter-skill-review
|
|
||||||
|
|
||||||
# /open-wkt specifics
|
|
||||||
.claude/skills/open-wkt
|
|
||||||
.claude/wkts/
|
|
||||||
claude_wkts
|
|
||||||
|
|
||||||
# /code-review-changes specifics
|
|
||||||
.claude/skills/code-review-changes
|
|
||||||
# review-skill ephemeral ctx (per-PR, single-use)
|
|
||||||
.claude/review_context.md
|
|
||||||
.claude/review_regression.md
|
|
||||||
|
|
||||||
# /pr-msg specifics
|
|
||||||
.claude/skills/pr-msg/*
|
|
||||||
# repo-specific
|
|
||||||
!.claude/skills/pr-msg/format-reference.md
|
|
||||||
# XXX, so u can nvim-telescope this file.
|
|
||||||
# !.claude/skills/pr-msg/pr_msg_LATEST.md
|
|
||||||
|
|
||||||
# /commit-msg specifics
|
|
||||||
# - any commit-msg gen tmp files
|
|
||||||
.claude/*_commit_*.md
|
|
||||||
.claude/*_commit*.txt
|
|
||||||
.claude/skills/commit-msg/*
|
|
||||||
!.claude/skills/commit-msg/style-duie-reference.md
|
|
||||||
|
|
||||||
# use prompt-io instead?
|
|
||||||
.claude/plans
|
|
||||||
|
|
||||||
# nix develop --profile .nixdev
|
|
||||||
.nixdev*
|
|
||||||
|
|
||||||
# :Obsession .
|
|
||||||
Session.vim
|
|
||||||
|
|
||||||
# `gish` local `.md`-files
|
|
||||||
# TODO? better all around automation!
|
|
||||||
# -[ ] it'd be handy to also commit and sync with wtv git service?
|
|
||||||
# -[ ] everything should be put under a `.gish/` no?
|
|
||||||
gitea/
|
|
||||||
gh/
|
|
||||||
|
|
||||||
# ------ macOS ------
|
|
||||||
# Finder metadata
|
|
||||||
**/.DS_Store
|
|
||||||
|
|
||||||
# LLM conversations that should remain private
|
|
||||||
docs/conversations/
|
|
||||||
|
|
|
||||||
|
|
@ -1,54 +0,0 @@
|
||||||
---
|
|
||||||
model: claude-opus-4-6
|
|
||||||
service: claude
|
|
||||||
session: (ad-hoc, not tracked via conf.toml)
|
|
||||||
timestamp: 2026-04-06T17:28:48Z
|
|
||||||
git_ref: 02b2ef1
|
|
||||||
scope: tests
|
|
||||||
substantive: true
|
|
||||||
raw_file: 20260406T172848Z_02b2ef1_prompt_io.raw.md
|
|
||||||
---
|
|
||||||
|
|
||||||
## Prompt
|
|
||||||
|
|
||||||
User asked to extend `tests/test_resource_cache.py` with a test
|
|
||||||
that reproduces the edge case fixed in commit `02b2ef18` (per-key
|
|
||||||
locking+user tracking in `maybe_open_context()`). The bug was
|
|
||||||
originally triggered in piker's `brokerd.kraken` backend where the
|
|
||||||
same `acm_func` was called with different kwargs, and the old
|
|
||||||
global `_Cache.users` counter caused:
|
|
||||||
|
|
||||||
- teardown skipped for one `ctx_key` bc another key's users kept
|
|
||||||
the global count > 0
|
|
||||||
- re-entry hitting `assert not resources.get(ctx_key)` during the
|
|
||||||
teardown window
|
|
||||||
|
|
||||||
User requested a test that would fail under the old code and pass
|
|
||||||
with the fix.
|
|
||||||
|
|
||||||
## Response summary
|
|
||||||
|
|
||||||
Designed and implemented `test_per_ctx_key_resource_lifecycle`
|
|
||||||
which verifies per-`ctx_key` resource isolation by:
|
|
||||||
|
|
||||||
1. Holding resource `'a'` open in a bg task
|
|
||||||
2. Opening+closing resource `'b'` (same `acm_func`, different
|
|
||||||
kwargs) while `'a'` is still alive
|
|
||||||
3. Re-opening `'b'` and asserting cache MISS — proving `'b'` was
|
|
||||||
torn down independently despite `'a'` keeping its own user
|
|
||||||
count > 0
|
|
||||||
|
|
||||||
With the old global counter, phase 3 would produce a stale cache
|
|
||||||
HIT (leaked resource) or crash on the assert.
|
|
||||||
|
|
||||||
Also added a trivial `acm_with_resource(resource_id)` ACM helper
|
|
||||||
at module level.
|
|
||||||
|
|
||||||
## Files changed
|
|
||||||
|
|
||||||
- `tests/test_resource_cache.py` — add `acm_with_resource` ACM +
|
|
||||||
`test_per_ctx_key_resource_lifecycle` test fn
|
|
||||||
|
|
||||||
## Human edits
|
|
||||||
|
|
||||||
None — committed as generated (pending user review).
|
|
||||||
|
|
@ -1,57 +0,0 @@
|
||||||
---
|
|
||||||
model: claude-opus-4-6
|
|
||||||
service: claude
|
|
||||||
session: (ad-hoc, not tracked via conf.toml)
|
|
||||||
timestamp: 2026-04-06T19:31:25Z
|
|
||||||
git_ref: 85f9c5d
|
|
||||||
scope: tests
|
|
||||||
substantive: true
|
|
||||||
raw_file: 20260406T193125Z_85f9c5d_prompt_io.raw.md
|
|
||||||
---
|
|
||||||
|
|
||||||
## Prompt
|
|
||||||
|
|
||||||
User asked to reproduce the exact `assert not resources.get(ctx_key)`
|
|
||||||
crash originally seen in piker's `brokerd.kraken` backend via
|
|
||||||
`open_cached_client('kraken')`. Key constraints from user:
|
|
||||||
|
|
||||||
- In piker, kwargs were the **same** (empty) so all callers
|
|
||||||
share one `ctx_key = (fid, ())`
|
|
||||||
- The root issue is `_Cache.locks` being indexed by `fid`
|
|
||||||
rather than `ctx_key`, plus the race window between
|
|
||||||
`values.pop()` and `resources.pop()` in `_Cache.run_ctx`
|
|
||||||
(the acm `__aexit__` has checkpoints in between)
|
|
||||||
- Mark the test `@pytest.mark.xfail` so it can be committed
|
|
||||||
against the unpatched code and later adjusted once the fix
|
|
||||||
from `9e49eddd` is applied
|
|
||||||
|
|
||||||
User pointed to piker's `open_cached_client('kraken')` usage
|
|
||||||
as the real-world pattern to model.
|
|
||||||
|
|
||||||
## Response summary
|
|
||||||
|
|
||||||
Added `test_moc_reentry_during_teardown` which deterministically
|
|
||||||
reproduces the teardown race:
|
|
||||||
|
|
||||||
1. A `cached_client()` acm (no params, like
|
|
||||||
`kraken.api.get_client()`) signals via `in_aexit` event
|
|
||||||
when its `__aexit__` starts, then sleeps to hold the
|
|
||||||
window open
|
|
||||||
2. `use_and_exit` task enters+exits the cached resource
|
|
||||||
3. `reenter_during_teardown` waits on `in_aexit` then
|
|
||||||
re-enters — landing exactly in the window where
|
|
||||||
`values` is popped but `resources` still exists
|
|
||||||
4. The old `assert not resources.get(ctx_key)` fires
|
|
||||||
5. `@pytest.mark.xfail(raises=AssertionError)` catches it
|
|
||||||
|
|
||||||
Test confirmed: `XFAIL` with the exact traceback at
|
|
||||||
`_mngrs.py:327`.
|
|
||||||
|
|
||||||
## Files changed
|
|
||||||
|
|
||||||
- `tests/test_resource_cache.py` — add
|
|
||||||
`test_moc_reentry_during_teardown` xfail test fn
|
|
||||||
|
|
||||||
## Human edits
|
|
||||||
|
|
||||||
None — committed as generated (pending user review).
|
|
||||||
|
|
@ -1,52 +0,0 @@
|
||||||
---
|
|
||||||
model: claude-opus-4-6
|
|
||||||
service: claude
|
|
||||||
session: (ad-hoc, not tracked via conf.toml)
|
|
||||||
timestamp: 2026-04-06T22:26:48Z
|
|
||||||
git_ref: f086222
|
|
||||||
scope: code
|
|
||||||
substantive: true
|
|
||||||
raw_file: 20260406T222648Z_f086222_prompt_io.raw.md
|
|
||||||
---
|
|
||||||
|
|
||||||
## Prompt
|
|
||||||
|
|
||||||
User applied the per-key locking patch (`f086222d`),
|
|
||||||
removed the `@pytest.mark.xfail` from
|
|
||||||
`test_moc_reentry_during_teardown`, and found the
|
|
||||||
test STILL fails — now with `RuntimeError` instead
|
|
||||||
of `AssertionError`. Asked to diagnose why the fix
|
|
||||||
doesn't actually resolve this race.
|
|
||||||
|
|
||||||
## Response summary
|
|
||||||
|
|
||||||
Traced the exact scheduling sequence across three
|
|
||||||
concurrent tasks to show that the per-`ctx_key`
|
|
||||||
locking change doesn't close the fundamental gap:
|
|
||||||
|
|
||||||
- `maybe_open_context`'s finally pops the lock and
|
|
||||||
sets `no_more_users` in the **exiting caller's
|
|
||||||
task**
|
|
||||||
- But the actual resource cleanup (`values.pop`,
|
|
||||||
acm `__aexit__`, `resources.pop`) runs in the
|
|
||||||
**`run_ctx` task** inside `service_tn`
|
|
||||||
- A re-entering task creates a fresh lock (old one
|
|
||||||
popped), acquires immediately, and hits stale
|
|
||||||
`resources[ctx_key]` while `run_ctx` is still in
|
|
||||||
`__aexit__`
|
|
||||||
|
|
||||||
Identified three possible fix strategies:
|
|
||||||
1. Let `run_ctx` own lock release/pop after
|
|
||||||
`resources.pop()`
|
|
||||||
2. Add a "teardown complete" event that new callers
|
|
||||||
wait on
|
|
||||||
3. Move `resources.pop()` into `run_ctx`'s inner
|
|
||||||
finally (before acm `__aexit__`)
|
|
||||||
|
|
||||||
## Files changed
|
|
||||||
|
|
||||||
No code changes — analysis only.
|
|
||||||
|
|
||||||
## Human edits
|
|
||||||
|
|
||||||
N/A — diagnostic output, no code generated.
|
|
||||||
|
|
@ -1,32 +0,0 @@
|
||||||
---
|
|
||||||
model: claude-opus-4-6
|
|
||||||
service: claude
|
|
||||||
timestamp: 2026-04-06T22:26:48Z
|
|
||||||
git_ref: f086222
|
|
||||||
diff_cmd: git diff f086222~1..f086222
|
|
||||||
---
|
|
||||||
|
|
||||||
Here's the problem. The per-key locking doesn't fix this race because the **lock lifetime is decoupled from `run_ctx`'s lifetime**. Trace through what happens:
|
|
||||||
|
|
||||||
**`maybe_open_context` finally** (lines 468-495):
|
|
||||||
1. `users[ctx_key] -= 1` → 0
|
|
||||||
2. `no_more_users.set()` → wakes `run_ctx` (but it doesn't run yet, just becomes ready)
|
|
||||||
3. `_Cache.locks.pop(ctx_key)` → **lock is gone**
|
|
||||||
4. `maybe_open_context` exits
|
|
||||||
|
|
||||||
**`run_ctx`** now gets scheduled (lines 237-241):
|
|
||||||
5. Inner finally: `values.pop(ctx_key)` → values gone
|
|
||||||
6. `mng.__aexit__()` runs → `in_aexit.set()`, `await trio.sleep(10)` → **CHECKPOINT**
|
|
||||||
|
|
||||||
**`reenter_during_teardown`** wakes:
|
|
||||||
7. Enters `maybe_open_context` → no lock exists for `ctx_key` → creates **fresh** lock, acquires immediately
|
|
||||||
8. `values[ctx_key]` → `KeyError`
|
|
||||||
9. `resources.get(ctx_key)` → **still exists!** (step 6 hasn't reached `resources.pop()` in `run_ctx`'s outer finally yet)
|
|
||||||
10. `raise RuntimeError('Caching resources ALREADY exist?!')`
|
|
||||||
|
|
||||||
The core issue: `no_more_users.set()` (step 2) and `locks.pop()` (step 3) happen in the **exiting caller's task**, but the actual resource cleanup (`values.pop`, acm `__aexit__`, `resources.pop`) happens in the **`run_ctx` task** inside `service_tn`. There's no synchronization between "lock removed, new callers welcome" and "run_ctx has actually finished tearing down". The per-`ctx_key` lock change just made the lock more granular — it didn't close this gap.
|
|
||||||
|
|
||||||
The fix needs to ensure a new caller can't acquire a lock for `ctx_key` until `run_ctx` has fully exited (including `resources.pop()`). Either:
|
|
||||||
- **don't pop the lock in `maybe_open_context`** — let `run_ctx` release/pop it after `resources.pop()`
|
|
||||||
- **wait on a "teardown complete" event** in the cache-miss path when stale `resources` exist
|
|
||||||
- **pop `resources` before the acm `__aexit__`** (move `resources.pop()` into the inner finally alongside `values.pop()`); a toy sketch of this ordering follows below
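
A self-contained toy (not tractor's `_Cache` code) illustrating the
ordering principle behind that third option: drop both bookkeeping
entries before any teardown checkpoint, so a re-entering caller can
never observe the half-torn-down state. All names here are made up for
the illustration.

```python
import trio


class ToyCache:
    values: dict = {}
    resources: dict = {}


async def run_ctx(key: str, teardown_started: trio.Event):
    ToyCache.values[key] = 'value'
    ToyCache.resources[key] = 'resource'
    try:
        await trio.sleep(0.05)  # stand-in for "serve users until done"
    finally:
        # option (3): drop BOTH entries before the (checkpointing)
        # async teardown, so re-entrant callers see a clean cache miss
        ToyCache.values.pop(key, None)
        ToyCache.resources.pop(key, None)
        teardown_started.set()
        await trio.sleep(0.05)  # stand-in for `mng.__aexit__()`


async def reenter(key: str, teardown_started: trio.Event):
    await teardown_started.wait()
    # with the reordering there is no stale `resources` entry to trip on
    assert key not in ToyCache.values
    assert key not in ToyCache.resources


async def main():
    ev = trio.Event()
    async with trio.open_nursery() as n:
        n.start_soon(run_ctx, 'ctx_key', ev)
        n.start_soon(reenter, 'ctx_key', ev)


trio.run(main)
```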
|
|
||||||
|
|
@ -1,74 +0,0 @@
|
||||||
---
|
|
||||||
model: claude-opus-4-6
|
|
||||||
service: claude
|
|
||||||
session: (ad-hoc, not tracked via conf.toml)
|
|
||||||
timestamp: 2026-04-13T19:21:16Z
|
|
||||||
git_ref: f851f28
|
|
||||||
scope: code
|
|
||||||
substantive: true
|
|
||||||
---
|
|
||||||
|
|
||||||
## Prompt
|
|
||||||
|
|
||||||
User provided a detailed implementation plan for
|
|
||||||
a new `test_tpt_bind_addrs.py` test suite covering
|
|
||||||
`open_root_actor(tpt_bind_addrs=...)` — the three
|
|
||||||
runtime code paths in `_root.py:385-450`:
|
|
||||||
|
|
||||||
1. Non-registrar, no explicit bind -> random addrs
|
|
||||||
2. Registrar, no explicit bind -> registry_addrs
|
|
||||||
3. Explicit bind given -> `wrap_address()` + merge
|
|
||||||
|
|
||||||
Plan specified 6 test functions (~10 parametrized
|
|
||||||
variants), predicted a type-mixing bug at line 446,
|
|
||||||
and asked for an in-flight fix if confirmed.
|
|
||||||
|
|
||||||
## Response summary
|
|
||||||
|
|
||||||
Created `tests/discovery/test_tpt_bind_addrs.py`
|
|
||||||
with 9 collected test variants across 6 functions:
|
|
||||||
|
|
||||||
- `test_registrar_root_tpt_bind_addrs` (3 variants:
|
|
||||||
`bind-eq-reg`, `bind-subset-reg`,
|
|
||||||
`bind-disjoint-reg`)
|
|
||||||
- `test_non_registrar_root_tpt_bind_addrs`
|
|
||||||
(2 variants: `bind-same-bindspace`,
|
|
||||||
`bind-disjoint`)
|
|
||||||
- `test_non_registrar_default_random_bind`
|
|
||||||
(baseline, no explicit bind)
|
|
||||||
- `test_tpt_bind_addrs_as_maddr_str`
|
|
||||||
(multiaddr string input)
|
|
||||||
- `test_registrar_merge_binds_union`
|
|
||||||
(registrar + disjoint bind -> union)
|
|
||||||
- `test_open_nursery_forwards_tpt_bind_addrs`
|
|
||||||
(`open_nursery(**kwargs)` forwarding)
|
|
||||||
|
|
||||||
Confirmed and fixed the predicted bug at
|
|
||||||
`_root.py:446`: the registrar merge path mixed
|
|
||||||
`Address` objects (`tpt_bind_addrs`) with raw tuples
|
|
||||||
(`uw_reg_addrs`) inside `set()`, preventing
|
|
||||||
deduplication and causing double-bind `OSError`.
|
|
||||||
|
|
||||||
Fix: wrap `uw_reg_addrs` before the set union:
|
|
||||||
```python
|
|
||||||
# before (broken)
|
|
||||||
tpt_bind_addrs = list(set(
|
|
||||||
tpt_bind_addrs + uw_reg_addrs
|
|
||||||
))
|
|
||||||
# after (fixed)
|
|
||||||
tpt_bind_addrs = list(set(
|
|
||||||
tpt_bind_addrs
|
|
||||||
+ [wrap_address(a) for a in uw_reg_addrs]
|
|
||||||
))
|
|
||||||
```
|
|
||||||
|
|
||||||
All 9 tests pass after the fix.
|
|
||||||
|
|
||||||
## Files changed
|
|
||||||
|
|
||||||
- `tests/discovery/test_tpt_bind_addrs.py` (new)
|
|
||||||
- `tractor/_root.py:446` (bug fix, 1 line)
|
|
||||||
|
|
||||||
## Human edits
|
|
||||||
|
|
||||||
N/A — pending review.
|
|
||||||
|
|
@ -1,50 +0,0 @@
|
||||||
---
|
|
||||||
model: claude-opus-4-6
|
|
||||||
service: claude
|
|
||||||
session: 76154e65-d8e1-4b5f-9275-0ea45ba7e98a
|
|
||||||
timestamp: 2026-04-13T20:50:48Z
|
|
||||||
git_ref: 269d939c
|
|
||||||
scope: code
|
|
||||||
substantive: true
|
|
||||||
raw_file: 20260413T205048Z_269d939c_prompt_io.raw.md
|
|
||||||
---
|
|
||||||
|
|
||||||
## Prompt
|
|
||||||
|
|
||||||
Implement a `parse_endpoints()` API in
|
|
||||||
`tractor.discovery._multiaddr` that lets downstream
|
|
||||||
projects declare per-actor transport bind addresses
|
|
||||||
as a service table of actor-name -> multiaddr
|
|
||||||
strings (loaded from config, e.g. TOML `[network]`
|
|
||||||
section). Add type aliases `EndpointsTable` and
|
|
||||||
`ParsedEndpoints`, plus 7 unit tests covering TCP,
|
|
||||||
UDS, tuples, mixed input, unsupported protocols, and
|
|
||||||
edge cases.
|
|
||||||
|
|
||||||
A detailed implementation plan was provided
|
|
||||||
specifying insertion points, type signatures, test
|
|
||||||
names, and verification commands.
|
|
||||||
|
|
||||||
## Response summary
|
|
||||||
|
|
||||||
Add `parse_endpoints()` function with two supporting
|
|
||||||
type aliases to `_multiaddr.py`. The function
|
|
||||||
iterates a `dict[str, list[str|tuple]]` service
|
|
||||||
table and delegates each entry to the existing
|
|
||||||
`wrap_address()` helper, which already handles maddr
|
|
||||||
strings, raw tuples, and pre-wrapped `Address`
|
|
||||||
objects. Seven pure unit tests (no trio runtime)
|
|
||||||
validate all input variants and edge cases. All 22
|
|
||||||
multiaddr tests pass.
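
A hedged sketch of the helper's likely shape based on this summary. The
import location of `wrap_address`/`Address` and the exact alias
definitions are assumptions, not the committed `_multiaddr.py` code.

```python
from tractor._addr import (  # assumed import location
    Address,
    wrap_address,
)

# input: actor-name -> list of maddr strs, raw tuples or Address objs
EndpointsTable = dict[str, list[str | tuple | Address]]
# output: actor-name -> list of typed Address objects
ParsedEndpoints = dict[str, list[Address]]


def parse_endpoints(table: EndpointsTable) -> ParsedEndpoints:
    '''
    Wrap every entry of a per-actor endpoints table via the existing
    `wrap_address()` helper, which already handles maddr strings, raw
    tuples and pre-wrapped `Address` instances (and raises for
    unsupported protocols).

    '''
    return {
        actor_name: [wrap_address(entry) for entry in entries]
        for actor_name, entries in table.items()
    }


# example usage with a piker-style `[network]` table:
# parse_endpoints({'pikerd': ['/ip4/127.0.0.1/tcp/6116']})
```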
|
|
||||||
|
|
||||||
## Files changed
|
|
||||||
|
|
||||||
- `tractor/discovery/_multiaddr.py` — add
|
|
||||||
`EndpointsTable`, `ParsedEndpoints` type aliases
|
|
||||||
and `parse_endpoints()` function after line 133
|
|
||||||
- `tests/discovery/test_multiaddr.py` — add
|
|
||||||
`parse_endpoints` import and 7 new test functions
|
|
||||||
|
|
||||||
## Human edits
|
|
||||||
|
|
||||||
None — committed as generated.
|
|
||||||
|
|
@ -1,32 +0,0 @@
|
||||||
---
|
|
||||||
model: claude-opus-4-6
|
|
||||||
service: claude
|
|
||||||
timestamp: 2026-04-13T20:50:48Z
|
|
||||||
git_ref: 269d939c
|
|
||||||
diff_cmd: git diff 269d939c~1..269d939c
|
|
||||||
---
|
|
||||||
|
|
||||||
## Implementation output
|
|
||||||
|
|
||||||
### `tractor/discovery/_multiaddr.py`
|
|
||||||
|
|
||||||
> `git diff 269d939c~1..269d939c -- tractor/discovery/_multiaddr.py`
|
|
||||||
|
|
||||||
Type aliases `EndpointsTable` and `ParsedEndpoints`
|
|
||||||
plus `parse_endpoints()` helper that wraps each
|
|
||||||
entry via `wrap_address()`.
|
|
||||||
|
|
||||||
### `tests/discovery/test_multiaddr.py`
|
|
||||||
|
|
||||||
> `git diff 269d939c~1..269d939c -- tests/discovery/test_multiaddr.py`
|
|
||||||
|
|
||||||
Add import of `parse_endpoints` to existing import
|
|
||||||
block, then 7 new test functions.
|
|
||||||
|
|
||||||
### Test run output
|
|
||||||
|
|
||||||
```
|
|
||||||
22 passed, 1 warning in 0.05s
|
|
||||||
```
|
|
||||||
|
|
||||||
All 22 tests pass (15 existing + 7 new).
|
|
||||||
|
|
@ -1,38 +0,0 @@
|
||||||
---
|
|
||||||
model: claude-opus-4-6
|
|
||||||
service: claude
|
|
||||||
session: multiaddr-support-rename-prefer
|
|
||||||
timestamp: 2026-04-14T16:33:00Z
|
|
||||||
git_ref: befedc49
|
|
||||||
scope: code
|
|
||||||
substantive: true
|
|
||||||
raw_file: 20260414T163300Z_befedc49_prompt_io.raw.md
|
|
||||||
---
|
|
||||||
|
|
||||||
## Prompt
|
|
||||||
|
|
||||||
Create a helper function that determines the best transport given
|
|
||||||
actor locality (distributed vs same host). Use PID/hostname
|
|
||||||
comparison for locality detection, apply at registry addr selection
|
|
||||||
only (not spawn-time).
|
|
||||||
|
|
||||||
## Response summary
|
|
||||||
|
|
||||||
New `prefer_addr()` + `_is_local_addr()` helpers
|
|
||||||
in `_api.py` using `socket.getaddrinfo()` and
|
|
||||||
`ipaddress` for PID/hostname locality detection.
|
|
||||||
Preference: UDS > local TCP > remote TCP.
|
|
||||||
Integrated into `query_actor()` and
|
|
||||||
`wait_for_actor()`. Also changed
|
|
||||||
`Registrar.find_actor()` to return full addr list
|
|
||||||
so callers can apply preference.
|
|
||||||
|
|
||||||
## Files changed
|
|
||||||
|
|
||||||
- `tractor/discovery/_discovery.py` → `_api.py`
|
|
||||||
— renamed + added `prefer_addr()`,
|
|
||||||
`_is_local_addr()`; updated `query_actor()` and
|
|
||||||
`wait_for_actor()` call sites
|
|
||||||
- `tractor/discovery/_registry.py`
|
|
||||||
— `Registrar.find_actor()` returns
|
|
||||||
`list[UnwrappedAddress]|None`
|
|
||||||
|
|
@ -1,62 +0,0 @@
|
||||||
---
|
|
||||||
model: claude-opus-4-6
|
|
||||||
service: claude
|
|
||||||
timestamp: 2026-04-14T16:33:00Z
|
|
||||||
git_ref: befedc49
|
|
||||||
diff_cmd: git diff befedc49~1..befedc49
|
|
||||||
---
|
|
||||||
|
|
||||||
### `tractor/discovery/_api.py`
|
|
||||||
|
|
||||||
> `git diff befedc49~1..befedc49 -- tractor/discovery/_api.py`
|
|
||||||
|
|
||||||
Add `_is_local_addr()` and `prefer_addr()` transport
|
|
||||||
preference helpers.
|
|
||||||
|
|
||||||
#### `_is_local_addr(addr: Address) -> bool`
|
|
||||||
|
|
||||||
Determines whether an `Address` is reachable on the
|
|
||||||
local host:
|
|
||||||
|
|
||||||
- `UDSAddress`: always returns `True`
|
|
||||||
(filesystem-bound, inherently local)
|
|
||||||
- `TCPAddress`: checks if `._host` is a loopback IP
|
|
||||||
via `ipaddress.ip_address().is_loopback`, then
|
|
||||||
falls back to comparing against the machine's own
|
|
||||||
interface IPs via
|
|
||||||
`socket.getaddrinfo(socket.gethostname(), None)`
|
|
||||||
|
|
||||||
#### `prefer_addr(addrs: list[UnwrappedAddress]) -> UnwrappedAddress`
|
|
||||||
|
|
||||||
Selects the "best" transport address from a
|
|
||||||
multihomed actor's address list. Wraps each
|
|
||||||
candidate via `wrap_address()` to get typed
|
|
||||||
`Address` objects, then classifies into three tiers:
|
|
||||||
|
|
||||||
1. **UDS** (same-host guaranteed, lowest overhead)
|
|
||||||
2. **TCP loopback / same-host IP** (local network)
|
|
||||||
3. **TCP remote** (only option for distributed)
|
|
||||||
|
|
||||||
Within each tier, the last-registered (latest) entry
|
|
||||||
is preferred. Falls back to `addrs[-1]` if no
|
|
||||||
heuristic matches.
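
A hedged reconstruction of the two helpers from the prose above. The
class names, the private `._host` attribute and the import path are
taken from this summary and are assumptions, not the committed
`_api.py` code.

```python
import ipaddress
import socket

from tractor._addr import (  # assumed module path
    Address,
    TCPAddress,
    UDSAddress,
    UnwrappedAddress,
    wrap_address,
)


def _is_local_addr(addr: Address) -> bool:
    # UDS sockets are filesystem-bound, so inherently local
    if isinstance(addr, UDSAddress):
        return True

    if isinstance(addr, TCPAddress):
        try:
            if ipaddress.ip_address(addr._host).is_loopback:
                return True
        except ValueError:
            pass  # not a literal IP, fall through to hostname lookup

        # compare against this machine's own interface IPs
        local_ips = {
            info[4][0]
            for info in socket.getaddrinfo(socket.gethostname(), None)
        }
        return addr._host in local_ips

    return False


def prefer_addr(addrs: list[UnwrappedAddress]) -> UnwrappedAddress:
    uds: list = []
    local_tcp: list = []
    for unwrapped in addrs:
        wrapped = wrap_address(unwrapped)
        if isinstance(wrapped, UDSAddress):
            uds.append(unwrapped)
        elif _is_local_addr(wrapped):
            local_tcp.append(unwrapped)

    # tiers: UDS > local TCP > remote TCP; last-registered wins per tier
    for tier in (uds, local_tcp):
        if tier:
            return tier[-1]

    return addrs[-1]
```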
|
|
||||||
|
|
||||||
### `tractor/discovery/_registry.py`
|
|
||||||
|
|
||||||
> `git diff befedc49~1..befedc49 -- tractor/discovery/_registry.py`
|
|
||||||
|
|
||||||
`Registrar.find_actor()` return type broadened from
|
|
||||||
single addr to `list[UnwrappedAddress]|None` — full
|
|
||||||
addr list lets callers apply transport preference.
|
|
||||||
|
|
||||||
#### Integration
|
|
||||||
|
|
||||||
`query_actor()` and `wait_for_actor()` now call
|
|
||||||
`prefer_addr(addrs)` instead of `addrs[-1]`.
|
|
||||||
|
|
||||||
### Verification
|
|
||||||
|
|
||||||
All discovery tests pass (13/13 non-daemon).
|
|
||||||
`test_local.py` and `test_multi_program.py` also
|
|
||||||
pass (daemon fixture teardown failures are
|
|
||||||
pre-existing and unrelated).
|
|
||||||
|
|
@ -1,101 +0,0 @@
|
||||||
---
|
|
||||||
model: claude-opus-4-7[1m]
|
|
||||||
service: claude
|
|
||||||
session: subints-spawner-design-kickoff
|
|
||||||
timestamp: 2026-04-17T03:49:18Z
|
|
||||||
git_ref: 9703210
|
|
||||||
scope: docs
|
|
||||||
substantive: true
|
|
||||||
raw_file: 20260417T034918Z_9703210_prompt_io.raw.md
|
|
||||||
---
|
|
||||||
|
|
||||||
## Prompt
|
|
||||||
|
|
||||||
Drive the "first big boi, from GH issue" task seeded by
|
|
||||||
`ai/prompt-io/prompts/subints_spawner.md`: design, plan
|
|
||||||
and implement sub-interpreter (subint) spawn-backend
|
|
||||||
support per issue #379, including (1) modularizing
|
|
||||||
`tractor.spawn._spawn` into per-backend submods, (2) a new
|
|
||||||
`._subint` backend, and (3) harness parametrization via the
|
|
||||||
existing `--spawn-backend` / `start_method` pytest fixture
|
|
||||||
in `tractor._testing.pytest`.
|
|
||||||
|
|
||||||
Follow-up clarifications from the user (this turn):
|
|
||||||
1. Pin `<3.15` on this dev branch and feature-gate subint
|
|
||||||
tests — chose option (a).
|
|
||||||
2. Split Phase A (modularization) into its own PR first.
|
|
||||||
3. Defer the `fork()`-via-subint hack to a follow-up.
|
|
||||||
4. Harness flag is `pytest --spawn-backend <key>` CLI →
|
|
||||||
`start_method` fixture (prompt file updated to match).
|
|
||||||
|
|
||||||
## Response summary
|
|
||||||
|
|
||||||
Produced a three-phase plan and a concrete Phase A (pure
|
|
||||||
modularization) file-split plan for user review; no code
|
|
||||||
written yet — the green-light to start Phase A was given
|
|
||||||
in this same turn conditional on logging this prompt-io
|
|
||||||
entry first.
|
|
||||||
|
|
||||||
Phases:
|
|
||||||
- **A — modularize** `tractor/spawn/_spawn.py` (847 LOC):
|
|
||||||
keep generic machinery in `_spawn.py`, extract
|
|
||||||
`trio_proc` → `spawn/_trio.py`, `mp_proc` →
|
|
||||||
`spawn/_mp.py`. No pin bump.
|
|
||||||
- **B — `_subint` backend**: bump `pyproject.toml`
|
|
||||||
`requires-python` upper to `<3.15`; add `'subint'` to
|
|
||||||
`SpawnMethodKey`; reuse existing UDS transport; shm
|
|
||||||
escape-hatch deferred.
|
|
||||||
- **C — harness**: drive the valid-backend tuple in
|
|
||||||
`tractor/_testing/pytest.py:345-349` from
|
|
||||||
`typing.get_args(SpawnMethodKey)`; skip subint tests on
|
|
||||||
Python < 3.14.
|
|
||||||
|
|
||||||
Key findings surfaced to the user:
|
|
||||||
- `pyproject.toml:12` currently pins `<3.14`; PEP 734
|
|
||||||
`concurrent.interpreters` only ships in 3.14 — the
|
|
||||||
load-bearing constraint.
|
|
||||||
- `_testing/pytest.py:345-349` hardcodes valid backends
|
|
||||||
as a string tuple (`'mp_spawn'`, `'mp_forkserver'`,
|
|
||||||
`'trio'`) — should be `get_args(SpawnMethodKey)`.
|
|
||||||
- `_testing/pytest.py:228` already imports
|
|
||||||
`try_set_start_method` from `tractor.spawn._spawn` —
|
|
||||||
keeping the `_spawn.py` path as the "core" module
|
|
||||||
avoids breaking external refs during Phase A.
|
|
||||||
|
|
||||||
Phase A commit breakdown (each green via `/run-tests`):
|
|
||||||
1. Drive `_testing/pytest.py` backend-tuple from
|
|
||||||
`get_args(SpawnMethodKey)` — prep.
|
|
||||||
2. Extract `trio_proc` → `spawn/_trio.py`; `_methods`
|
|
||||||
registry imports it.
|
|
||||||
3. Extract `mp_proc` → `spawn/_mp.py`; `_methods` imports
|
|
||||||
it.
|
|
||||||
4. Tidy `spawn/__init__.py` docstring for the new layout.
|
|
||||||
|
|
||||||
Workflow: Phase A runs in a new worktree via `/open-wkt
|
|
||||||
spawn_modularize` (branch `wkt/spawn_modularize`) and
|
|
||||||
lands as its own PR before Phase B/C returns to the
|
|
||||||
current `claude_bun_opus4.7_subint_spawner_backend`
|
|
||||||
branch.
|
|
||||||
|
|
||||||
## Files changed
|
|
||||||
|
|
||||||
None this turn — design/planning only. The user updated
|
|
||||||
`ai/prompt-io/prompts/subints_spawner.md` themselves to
|
|
||||||
complete the truncated sentence on line 32 (harness-flag
|
|
||||||
reference).
|
|
||||||
|
|
||||||
Upcoming (Phase A, next turn):
|
|
||||||
- `tractor/spawn/_spawn.py` — trimmed to "core"
|
|
||||||
- `tractor/spawn/_trio.py` — new; holds `trio_proc()`
|
|
||||||
- `tractor/spawn/_mp.py` — new; holds `mp_proc()`
|
|
||||||
- `tractor/spawn/__init__.py` — docstring refresh
|
|
||||||
- `tractor/_testing/pytest.py` — backend tuple from
|
|
||||||
`get_args(SpawnMethodKey)`
|
|
||||||
|
|
||||||
## Human edits
|
|
||||||
|
|
||||||
None — committed as generated. User completed the
|
|
||||||
truncated sentence on line 32 of
|
|
||||||
`ai/prompt-io/prompts/subints_spawner.md` themselves
|
|
||||||
*before* this prompt-io log was written; that edit was
|
|
||||||
the user's authoring, not an edit to AI output.
|
|
||||||
|
|
@ -1,117 +0,0 @@
|
||||||
---
|
|
||||||
model: claude-opus-4-7[1m]
|
|
||||||
service: claude
|
|
||||||
timestamp: 2026-04-17T03:49:18Z
|
|
||||||
git_ref: 9703210
|
|
||||||
diff_cmd: git diff HEAD~1..HEAD
|
|
||||||
---
|
|
||||||
|
|
||||||
No code was produced in this turn — this is a design/planning
|
|
||||||
exchange for subint spawner support per issue #379. Captured
|
|
||||||
verbatim below are the two substantive AI responses, which
|
|
||||||
together fix the phased plan (A: modularize, B: subint, C:
|
|
||||||
harness) and the concrete Phase A file split before any
|
|
||||||
implementation begins.
|
|
||||||
|
|
||||||
## Turn 1 — initial triage + gating questions
|
|
||||||
|
|
||||||
> Read: issue #379, current `spawn/_spawn.py` (847 lines),
|
|
||||||
> and pyproject pins.
|
|
||||||
>
|
|
||||||
> ## Task summary
|
|
||||||
> Design/plan/implement subint backend per #379, plus
|
|
||||||
> modularize `spawn/_spawn.py` into per-backend submods, plus
|
|
||||||
> harness flag for test parametrization.
|
|
||||||
>
|
|
||||||
> ## Key finding
|
|
||||||
> `pyproject.toml:12` pins `requires-python = ">=3.12, <3.14"`
|
|
||||||
> — but `concurrent.interpreters` / PEP 734 **only lands in
|
|
||||||
> 3.14**. This is the load-bearing design question before
|
|
||||||
> coding.
|
|
||||||
>
|
|
||||||
> ## Proposed phasing (incremental commits)
|
|
||||||
> - **Phase A — modularize** `_spawn.py`: extract `trio_proc`
|
|
||||||
> → `spawn/_trio.py`, `mp_proc` → `spawn/_mp.py`, keep
|
|
||||||
> generic helpers (`hard_kill`/`soft_kill`/`new_proc`
|
|
||||||
> dispatcher/`SpawnMethodKey`) in a slim core. Tests green
|
|
||||||
> at each step.
|
|
||||||
> - **Phase B — `_subint` backend**: extend `SpawnMethodKey`
|
|
||||||
> with `'subint'`; reuse UDS transport by default; shm is
|
|
||||||
> escape-hatch follow-up.
|
|
||||||
> - **Phase C — harness**: add backend-parametrize flag, skip
|
|
||||||
> subint when Python < 3.14.
|
|
||||||
>
|
|
||||||
> ## Gating questions before I start
|
|
||||||
> 1. **Python pin**: bump `<3.15` and feature-gate subint
|
|
||||||
> tests (a), require 3.14 outright (b), or backport shim
|
|
||||||
> (c)? I'd recommend (a).
|
|
||||||
> 2. **PR strategy**: one branch, commits per phase (current
|
|
||||||
> `claude_bun_opus4.7_subint_spawner_backend`)? Or split
|
|
||||||
> Phase A as its own PR first?
|
|
||||||
> 3. **`fork()`-via-subint hack** from the issue's "Our own
|
|
||||||
> thoughts" section — v1 or follow-up? I'd defer to
|
|
||||||
> follow-up.
|
|
||||||
> 4. **Prompt file line 32 is truncated** — what's the
|
|
||||||
> existing harness flag name you had in mind ("using the
|
|
||||||
> existing ___")?
|
|
||||||
>
|
|
||||||
> Which of these are already decided vs. open?
|
|
||||||
|
|
||||||
## User answers (for the raw record)
|
|
||||||
|
|
||||||
1. bump `<3.15`, feature-gate — option (a).
|
|
||||||
2. split Phase A as its own PR first.
|
|
||||||
3. defer `fork()`-via-subint hack.
|
|
||||||
4. harness flag is `--spawn-backend` CLI → `start_method`
|
|
||||||
fixture, per `tractor._testing._pytest`. Prompt file
|
|
||||||
updated accordingly.
|
|
||||||
|
|
||||||
## Turn 2 — concrete Phase A plan
|
|
||||||
|
|
||||||
> Good — prompt file now ends cleanly; harness flag
|
|
||||||
> confirmed as `--spawn-backend` CLI → `start_method`
|
|
||||||
> fixture (via `_testing/pytest.py:193,355`). Also noted
|
|
||||||
> `pytest.py:345-349` hardcodes valid backends as a string
|
|
||||||
> tuple — `typing.get_args(SpawnMethodKey)` should drive
|
|
||||||
> that list.
|
|
||||||
>
|
|
||||||
> ## Phase A — concrete plan (separate PR)
|
|
||||||
>
|
|
||||||
> **File split** in `tractor/spawn/`:
|
|
||||||
> - `_spawn.py` stays as "core": keeps `SpawnMethodKey`,
|
|
||||||
> `_spawn_method`, `proc_waiter`, `try_set_start_method`,
|
|
||||||
> `exhaust_portal`, `cancel_on_completion`, `hard_kill`,
|
|
||||||
> `soft_kill`, `new_proc` dispatcher, `_methods`
|
|
||||||
> registry. Kept at this path so `_testing/pytest.py:228`
|
|
||||||
> and other external imports don't break.
|
|
||||||
> - `_trio.py` ← `trio_proc()` (lines 448–671, ~224 lines)
|
|
||||||
> - `_mp.py` ← `mp_proc()` (lines 672–842, ~171 lines)
|
|
||||||
> - `_methods` dict imports the two backend fns from the
|
|
||||||
> new modules.
|
|
||||||
>
|
|
||||||
> **Commits** (each green via `/run-tests`):
|
|
||||||
> 1. Drive `_testing/pytest.py` backend-tuple from
|
|
||||||
> `get_args(SpawnMethodKey)` — tiny prep.
|
|
||||||
> 2. Extract `trio_proc` → `spawn/_trio.py`; `_methods`
|
|
||||||
> imports it.
|
|
||||||
> 3. Extract `mp_proc` → `spawn/_mp.py`; `_methods`
|
|
||||||
> imports it.
|
|
||||||
> 4. Tidy `spawn/__init__.py` docstring to describe new
|
|
||||||
> layout.
|
|
||||||
>
|
|
||||||
> No pyproject pin bump in Phase A (that's Phase B).
|
|
||||||
>
|
|
||||||
> **Workflow**: I'd run this in a new worktree via
|
|
||||||
> `/open-wkt spawn_modularize` (branch
|
|
||||||
> `wkt/spawn_modularize`), then open a PR before returning
|
|
||||||
> to the current `claude_bun_opus4.7_subint_spawner_backend`
|
|
||||||
> branch for Phase B/C.
|
|
||||||
>
|
|
||||||
> Green-light to `/open-wkt` and start Phase A?
|
|
||||||
|
|
||||||
## User final response
|
|
||||||
|
|
||||||
> all this first then yes, green light.
|
|
||||||
|
|
||||||
— log this `/prompt-io` entry first, then proceed with
|
|
||||||
Phase A in a worktree.
|
|
||||||
|
|
@ -1,86 +0,0 @@
|
||||||
---
|
|
||||||
model: claude-opus-4-7[1m]
|
|
||||||
service: claude
|
|
||||||
session: subints-spawner-phase-a-impl
|
|
||||||
timestamp: 2026-04-17T03:58:05Z
|
|
||||||
git_ref: 61a73ba
|
|
||||||
scope: code
|
|
||||||
substantive: true
|
|
||||||
raw_file: 20260417T035800Z_61a73ba_prompt_io.raw.md
|
|
||||||
---
|
|
||||||
|
|
||||||
## Prompt
|
|
||||||
|
|
||||||
Follow-up from the design session logged in
|
|
||||||
`20260417T034918Z_9703210_prompt_io.md`: the user
|
|
||||||
greenlit Phase A after `/prompt-io` tracking, so
|
|
||||||
execute the Phase A "modularize `spawn._spawn`" plan
|
|
||||||
end-to-end in a new worktree — four logical steps
|
|
||||||
(pytest harness prep, `trio_proc` extraction,
|
|
||||||
`mp_proc` extraction, docstring refresh), each green
|
|
||||||
via `/run-tests`.
|
|
||||||
|
|
||||||
User directives during execution:
|
|
||||||
- **Option 3** for the test cadence ("continue A.2–A.4
|
|
||||||
first, then run the full suite once at the end of
|
|
||||||
Phase A").
|
|
||||||
- **One commit** for the whole phase ("can't we just
|
|
||||||
commit the whole patch in one commit?") instead of
|
|
||||||
the 3/4-commit split I initially proposed.
|
|
||||||
- **Don't pre-draft** commit messages — wait for the
|
|
||||||
user to invoke `/commit-msg` (captured as feedback
|
|
||||||
memory `feedback_no_auto_draft_commit_msgs.md`).
|
|
||||||
|
|
||||||
## Response summary
|
|
||||||
|
|
||||||
Produced the cohesive Phase A modularization patch,
|
|
||||||
landed as commit `61a73bae` (subject: "Mv
|
|
||||||
`trio_proc`/`mp_proc` to per-backend submods"). Five
|
|
||||||
files changed, +565 / -418 lines.
|
|
||||||
|
|
||||||
Key pieces of the patch (generated by claude,
|
|
||||||
reviewed by the human before commit):
|
|
||||||
- `tractor/spawn/_trio.py` — **new**; receives
|
|
||||||
`trio_proc()` verbatim from `_spawn.py`; imports
|
|
||||||
cross-backend helpers back from `._spawn`.
|
|
||||||
- `tractor/spawn/_mp.py` — **new**; receives
|
|
||||||
`mp_proc()` verbatim; uses `from . import _spawn`
|
|
||||||
for late-binding access to the mutable `_ctx` /
|
|
||||||
`_spawn_method` globals (design decision made
|
|
||||||
during impl, not the original plan).
|
|
||||||
- `tractor/spawn/_spawn.py` — shrunk 847 → 448 LOC;
|
|
||||||
import pruning; bottom-of-module late imports for
|
|
||||||
`trio_proc` / `mp_proc` with a one-line comment
|
|
||||||
explaining the circular-dep reason.
|
|
||||||
- `tractor/spawn/__init__.py` — docstring refresh
|
|
||||||
describing the new layout.
|
|
||||||
- `tractor/_testing/pytest.py` — the valid-backend
|
|
||||||
set now comes from `typing.get_args(SpawnMethodKey)`
|
|
||||||
so future additions (`'subint'`) don't need harness
|
|
||||||
edits.
|
|
||||||
|
|
||||||
## Files changed
|
|
||||||
|
|
||||||
See `git diff 61a73ba~1..61a73ba --stat`:
|
|
||||||
|
|
||||||
```
|
|
||||||
tractor/_testing/pytest.py | 12 +-
|
|
||||||
tractor/spawn/__init__.py | 31 +++-
|
|
||||||
tractor/spawn/_mp.py | 235 ++++++++++++++++++++++++
|
|
||||||
tractor/spawn/_spawn.py | 413 +-------------------------------
|
|
||||||
tractor/spawn/_trio.py | 292 ++++++++++++++++++++++++++++
|
|
||||||
5 files changed, 565 insertions(+), 418 deletions(-)
|
|
||||||
```
|
|
||||||
|
|
||||||
Validation:
|
|
||||||
- import probe + `_methods` wiring check — OK
|
|
||||||
- spawn-relevant test subset — 37 passed, 1 skipped
|
|
||||||
- full suite — 350 passed, 14 skipped, 7 xfailed, 1
|
|
||||||
xpassed
|
|
||||||
|
|
||||||
## Human edits
|
|
||||||
|
|
||||||
None — committed as generated by claude (no diff
|
|
||||||
between `.claude/git_commit_msg_LATEST.md` and the
|
|
||||||
committed body, as far as the assistant could
|
|
||||||
observe).
|
|
||||||
|
|
@ -1,138 +0,0 @@
|
||||||
---
|
|
||||||
model: claude-opus-4-7[1m]
|
|
||||||
service: claude
|
|
||||||
timestamp: 2026-04-17T03:58:05Z
|
|
||||||
git_ref: 61a73ba
|
|
||||||
diff_cmd: git diff 61a73ba~1..61a73ba
|
|
||||||
---
|
|
||||||
|
|
||||||
Code generated in this turn was committed verbatim as
|
|
||||||
`61a73bae` ("Mv `trio_proc`/`mp_proc` to per-backend
|
|
||||||
submods"). Per diff-ref mode, per-file code is captured
|
|
||||||
via the pointers below, each followed by a prose
|
|
||||||
summary of what the AI generated. Non-code output
|
|
||||||
(sanity-check results, design rationale) is included
|
|
||||||
verbatim.
|
|
||||||
|
|
||||||
## Per-file generated content
|
|
||||||
|
|
||||||
### `tractor/spawn/_trio.py` (new, 292 lines)
|
|
||||||
|
|
||||||
> `git diff 61a73ba~1..61a73ba -- tractor/spawn/_trio.py`
|
|
||||||
|
|
||||||
Pure lift-and-shift of `trio_proc()` out of
|
|
||||||
`tractor/spawn/_spawn.py` (previously lines 448–670).
|
|
||||||
Added AGPL header + module docstring describing the
|
|
||||||
backend; imports include local `from ._spawn import
|
|
||||||
cancel_on_completion, hard_kill, soft_kill` which
|
|
||||||
creates the bottom-of-module late-import pattern in
|
|
||||||
the core file to avoid a cycle. All call sites,
|
|
||||||
log-format strings, and body logic are byte-identical
|
|
||||||
to the originals — no semantic change.
|
|
||||||
|
|
||||||
### `tractor/spawn/_mp.py` (new, 235 lines)
|
|
||||||
|
|
||||||
> `git diff 61a73ba~1..61a73ba -- tractor/spawn/_mp.py`
|
|
||||||
|
|
||||||
Pure lift-and-shift of `mp_proc()` out of
|
|
||||||
`tractor/spawn/_spawn.py` (previously lines 672–842).
|
|
||||||
Same AGPL header convention. Key difference from
|
|
||||||
`_trio.py`: uses `from . import _spawn` (module
|
|
||||||
import, not from-import) for `_ctx` and
|
|
||||||
`_spawn_method` references — these are mutated at
|
|
||||||
runtime by `try_set_start_method()`, so late binding
|
|
||||||
via `_spawn._ctx` / `_spawn._spawn_method` is required
|
|
||||||
for correctness. Also imports `cancel_on_completion`,
|
|
||||||
`soft_kill`, `proc_waiter` from `._spawn`.
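
To make the late-binding point concrete, here is a minimal sketch of
the import style as it sits inside a backend module like `_mp.py`. This
is illustrative only; the real function body is of course much larger.

```python
# sketch of the import style inside `tractor/spawn/_mp.py` (paraphrase)
from . import _spawn  # module import: attribute lookups happen per-use


async def mp_proc(*args, **kwargs) -> None:
    # read the *current* globals on each call; `try_set_start_method()`
    # rebinds `_spawn._ctx` / `_spawn._spawn_method` at runtime, so an
    # early `from ._spawn import _ctx` would have captured stale values
    ctx = _spawn._ctx
    spawn_method = _spawn._spawn_method
    ...
```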
|
|
||||||
|
|
||||||
### `tractor/spawn/_spawn.py` (modified, 847 → 448 LOC)
|
|
||||||
|
|
||||||
> `git diff 61a73ba~1..61a73ba -- tractor/spawn/_spawn.py`
|
|
||||||
|
|
||||||
- removed `trio_proc()` body (moved to `_trio.py`)
|
|
||||||
- removed `mp_proc()` body (moved to `_mp.py`)
|
|
||||||
- pruned imports now unused in core: `sys`,
|
|
||||||
`is_root_process`, `current_actor`,
|
|
||||||
`is_main_process`, `_mp_main`, `ActorFailure`,
|
|
||||||
`pretty_struct`, `_pformat`
|
|
||||||
- added bottom-of-file late imports
|
|
||||||
`from ._trio import trio_proc` and
|
|
||||||
`from ._mp import mp_proc` with a one-line
|
|
||||||
comment explaining why (circular dep); see the paraphrased sketch below
|
|
||||||
- `_methods` dict unchanged structurally; still binds
|
|
||||||
`'trio' → trio_proc`, `'mp_spawn' → mp_proc`,
|
|
||||||
`'mp_forkserver' → mp_proc`
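
Paraphrased shape of the bottom of the trimmed core module after the
split (not the committed lines verbatim):

```python
# tail of `tractor/spawn/_spawn.py` after extraction (paraphrase)

# late imports: `_trio.py`/`_mp.py` import shared helpers back from this
# core module, so top-of-file imports here would create a cycle
from ._trio import trio_proc
from ._mp import mp_proc

# per-key backend registry consumed by the `new_proc()` dispatcher
_methods: dict = {
    'trio': trio_proc,
    'mp_spawn': mp_proc,
    'mp_forkserver': mp_proc,
}
```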
|
|
||||||
|
|
||||||
### `tractor/spawn/__init__.py` (modified)
|
|
||||||
|
|
||||||
> `git diff 61a73ba~1..61a73ba -- tractor/spawn/__init__.py`
|
|
||||||
|
|
||||||
Docstring rewrite only — no code. New text describes
|
|
||||||
the per-backend `Layout` with `- ._spawn` (core),
|
|
||||||
`- ._trio` (default), `- ._mp` (two mp variants), and
|
|
||||||
reiterates the "no eager import" NOTE.
|
|
||||||
|
|
||||||
### `tractor/_testing/pytest.py` (modified)
|
|
||||||
|
|
||||||
> `git diff 61a73ba~1..61a73ba -- tractor/_testing/pytest.py`
|
|
||||||
|
|
||||||
Two-line logical change (plus a comment):
|
|
||||||
- `from typing import` grows `get_args`
|
|
||||||
- `pytest_generate_tests()` replaces the hardcoded
|
|
||||||
`('mp_spawn','mp_forkserver','trio')` tuple with
|
|
||||||
`get_args(SpawnMethodKey)` via lazy import
|
|
||||||
`from tractor.spawn._spawn import SpawnMethodKey`
|
|
||||||
inside the function body (matching the project's
|
|
||||||
existing lazy-import convention for the spawn
|
|
||||||
module); see the sketch below
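
An approximate sketch of what that change looks like in the harness
hook. Only the `get_args(SpawnMethodKey)` membership check and the lazy
import are taken from the summary; the surrounding option handling and
the exact `parametrize()` call are assumptions.

```python
from typing import get_args


def pytest_generate_tests(metafunc):
    spawn_backend: str = metafunc.config.option.spawn_backend

    # lazy import, matching the project's existing convention for the
    # spawn module
    from tractor.spawn._spawn import SpawnMethodKey

    # previously a hardcoded ('mp_spawn', 'mp_forkserver', 'trio')
    # tuple; now derived from the type alias so new keys ('subint')
    # propagate without harness edits
    assert spawn_backend in get_args(SpawnMethodKey)

    if 'start_method' in metafunc.fixturenames:
        metafunc.parametrize(
            'start_method',
            [spawn_backend],
            scope='module',
        )
```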
|
|
||||||
|
|
||||||
## Non-code output (verbatim)
|
|
||||||
|
|
||||||
### Design rationale for the file split
|
|
||||||
|
|
||||||
From my Phase A plan turn (already captured in
|
|
||||||
`20260417T034918Z_9703210_prompt_io.md`):
|
|
||||||
- `_spawn.py` stays as the "core" rather than renamed
|
|
||||||
to `_core.py` so external imports
|
|
||||||
(`_testing/pytest.py:228` imports
|
|
||||||
`try_set_start_method` from `tractor.spawn._spawn`)
|
|
||||||
keep working without churn.
|
|
||||||
- Per-backend extraction chosen over alternatives
|
|
||||||
(e.g. splitting generic helpers further) because
|
|
||||||
the immediate motivation is hosting a 3rd
|
|
||||||
`_subint.py` sibling cleanly in Phase B.
|
|
||||||
|
|
||||||
### Sanity-check output (verbatim terminal excerpts)
|
|
||||||
|
|
||||||
Post-extraction import probe:
|
|
||||||
```
|
|
||||||
extraction OK
|
|
||||||
_methods: {'trio': 'tractor.spawn._trio.trio_proc',
|
|
||||||
'mp_spawn': 'tractor.spawn._mp.mp_proc',
|
|
||||||
'mp_forkserver': 'tractor.spawn._mp.mp_proc'}
|
|
||||||
```
|
|
||||||
|
|
||||||
Spawn-relevant test subset (`tests/test_local.py
|
|
||||||
test_rpc.py test_spawning.py test_multi_program.py
|
|
||||||
test_discovery.py`):
|
|
||||||
```
|
|
||||||
37 passed, 1 skipped, 14 warnings in 55.37s
|
|
||||||
```
|
|
||||||
|
|
||||||
Full suite:
|
|
||||||
```
|
|
||||||
350 passed, 14 skipped, 7 xfailed, 1 xpassed,
|
|
||||||
151 warnings in 437.73s (0:07:17)
|
|
||||||
```
|
|
||||||
|
|
||||||
No regressions vs. `main`. One transient `-x`
|
|
||||||
early-stop `ERROR` on
|
|
||||||
`test_close_channel_explicit_remote_registrar[trio-True]`
|
|
||||||
was flaky (passed solo, passed without `-x`), not
|
|
||||||
caused by this refactor.
|
|
||||||
|
|
||||||
### Commit message
|
|
||||||
|
|
||||||
Also AI-drafted (via `/commit-msg`) — the 40-line
|
|
||||||
message on commit `61a73bae` itself. Not reproduced
|
|
||||||
here; see `git log -1 61a73bae`.
|
|
||||||
|
|
@ -1,27 +0,0 @@
|
||||||
# AI Prompt I/O Log — claude
|
|
||||||
|
|
||||||
This directory tracks prompt inputs and model
|
|
||||||
outputs for AI-assisted development using
|
|
||||||
`claude` (Claude Code).
|
|
||||||
|
|
||||||
## Policy
|
|
||||||
|
|
||||||
Prompt logging follows the
|
|
||||||
[NLNet generative AI policy][nlnet-ai].
|
|
||||||
All substantive AI contributions are logged
|
|
||||||
with:
|
|
||||||
- Model name and version
|
|
||||||
- Timestamps
|
|
||||||
- The prompts that produced the output
|
|
||||||
- Unedited model output (`.raw.md` files)
|
|
||||||
|
|
||||||
[nlnet-ai]: https://nlnet.nl/foundation/policies/generativeAI/
|
|
||||||
|
|
||||||
## Usage
|
|
||||||
|
|
||||||
Entries are created by the `/prompt-io` skill
|
|
||||||
or automatically via `/commit-msg` integration.
|
|
||||||
|
|
||||||
Human contributors remain accountable for all
|
|
||||||
code decisions. AI-generated content is never
|
|
||||||
presented as human-authored work.
|
|
||||||
|
|
@ -1,76 +0,0 @@
|
||||||
ok now i want you to take a look at the most recent commit adding
|
|
||||||
a `tpt_bind_addrs` to `open_root_actor()` and extend the existing
|
|
||||||
tests/discovery/test_multiaddr* and friends to use this new param in
|
|
||||||
at least one suite with parametrizations over,
|
|
||||||
|
|
||||||
- `registry_addrs == tpt_bind_addrs`, as in both inputs are the same.
|
|
||||||
- `set(registry_addrs) >= set(tpt_bind_addrs)`, as in the registry
|
|
||||||
addrs include the bind set.
|
|
||||||
- `registry_addrs != tpt_bind_addrs`, where the reg set is disjoint from
|
|
||||||
the bind set in all possible combos you can imagine.
|
|
||||||
|
|
||||||
All of the ^above cases should further be parametrized over,
|
|
||||||
- the root being the registrar,
|
|
||||||
- a non-registrar root using our bg `daemon` fixture.
|
|
||||||
|
|
||||||
once we have a fairly thorough test suite and have flushed out all
|
|
||||||
bugs and edge cases we want to design a wrapping API which allows
|
|
||||||
declaring full trees of actors' tpt endpoints using multiaddrs such
|
|
||||||
that a `dict[str, list[str]]` of actor-name -> multiaddr can be used
|
|
||||||
to configure a tree of actors-as-services given such an input
|
|
||||||
"endpoints-table" can be matched with the number of appropriately
|
|
||||||
named subactor spawns in a `tractor` user-app.
|
|
||||||
|
|
||||||
Here is a small example from piker,
|
|
||||||
|
|
||||||
- in piker's root conf.toml we define a `[network]` section which can
|
|
||||||
define various actor-service-daemon names set to a maddr
|
|
||||||
(multiaddress str).
|
|
||||||
|
|
||||||
- each actor whether part of the `pikerd` tree (as a sub) or spawned
|
|
||||||
in other non-registrar rooted trees (such as `piker chart`) should
|
|
||||||
be configurable in terms of its `tractor` tpt bind addresses via
|
|
||||||
a simple service lookup table,
|
|
||||||
|
|
||||||
```toml
|
|
||||||
[network]
|
|
||||||
pikerd = [
|
|
||||||
'/ip4/127.0.0.1/tcp/6116', # std localhost daemon-actor tree
|
|
||||||
'/uds/run/user/1000/piker/pikerd@6116.sock', # same but serving UDS
|
|
||||||
]
|
|
||||||
chart = [
|
|
||||||
'/ip4/127.0.0.1/tcp/3333', # std localhost daemon-actor tree
|
|
||||||
'/uds/run/user/1000/piker/chart@3333.sock',
|
|
||||||
]
|
|
||||||
```
|
|
||||||
|
|
||||||
We should take whatever common API is needed to support this and
|
|
||||||
distill it into a
|
|
||||||
```python
|
|
||||||
tractor.discovery.parse_endpoints(
|
|
||||||
) -> dict[
|
|
||||||
str,
|
|
||||||
list[Address]
|
|
||||||
|dict[str, list[Address]]
|
|
||||||
# ^recursive case, see below
|
|
||||||
]:
|
|
||||||
```
|
|
||||||
|
|
||||||
style API which can,
|
|
||||||
|
|
||||||
- be re-used easily across dependent projects.
|
|
||||||
- correctly raise tpt-backend support errors when a maddr specifying
|
|
||||||
an unsupported proto is passed.
|
|
||||||
- be used to handle "tunnelled" maddrs per
|
|
||||||
https://github.com/multiformats/py-multiaddr/#tunneling such that
|
|
||||||
for any such tunneled maddr-`str`-entry we deliver a data-structure
|
|
||||||
which can easily be passed to nested `@acm`s which consecutively
|
|
||||||
setup nested net bindspaces for binding the endpoint addrs using
|
|
||||||
a combo of our `.ipc.*` machinery and, say for example something like
|
|
||||||
https://github.com/svinota/pyroute2, more precisely say for
|
|
||||||
managing tunnelled wireguard eps within network-namespaces,
|
|
||||||
* https://docs.pyroute2.org/
|
|
||||||
* https://docs.pyroute2.org/netns.html
|
|
||||||
|
|
||||||
remember to include use of all default `.claude/skills` throughout
|
|
||||||
this work!
|
|
||||||
|
|
@ -1,34 +0,0 @@
|
||||||
This is your first big boi, "from GH issue" design, plan and
|
|
||||||
implement task.
|
|
||||||
|
|
||||||
We need to try and add sub-interpreter (aka subint) support per the
|
|
||||||
issue,
|
|
||||||
|
|
||||||
https://github.com/goodboy/tractor/issues/379
|
|
||||||
|
|
||||||
Part of this work should include,
|
|
||||||
|
|
||||||
- modularizing and thus better organizing the `.spawn.*` subpkg by
|
|
||||||
breaking up various backends currently in `spawn._spawn` into
|
|
||||||
separate submods where it makes sense.
|
|
||||||
|
|
||||||
- add a new `._subint` backend which tries to keep as much of the
|
|
||||||
inter-process-isolation machinery in use as possible but with plans
|
|
||||||
to optimize for localhost only benefits as offered by python's
|
|
||||||
subints where possible.
|
|
||||||
|
|
||||||
* utilizing localhost-only tpts like UDS, shm-buffers for
|
|
||||||
performant IPC between subactors but also leveraging the benefits from
|
|
||||||
traditional OS subprocs' mem/storage-domain isolation, linux
|
|
||||||
namespaces where possible and as available/permitted by whatever
|
|
||||||
is happening under the hood with how cpython implements subints.
|
|
||||||
|
|
||||||
* default configuration should encourage state isolation as with
|
|
||||||
subprocs, but explicit public escape hatches to enable rigorously
|
|
||||||
managed shm channels for high performance apps.
|
|
||||||
|
|
||||||
- all tests should be (able to be) parameterized to use the new
|
|
||||||
`subints` backend and enabled by flag in the harness using the
|
|
||||||
existing `pytest --spawn-backend <spawn-backend>` support offered in
|
|
||||||
the `open_root_actor()` and `.testing._pytest` harness override
|
|
||||||
fixture.
|
|
||||||
|
|
@ -420,17 +420,20 @@ Check out our experimental system for `guest`_-mode controlled
|
||||||
|
|
||||||
|
|
||||||
async def aio_echo_server(
|
async def aio_echo_server(
|
||||||
chan: tractor.to_asyncio.LinkedTaskChannel,
|
to_trio: trio.MemorySendChannel,
|
||||||
|
from_trio: asyncio.Queue,
|
||||||
) -> None:
|
) -> None:
|
||||||
|
|
||||||
# a first message must be sent **from** this ``asyncio``
|
# a first message must be sent **from** this ``asyncio``
|
||||||
# task or the ``trio`` side will never unblock from
|
# task or the ``trio`` side will never unblock from
|
||||||
# ``tractor.to_asyncio.open_channel_from():``
|
# ``tractor.to_asyncio.open_channel_from():``
|
||||||
chan.started_nowait('start')
|
to_trio.send_nowait('start')
|
||||||
|
|
||||||
|
# XXX: this uses an ``from_trio: asyncio.Queue`` currently but we
|
||||||
|
# should probably offer something better.
|
||||||
while True:
|
while True:
|
||||||
# echo the msg back
|
# echo the msg back
|
||||||
chan.send_nowait(await chan.get())
|
to_trio.send_nowait(await from_trio.get())
|
||||||
await asyncio.sleep(0)
|
await asyncio.sleep(0)
|
||||||
|
|
||||||
|
|
||||||
|
|
@ -442,7 +445,7 @@ Check out our experimental system for `guest`_-mode controlled
|
||||||
# message.
|
# message.
|
||||||
async with tractor.to_asyncio.open_channel_from(
|
async with tractor.to_asyncio.open_channel_from(
|
||||||
aio_echo_server,
|
aio_echo_server,
|
||||||
) as (chan, first):
|
) as (first, chan):
|
||||||
|
|
||||||
assert first == 'start'
|
assert first == 'start'
|
||||||
await ctx.started(first)
|
await ctx.started(first)
|
||||||
|
|
@ -501,10 +504,8 @@ Yes, we spawn a python process, run ``asyncio``, start ``trio`` on the
|
||||||
``asyncio`` loop, then send commands to the ``trio`` scheduled tasks to
|
``asyncio`` loop, then send commands to the ``trio`` scheduled tasks to
|
||||||
tell ``asyncio`` tasks what to do XD
|
tell ``asyncio`` tasks what to do XD
|
||||||
|
|
||||||
The ``asyncio``-side task receives a single
|
We need help refining the `asyncio`-side channel API to be more
|
||||||
``chan: LinkedTaskChannel`` handle providing a ``trio``-like
|
`trio`-like. Feel free to sling your opinion in `#273`_!
|
||||||
API: ``.started_nowait()``, ``.send_nowait()``, ``.get()``
|
|
||||||
and more. Feel free to sling your opinion in `#273`_!
|
|
||||||
|
|
||||||
|
|
||||||
.. _#273: https://github.com/goodboy/tractor/issues/273
|
.. _#273: https://github.com/goodboy/tractor/issues/273
|
||||||
|
|
@ -640,15 +641,13 @@ Help us push toward the future of distributed `Python`.
|
||||||
- Typed capability-based (dialog) protocols ( see `#196
|
- Typed capability-based (dialog) protocols ( see `#196
|
||||||
<https://github.com/goodboy/tractor/issues/196>`_ with draft work
|
<https://github.com/goodboy/tractor/issues/196>`_ with draft work
|
||||||
started in `#311 <https://github.com/goodboy/tractor/pull/311>`_)
|
started in `#311 <https://github.com/goodboy/tractor/pull/311>`_)
|
||||||
- **macOS is now officially supported** and tested in CI
|
- We **recently disabled CI-testing on windows** and need help getting
|
||||||
alongside Linux!
|
it running again! (see `#327
|
||||||
- We **recently disabled CI-testing on windows** and need
|
<https://github.com/goodboy/tractor/pull/327>`_). **We do have windows
|
||||||
help getting it running again! (see `#327
|
support** (and have for quite a while) but since no active hacker
|
||||||
<https://github.com/goodboy/tractor/pull/327>`_). **We do
|
exists in the user-base to help test on that OS, for now we're not
|
||||||
have windows support** (and have for quite a while) but
|
actively maintaining testing due to the added hassle and general
|
||||||
since no active hacker exists in the user-base to help
|
latency..
|
||||||
test on that OS, for now we're not actively maintaining
|
|
||||||
testing due to the added hassle and general latency..
|
|
||||||
|
|
||||||
|
|
||||||
Feel like saying hi?
|
Feel like saying hi?
|
||||||
|
|
|
||||||
|
|
@ -17,7 +17,6 @@ from tractor import (
|
||||||
MsgStream,
|
MsgStream,
|
||||||
_testing,
|
_testing,
|
||||||
trionics,
|
trionics,
|
||||||
TransportClosed,
|
|
||||||
)
|
)
|
||||||
import trio
|
import trio
|
||||||
import pytest
|
import pytest
|
||||||
|
|
@ -209,15 +208,11 @@ async def main(
|
||||||
# TODO: is this needed or no?
|
# TODO: is this needed or no?
|
||||||
raise
|
raise
|
||||||
|
|
||||||
except (
|
except trio.ClosedResourceError:
|
||||||
trio.ClosedResourceError,
|
|
||||||
TransportClosed,
|
|
||||||
) as _tpt_err:
|
|
||||||
# NOTE: don't send if we already broke the
|
# NOTE: don't send if we already broke the
|
||||||
# connection to avoid raising a closed-error
|
# connection to avoid raising a closed-error
|
||||||
# such that we drop through to the ctl-c
|
# such that we drop through to the ctl-c
|
||||||
# mashing by user.
|
# mashing by user.
|
||||||
with trio.CancelScope(shield=True):
|
|
||||||
await trio.sleep(0.01)
|
await trio.sleep(0.01)
|
||||||
|
|
||||||
# timeout: int = 1
|
# timeout: int = 1
|
||||||
|
|
@ -252,7 +247,6 @@ async def main(
|
||||||
await stream.send(i)
|
await stream.send(i)
|
||||||
pytest.fail('stream not closed?')
|
pytest.fail('stream not closed?')
|
||||||
except (
|
except (
|
||||||
TransportClosed,
|
|
||||||
trio.ClosedResourceError,
|
trio.ClosedResourceError,
|
||||||
trio.EndOfChannel,
|
trio.EndOfChannel,
|
||||||
) as send_err:
|
) as send_err:
|
||||||
|
|
|
||||||
|
|
@ -18,14 +18,15 @@ async def aio_sleep_forever():
|
||||||
|
|
||||||
|
|
||||||
async def bp_then_error(
|
async def bp_then_error(
|
||||||
chan: to_asyncio.LinkedTaskChannel,
|
to_trio: trio.MemorySendChannel,
|
||||||
|
from_trio: asyncio.Queue,
|
||||||
|
|
||||||
raise_after_bp: bool = True,
|
raise_after_bp: bool = True,
|
||||||
|
|
||||||
) -> None:
|
) -> None:
|
||||||
|
|
||||||
# sync with `trio`-side (caller) task
|
# sync with `trio`-side (caller) task
|
||||||
chan.started_nowait('start')
|
to_trio.send_nowait('start')
|
||||||
|
|
||||||
# NOTE: what happens here inside the hook needs some refinement..
|
# NOTE: what happens here inside the hook needs some refinement..
|
||||||
# => seems like it's still `.debug._set_trace()` but
|
# => seems like it's still `.debug._set_trace()` but
|
||||||
|
|
@ -59,7 +60,7 @@ async def trio_ctx(
|
||||||
to_asyncio.open_channel_from(
|
to_asyncio.open_channel_from(
|
||||||
bp_then_error,
|
bp_then_error,
|
||||||
# raise_after_bp=not bp_before_started,
|
# raise_after_bp=not bp_before_started,
|
||||||
) as (chan, first),
|
) as (first, chan),
|
||||||
|
|
||||||
trio.open_nursery() as tn,
|
trio.open_nursery() as tn,
|
||||||
):
|
):
|
||||||
|
|
|
||||||
|
|
@ -20,7 +20,7 @@ async def sleep(
|
||||||
|
|
||||||
|
|
||||||
async def open_ctx(
|
async def open_ctx(
|
||||||
n: tractor.runtime._supervise.ActorNursery
|
n: tractor._supervise.ActorNursery
|
||||||
):
|
):
|
||||||
|
|
||||||
# spawn both actors
|
# spawn both actors
|
||||||
|
|
|
||||||
|
|
@ -3,7 +3,6 @@ Verify we can dump a `stackscope` tree on a hang.
|
||||||
|
|
||||||
'''
|
'''
|
||||||
import os
|
import os
|
||||||
import platform
|
|
||||||
import signal
|
import signal
|
||||||
|
|
||||||
import trio
|
import trio
|
||||||
|
|
@ -32,26 +31,13 @@ async def main(
|
||||||
from_test: bool = False,
|
from_test: bool = False,
|
||||||
) -> None:
|
) -> None:
|
||||||
|
|
||||||
if platform.system() != 'Darwin':
|
|
||||||
tpt = 'uds'
|
|
||||||
else:
|
|
||||||
# XXX, precisely we can't use pytest's tmp-path generation
|
|
||||||
# for tests.. apparently because:
|
|
||||||
#
|
|
||||||
# > The OSError: AF_UNIX path too long in macOS Python occurs
|
|
||||||
# > because the path to the Unix domain socket exceeds the
|
|
||||||
# > operating system's maximum path length limit (around 104
|
|
||||||
#
|
|
||||||
# WHICH IS just, wtf hillarious XD
|
|
||||||
tpt = 'tcp'
|
|
||||||
|
|
||||||
async with (
|
async with (
|
||||||
tractor.open_nursery(
|
tractor.open_nursery(
|
||||||
debug_mode=True,
|
debug_mode=True,
|
||||||
enable_stack_on_sig=True,
|
enable_stack_on_sig=True,
|
||||||
# maybe_enable_greenback=False,
|
# maybe_enable_greenback=False,
|
||||||
loglevel='devx',
|
loglevel='devx',
|
||||||
enable_transports=[tpt],
|
enable_transports=['uds'],
|
||||||
) as an,
|
) as an,
|
||||||
):
|
):
|
||||||
ptl: tractor.Portal = await an.start_actor(
|
ptl: tractor.Portal = await an.start_actor(
|
||||||
|
|
|
||||||
|
|
@ -1,5 +1,3 @@
|
||||||
import platform
|
|
||||||
|
|
||||||
import tractor
|
import tractor
|
||||||
import trio
|
import trio
|
||||||
|
|
||||||
|
|
@ -36,22 +34,9 @@ async def just_bp(
|
||||||
|
|
||||||
async def main():
|
async def main():
|
||||||
|
|
||||||
if platform.system() != 'Darwin':
|
|
||||||
tpt = 'uds'
|
|
||||||
else:
|
|
||||||
# XXX, precisely we can't use pytest's tmp-path generation
|
|
||||||
# for tests.. apparently because:
|
|
||||||
#
|
|
||||||
# > The OSError: AF_UNIX path too long in macOS Python occurs
|
|
||||||
# > because the path to the Unix domain socket exceeds the
|
|
||||||
# > operating system's maximum path length limit (around 104
|
|
||||||
#
|
|
||||||
# WHICH IS just, wtf hillarious XD
|
|
||||||
tpt = 'tcp'
|
|
||||||
|
|
||||||
async with tractor.open_nursery(
|
async with tractor.open_nursery(
|
||||||
debug_mode=True,
|
debug_mode=True,
|
||||||
enable_transports=[tpt],
|
enable_transports=['uds'],
|
||||||
loglevel='devx',
|
loglevel='devx',
|
||||||
) as n:
|
) as n:
|
||||||
p = await n.start_actor(
|
p = await n.start_actor(
|
||||||
|
|
|
||||||
|
|
@ -90,7 +90,7 @@ async def main() -> list[int]:
|
||||||
# yes, a nursery which spawns `trio`-"actors" B)
|
# yes, a nursery which spawns `trio`-"actors" B)
|
||||||
an: ActorNursery
|
an: ActorNursery
|
||||||
async with tractor.open_nursery(
|
async with tractor.open_nursery(
|
||||||
loglevel='error',
|
loglevel='cancel',
|
||||||
# debug_mode=True,
|
# debug_mode=True,
|
||||||
) as an:
|
) as an:
|
||||||
|
|
||||||
|
|
@ -118,10 +118,8 @@ async def main() -> list[int]:
|
||||||
cancelled: bool = await portal.cancel_actor()
|
cancelled: bool = await portal.cancel_actor()
|
||||||
assert cancelled
|
assert cancelled
|
||||||
|
|
||||||
print(
|
print(f"STREAM TIME = {time.time() - start}")
|
||||||
f"STREAM TIME = {time.time() - start}\n"
|
print(f"STREAM + SPAWN TIME = {time.time() - pre_start}")
|
||||||
f"STREAM + SPAWN TIME = {time.time() - pre_start}\n"
|
|
||||||
)
|
|
||||||
assert result_stream == list(range(seed))
|
assert result_stream == list(range(seed))
|
||||||
return result_stream
|
return result_stream
|
||||||
|
|
||||||
|
|
|
||||||
|
|
@ -11,17 +11,21 @@ import tractor
|
||||||
|
|
||||||
|
|
||||||
async def aio_echo_server(
|
async def aio_echo_server(
|
||||||
chan: tractor.to_asyncio.LinkedTaskChannel,
|
to_trio: trio.MemorySendChannel,
|
||||||
|
from_trio: asyncio.Queue,
|
||||||
|
|
||||||
) -> None:
|
) -> None:
|
||||||
|
|
||||||
# a first message must be sent **from** this ``asyncio``
|
# a first message must be sent **from** this ``asyncio``
|
||||||
# task or the ``trio`` side will never unblock from
|
# task or the ``trio`` side will never unblock from
|
||||||
# ``tractor.to_asyncio.open_channel_from():``
|
# ``tractor.to_asyncio.open_channel_from():``
|
||||||
chan.started_nowait('start')
|
to_trio.send_nowait('start')
|
||||||
|
|
||||||
|
# XXX: this uses an ``from_trio: asyncio.Queue`` currently but we
|
||||||
|
# should probably offer something better.
|
||||||
while True:
|
while True:
|
||||||
# echo the msg back
|
# echo the msg back
|
||||||
chan.send_nowait(await chan.get())
|
to_trio.send_nowait(await from_trio.get())
|
||||||
await asyncio.sleep(0)
|
await asyncio.sleep(0)
|
||||||
|
|
||||||
|
|
||||||
|
|
@ -33,7 +37,7 @@ async def trio_to_aio_echo_server(
|
||||||
# message.
|
# message.
|
||||||
async with tractor.to_asyncio.open_channel_from(
|
async with tractor.to_asyncio.open_channel_from(
|
||||||
aio_echo_server,
|
aio_echo_server,
|
||||||
) as (chan, first):
|
) as (first, chan):
|
||||||
|
|
||||||
assert first == 'start'
|
assert first == 'start'
|
||||||
await ctx.started(first)
|
await ctx.started(first)
|
||||||
|
|
|
||||||
|
|
@ -1,5 +0,0 @@
|
||||||
import os
|
|
||||||
|
|
||||||
|
|
||||||
async def child_fn() -> str:
|
|
||||||
return f"child OK pid={os.getpid()}"
|
|
||||||
|
|
@ -1,50 +0,0 @@
|
||||||
"""
|
|
||||||
Integration test: spawning tractor actors from an MPI process.
|
|
||||||
|
|
||||||
When a parent is launched via ``mpirun``, Open MPI sets ``OMPI_*`` env
|
|
||||||
vars that bind ``MPI_Init`` to the ``orted`` daemon. Tractor children
|
|
||||||
inherit those env vars, so if ``inherit_parent_main=True`` (the default)
|
|
||||||
the child re-executes ``__main__``, re-imports ``mpi4py``, and
|
|
||||||
``MPI_Init_thread`` fails because the child was never spawned by
|
|
||||||
``orted``::
|
|
||||||
|
|
||||||
getting local rank failed
|
|
||||||
--> Returned value No permission (-17) instead of ORTE_SUCCESS
|
|
||||||
|
|
||||||
Passing ``inherit_parent_main=False`` and placing RPC functions in a
|
|
||||||
separate importable module (``_child``) avoids the re-import entirely.
|
|
||||||
|
|
||||||
Usage::
|
|
||||||
|
|
||||||
mpirun --allow-run-as-root -np 1 python -m \
|
|
||||||
examples.integration.mpi4py.inherit_parent_main
|
|
||||||
"""
|
|
||||||
|
|
||||||
from mpi4py import MPI
|
|
||||||
|
|
||||||
import os
|
|
||||||
import trio
|
|
||||||
import tractor
|
|
||||||
|
|
||||||
from ._child import child_fn
|
|
||||||
|
|
||||||
|
|
||||||
async def main() -> None:
|
|
||||||
rank = MPI.COMM_WORLD.Get_rank()
|
|
||||||
print(f"[parent] rank={rank} pid={os.getpid()}", flush=True)
|
|
||||||
|
|
||||||
async with tractor.open_nursery(start_method='trio') as an:
|
|
||||||
portal = await an.start_actor(
|
|
||||||
'mpi-child',
|
|
||||||
enable_modules=[child_fn.__module__],
|
|
||||||
# Without this the child replays __main__, which
|
|
||||||
# re-imports mpi4py and crashes on MPI_Init.
|
|
||||||
inherit_parent_main=False,
|
|
||||||
)
|
|
||||||
result = await portal.run(child_fn)
|
|
||||||
print(f"[parent] got: {result}", flush=True)
|
|
||||||
await portal.cancel_actor()
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
trio.run(main)
|
|
||||||
|
|
@ -10,7 +10,7 @@ async def main(service_name):
|
||||||
await an.start_actor(service_name)
|
await an.start_actor(service_name)
|
||||||
|
|
||||||
async with tractor.get_registry() as portal:
|
async with tractor.get_registry() as portal:
|
||||||
print(f"Registrar is listening on {portal.channel}")
|
print(f"Arbiter is listening on {portal.channel}")
|
||||||
|
|
||||||
async with tractor.wait_for_actor(service_name) as sockaddr:
|
async with tractor.wait_for_actor(service_name) as sockaddr:
|
||||||
print(f"my_service is found at {sockaddr}")
|
print(f"my_service is found at {sockaddr}")
|
||||||
|
|
|
||||||
27
flake.lock
|
|
@ -1,27 +0,0 @@
|
||||||
{
|
|
||||||
"nodes": {
|
|
||||||
"nixpkgs": {
|
|
||||||
"locked": {
|
|
||||||
"lastModified": 1769018530,
|
|
||||||
"narHash": "sha256-MJ27Cy2NtBEV5tsK+YraYr2g851f3Fl1LpNHDzDX15c=",
|
|
||||||
"owner": "nixos",
|
|
||||||
"repo": "nixpkgs",
|
|
||||||
"rev": "88d3861acdd3d2f0e361767018218e51810df8a1",
|
|
||||||
"type": "github"
|
|
||||||
},
|
|
||||||
"original": {
|
|
||||||
"owner": "nixos",
|
|
||||||
"ref": "nixos-unstable",
|
|
||||||
"repo": "nixpkgs",
|
|
||||||
"type": "github"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"root": {
|
|
||||||
"inputs": {
|
|
||||||
"nixpkgs": "nixpkgs"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"root": "root",
|
|
||||||
"version": 7
|
|
||||||
}
|
|
||||||
70
flake.nix
|
|
@ -1,70 +0,0 @@
|
||||||
# An "impure" template thx to `pyproject.nix`,
|
|
||||||
# https://pyproject-nix.github.io/pyproject.nix/templates.html#impure
|
|
||||||
# https://github.com/pyproject-nix/pyproject.nix/blob/master/templates/impure/flake.nix
|
|
||||||
{
|
|
||||||
description = "An impure overlay (w dev-shell) using `uv`";
|
|
||||||
|
|
||||||
inputs = {
|
|
||||||
nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
|
|
||||||
};
|
|
||||||
|
|
||||||
outputs =
|
|
||||||
{ nixpkgs, ... }:
|
|
||||||
let
|
|
||||||
inherit (nixpkgs) lib;
|
|
||||||
forAllSystems = lib.genAttrs lib.systems.flakeExposed;
|
|
||||||
in
|
|
||||||
{
|
|
||||||
devShells = forAllSystems (
|
|
||||||
system:
|
|
||||||
let
|
|
||||||
pkgs = nixpkgs.legacyPackages.${system};
|
|
||||||
|
|
||||||
# XXX NOTE XXX, for now we overlay specific pkgs via
|
|
||||||
# a major-version-pinned-`cpython`
|
|
||||||
cpython = "python313";
|
|
||||||
venv_dir = "py313";
|
|
||||||
pypkgs = pkgs."${cpython}Packages";
|
|
||||||
in
|
|
||||||
{
|
|
||||||
default = pkgs.mkShell {
|
|
||||||
|
|
||||||
packages = [
|
|
||||||
# XXX, ensure sh completions activate!
|
|
||||||
pkgs.bashInteractive
|
|
||||||
pkgs.bash-completion
|
|
||||||
|
|
||||||
# XXX, on nix(os), use pkgs version to avoid
|
|
||||||
# build/sys-sh-integration issues
|
|
||||||
pkgs.ruff
|
|
||||||
|
|
||||||
pkgs.uv
|
|
||||||
pkgs.${cpython}# ?TODO^ how to set from `cpython` above?
|
|
||||||
];
|
|
||||||
|
|
||||||
shellHook = ''
|
|
||||||
# unmask to debug **this** dev-shell-hook
|
|
||||||
# set -e
|
|
||||||
|
|
||||||
# link-in c++ stdlib for various AOT-ext-pkgs (numpy, etc.)
|
|
||||||
LD_LIBRARY_PATH="${pkgs.stdenv.cc.cc.lib}/lib:$LD_LIBRARY_PATH"
|
|
||||||
|
|
||||||
export LD_LIBRARY_PATH
|
|
||||||
|
|
||||||
# RUNTIME-SETTINGS
|
|
||||||
# ------ uv ------
|
|
||||||
# - always use the ./py313/ venv-subdir
|
|
||||||
# - sync env with all extras
|
|
||||||
export UV_PROJECT_ENVIRONMENT=${venv_dir}
|
|
||||||
uv sync --dev --all-extras
|
|
||||||
|
|
||||||
# ------ TIPS ------
|
|
||||||
# NOTE, to launch the py-venv installed `xonsh` (like @goodboy)
|
|
||||||
# run the `nix develop` cmd with,
|
|
||||||
# >> nix develop -c uv run xonsh
|
|
||||||
'';
|
|
||||||
};
|
|
||||||
}
|
|
||||||
);
|
|
||||||
};
|
|
||||||
}
|
|
||||||
|
|
@ -9,7 +9,7 @@ name = "tractor"
version = "0.1.0a6dev0"
description = 'structured concurrent `trio`-"actors"'
authors = [{ name = "Tyler Goodlet", email = "goodboy_foss@protonmail.com" }]
-requires-python = ">=3.12, <3.14"
+requires-python = ">= 3.11"
readme = "docs/README.rst"
license = "AGPL-3.0-or-later"
keywords = [
@ -24,13 +24,11 @@ keywords = [
classifiers = [
    "Development Status :: 3 - Alpha",
    "Operating System :: POSIX :: Linux",
-    "Operating System :: MacOS",
    "Framework :: Trio",
    "License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
    "Programming Language :: Python :: Implementation :: CPython",
    "Programming Language :: Python :: 3 :: Only",
-    "Programming Language :: Python :: 3.12",
-    "Programming Language :: Python :: 3.13",
+    "Programming Language :: Python :: 3.11",
    "Topic :: System :: Distributed Computing",
]
dependencies = [
@ -44,65 +42,48 @@ dependencies = [
    "wrapt>=1.16.0,<2",
    "colorlog>=6.8.2,<7",
    # built-in multi-actor `pdb` REPL
-    "pdbp>=1.8.2,<2",  # windows only (from `pdbp`)
+    "pdbp>=1.6,<2",  # windows only (from `pdbp`)
    # typed IPC msging
-    "msgspec>=0.21.0",
+    "msgspec>=0.19.0",
    "cffi>=1.17.1",
    "bidict>=0.23.1",
-    "multiaddr>=0.2.0",
-    "platformdirs>=4.4.0",
]

# ------ project ------

[dependency-groups]
dev = [
-    {include-group = 'devx'},
-    {include-group = 'testing'},
-    {include-group = 'repl'},
-]
-devx = [
-    # `tractor.devx` tooling
-    "greenback>=1.2.1,<2",
-    "stackscope>=0.2.2,<0.3",
-    # ^ requires this?
-    "typing-extensions>=4.14.1",
-]
-testing = [
    # test suite
    # TODO: maybe some of these layout choices?
    # https://docs.pytest.org/en/8.0.x/explanation/goodpractices.html#choosing-a-test-layout-import-rules
    "pytest>=8.3.5",
    "pexpect>=4.9.0,<5",
-]
-repl = [
+    # `tractor.devx` tooling
+    "greenback>=1.2.1,<2",
+    "stackscope>=0.2.2,<0.3",
+    # ^ requires this?
+    "typing-extensions>=4.14.1",

    "pyperclip>=1.9.0",
    "prompt-toolkit>=3.0.50",
-    "xonsh>=0.22.8",
+    "xonsh>=0.19.2",
    "psutil>=7.0.0",
]
-lint = [
-    "ruff>=0.9.6"
-]
# TODO, add these with sane versions; were originally in
# `requirements-docs.txt`..
# docs = [
#     "sphinx>="
#     "sphinx_book_theme>="
# ]

# ------ dependency-groups ------

+# ------ dependency-groups ------

[tool.uv.sources]
# XXX NOTE, only for @goodboy's hacking on `pprint(sort_dicts=False)`
# for the `pp` alias..
+# pdbp = { path = "../pdbp", editable = true }
-# [tool.uv.sources.pdbp]
-# XXX, in case we need to tmp patch again.
-# git = "https://github.com/goodboy/pdbp.git"
-# branch ="repair_stack_trace_frame_indexing"
-# path = "../pdbp"
-# editable = true

# ------ tool.uv.sources ------
# TODO, distributed (multi-host) extensions

@ -164,7 +145,6 @@ all_bullets = true

[tool.pytest.ini_options]
minversion = '6.0'
-# https://docs.pytest.org/en/stable/reference/reference.html#configuration-options
testpaths = [
    'tests'
]
@ -175,17 +155,10 @@ addopts = [
    '--import-mode=importlib',
    # don't show frickin captured logs AGAIN in the report..
    '--show-capture=no',

-    # disable `xonsh` plugin
-    # https://docs.pytest.org/en/stable/how-to/plugins.html#disabling-plugins-from-autoloading
-    # https://docs.pytest.org/en/stable/how-to/plugins.html#deactivating-unregistering-a-plugin-by-name
-    '-p no:xonsh'
]
log_cli = false
# TODO: maybe some of these layout choices?
# https://docs.pytest.org/en/8.0.x/explanation/goodpractices.html#choosing-a-test-layout-import-rules
# pythonpath = "src"

-# https://docs.pytest.org/en/stable/reference/reference.html#confval-console_output_style
-console_output_style = 'progress'
# ------ tool.pytest ------

@ -0,0 +1,8 @@
# vim: ft=ini
# pytest.ini for tractor

[pytest]
# don't show frickin captured logs AGAIN in the report..
addopts = --show-capture='no'
log_cli = false
; minversion = 6.0
@ -35,8 +35,8 @@ exclude = [
line-length = 88
indent-width = 4

-# assume latest minor cpython
-target-version = "py313"
+# Assume Python 3.9
+target-version = "py311"

[lint]
# Enable Pyflakes (`F`) and a subset of the pycodestyle (`E`) codes by default.

@ -9,11 +9,8 @@ import os
import signal
import platform
import time
-from pathlib import Path
-from typing import Literal

import pytest
-import tractor
from tractor._testing import (
    examples_dir as examples_dir,
    tractor_test as tractor_test,
@ -25,8 +22,6 @@ pytest_plugins: list[str] = [
    'tractor._testing.pytest',
]

-_ci_env: bool = os.environ.get('CI', False)
-_non_linux: bool = platform.system() != 'Linux'

# Sending signal.SIGINT on subprocess fails on windows. Use CTRL_* alternatives
if platform.system() == 'Windows':
@ -39,8 +34,9 @@ else:
    _INT_SIGNAL = signal.SIGINT
    _INT_RETURN_CODE = 1 if sys.version_info < (3, 8) else -signal.SIGINT.value
_PROC_SPAWN_WAIT = (
-    2 if _ci_env
-    else 1
+    0.6
+    if sys.version_info < (3, 7)
+    else 0.4
)

@ -48,80 +44,6 @@ no_windows = pytest.mark.skipif(
    platform.system() == "Windows",
    reason="Test is unsupported on windows",
)
-no_macos = pytest.mark.skipif(
-    platform.system() == "Darwin",
-    reason="Test is unsupported on MacOS",
-)
-
-
-def get_cpu_state(
-    icpu: int = 0,
-    setting: Literal[
-        'scaling_governor',
-        '*_pstate_max_freq',
-        'scaling_max_freq',
-        # 'scaling_cur_freq',
-    ] = '*_pstate_max_freq',
-) -> tuple[
-    Path,
-    str|int,
-]|None:
-    '''
-    Attempt to read the (first) CPU's setting according
-    to the set `setting` from under the file-sys,
-
-      /sys/devices/system/cpu/cpu0/cpufreq/{setting}
-
-    Useful to determine latency headroom for various perf affected
-    test suites.
-
-    '''
-    try:
-        # Read governor for core 0 (usually same for all)
-        setting_path: Path = list(
-            Path(f'/sys/devices/system/cpu/cpu{icpu}/cpufreq/')
-            .glob(f'{setting}')
-        )[0]  # <- XXX must be single match!
-        with open(
-            setting_path,
-            'r',
-        ) as f:
-            return (
-                setting_path,
-                f.read().strip(),
-            )
-    except (FileNotFoundError, IndexError):
-        return None
-
-
-def cpu_scaling_factor() -> float:
-    '''
-    Return a latency-headroom multiplier (>= 1.0) reflecting how
-    much to inflate time-limits when CPU-freq scaling is active on
-    linux.
-
-    When no scaling info is available (non-linux, missing sysfs),
-    returns 1.0 (i.e. no headroom adjustment needed).
-
-    '''
-    if _non_linux:
-        return 1.
-
-    mx = get_cpu_state()
-    cur = get_cpu_state(setting='scaling_max_freq')
-    if mx is None or cur is None:
-        return 1.
-
-    _mx_pth, max_freq = mx
-    _cur_pth, cur_freq = cur
-    cpu_scaled: float = int(cur_freq) / int(max_freq)
-
-    if cpu_scaled != 1.:
-        return 1. / (
-            cpu_scaled * 2  # <- bc likely "dual threaded"
-        )
-
-    return 1.
-
-
def pytest_addoption(
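The removed `cpu_scaling_factor()` helper above is meant to inflate latency budgets when CPU-freq scaling is active. A minimal usage sketch follows; the import path is an assumption about the test-pkg layout, not something shown in this diff:

```python
# hedged sketch, assuming the helper is importable from the suite's root
# conftest (adjust the import to the real module path).
from tests.conftest import cpu_scaling_factor


def test_latency_budget_scales():
    # inflate a half-second budget by the measured CPU headroom factor
    budget: float = 0.5 * cpu_scaling_factor()
    assert budget >= 0.5
```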
@ -139,52 +61,16 @@ def pytest_addoption(


@pytest.fixture(scope='session', autouse=True)
-def loglevel(request) -> str:
+def loglevel(request):
    import tractor
    orig = tractor.log._default_loglevel
    level = tractor.log._default_loglevel = request.config.option.loglevel
-    log = tractor.log.get_console_log(
-        level=level,
-        name='tractor',  # <- enable root logger
-    )
-    log.info(
-        f'Test-harness set runtime loglevel: {level!r}\n'
-    )
+    tractor.log.get_console_log(level)
    yield level
    tractor.log._default_loglevel = orig


-@pytest.fixture(scope='function')
-def test_log(
-    request,
-    loglevel: str,
-) -> tractor.log.StackLevelAdapter:
-    '''
-    Deliver a per test-module-fn logger instance for reporting from
-    within actual test bodies/fixtures.
-
-    For example this can be handy to report certain error cases from
-    exception handlers using `test_log.exception()`.
-
-    '''
-    modname: str = request.function.__module__
-    log = tractor.log.get_logger(
-        name=modname,  # <- enable root logger
-        # pkg_name='tests',
-    )
-    _log = tractor.log.get_console_log(
-        level=loglevel,
-        logger=log,
-        name=modname,
-        # pkg_name='tests',
-    )
-    _log.debug(
-        f'In-test-logging requested\n'
-        f'test_log.name: {log.name!r}\n'
-        f'level: {loglevel!r}\n'
-    )
-    yield _log
+_ci_env: bool = os.environ.get('CI', False)


@pytest.fixture(scope='session')
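The removed `test_log` fixture above hands each test a per-module logger. A hedged sketch of the pattern its docstring describes (fixture name and behavior taken from the removed code above):

```python
# hedged sketch: report a handled error from inside a test body
def test_reports_handled_error(test_log):
    try:
        raise RuntimeError('demo-only error')
    except RuntimeError:
        test_log.exception('caught and reported the demo error')
```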
@ -199,39 +85,15 @@ def ci_env() -> bool:
def sig_prog(
    proc: subprocess.Popen,
    sig: int,
-    canc_timeout: float = 0.2,
-    tries: int = 3,
+    canc_timeout: float = 0.1,
) -> int:
-    '''
-    Kill the actor-process with `sig`.
-
-    Prefer to kill with the provided signal and
-    failing a `canc_timeout`, send a `SIKILL`-like
-    to ensure termination.
-
-    '''
+    "Kill the actor-process with ``sig``."
-    for i in range(tries):
        proc.send_signal(sig)
-        if proc.poll() is None:
-            print(
-                f'WARNING, proc still alive after,\n'
-                f'canc_timeout={canc_timeout!r}\n'
-                f'sig={sig!r}\n'
-                f'\n'
-                f'{proc.args!r}\n'
-            )
        time.sleep(canc_timeout)
-    else:
+    if not proc.poll():
        # TODO: why sometimes does SIGINT not work on teardown?
        # seems to happen only when trace logging enabled?
-        if proc.poll() is None:
-            print(
-                f'XXX WARNING KILLING PROG WITH SIGINT XXX\n'
-                f'canc_timeout={canc_timeout!r}\n'
-                f'{proc.args!r}\n'
-            )
        proc.send_signal(_KILL_SIGNAL)

    ret: int = proc.wait()
    assert ret

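A hedged usage sketch of the `sig_prog()` helper above; the relative import path is an assumption that mirrors how other test modules in this diff pull names from `conftest`:

```python
import signal
import subprocess

from ..conftest import sig_prog  # assumed relative import, mirrors this diff


def shutdown_daemon(proc: subprocess.Popen) -> int:
    # prefer SIGINT; the helper escalates to a KILL-like signal on timeout
    return sig_prog(proc, signal.SIGINT)
```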
@ -244,8 +106,6 @@ def daemon(
    testdir: pytest.Pytester,
    reg_addr: tuple[str, int],
    tpt_proto: str,
-    ci_env: bool,
-    test_log: tractor.log.StackLevelAdapter,

) -> subprocess.Popen:
    '''
@ -261,12 +121,10 @@ def daemon(
        "import tractor; "
        "tractor.run_daemon([], "
        "registry_addrs={reg_addrs}, "
-        "enable_transports={enable_tpts}, "
        "debug_mode={debug_mode}, "
        "loglevel={ll})"
    ).format(
        reg_addrs=str([reg_addr]),
-        enable_tpts=str([tpt_proto]),
        ll="'{}'".format(loglevel) if loglevel else None,
        debug_mode=debug_mode,
    )
@ -285,25 +143,13 @@ def daemon(
        **kwargs,
    )

-    # TODO! we should poll for the registry socket-bind to take place
-    # and only once that's done yield to the requester!
-    # -[ ] TCP: use the `._root.open_root_actor()`::`ping_tpt_socket()`
-    #    closure!
-    # -[ ] UDS: can we do something similar for 'pinging" the
-    #    file-socket?
-    #
-    global _PROC_SPAWN_WAIT
    # UDS sockets are **really** fast to bind()/listen()/connect()
    # so it's often required that we delay a bit more starting
    # the first actor-tree..
    if tpt_proto == 'uds':
-        _PROC_SPAWN_WAIT += 1.6
+        global _PROC_SPAWN_WAIT
+        _PROC_SPAWN_WAIT = 0.6

-    if _non_linux and ci_env:
-        _PROC_SPAWN_WAIT += 1
-
-    # XXX, allow time for the sub-py-proc to boot up.
-    # !TODO, see ping-polling ideas above!
    time.sleep(_PROC_SPAWN_WAIT)

    assert not proc.returncode
@ -313,30 +159,18 @@ def daemon(
    # XXX! yeah.. just be reaaal careful with this bc sometimes it
    # can lock up on the `_io.BufferedReader` and hang..
    stderr: str = proc.stderr.read().decode()
-    stdout: str = proc.stdout.read().decode()
-    if (
-        stderr
-        or
-        stdout
-    ):
+    if stderr:
        print(
-            f'Daemon actor tree produced output:\n'
+            f'Daemon actor tree produced STDERR:\n'
            f'{proc.args}\n'
            f'\n'
-            f'stderr: {stderr!r}\n'
-            f'stdout: {stdout!r}\n'
+            f'{stderr}\n'
        )
+    if proc.returncode != -2:
-    if (rc := proc.returncode) != -2:
-        msg: str = (
-            f'Daemon actor tree was not cancelled !?\n'
-            f'proc.args: {proc.args!r}\n'
-            f'proc.returncode: {rc!r}\n'
+        raise RuntimeError(
+            'Daemon actor tree failed !?\n'
+            f'{proc.args}\n'
        )
-        if rc < 0:
-            raise RuntimeError(msg)
-
-        test_log.error(msg)


    # @pytest.fixture(autouse=True)

@ -3,8 +3,6 @@

'''
from __future__ import annotations
-import platform
-import signal
import time
from typing import (
    Callable,
@ -34,23 +32,9 @@ if TYPE_CHECKING:
    from pexpect import pty_spawn


-_non_linux: bool = platform.system() != 'Linux'
-
-
-def pytest_configure(config):
-    # register custom marks to avoid warnings see,
-    # https://docs.pytest.org/en/stable/how-to/writing_plugins.html#registering-custom-markers
-    config.addinivalue_line(
-        'markers',
-        'ctlcs_bish: test will (likely) not behave under SIGINT..'
-    )
-
# a fn that sub-instantiates a `pexpect.spawn()`
# and returns it.
-type PexpectSpawner = Callable[
-    [str],
-    pty_spawn.spawn,
-]
+type PexpectSpawner = Callable[[str], pty_spawn.spawn]


@pytest.fixture
@ -80,65 +64,26 @@ def spawn(

    '''
    import os
-    # disable colored tbs
    os.environ['PYTHON_COLORS'] = '0'
-    # disable all ANSI color output
-    # os.environ['NO_COLOR'] = '1'
-
-    spawned: PexpectSpawner|None = None
-
    def _spawn(
        cmd: str,
-        expect_timeout: float = 4,
        **mkcmd_kwargs,
    ) -> pty_spawn.spawn:
-        nonlocal spawned
        unset_colors()
-        spawned = testdir.spawn(
+        return testdir.spawn(
            cmd=mk_cmd(
                cmd,
                **mkcmd_kwargs,
            ),
-            expect_timeout=(timeout:=(
-                expect_timeout + 6
-                if _non_linux and _ci_env
-                else expect_timeout
-            )),
+            expect_timeout=3,
            # preexec_fn=unset_colors,
            # ^TODO? get `pytest` core to expose underlying
            # `pexpect.spawn()` stuff?
        )
-        # sanity
-        assert spawned.timeout == timeout
-        return spawned

    # such that test-dep can pass input script name.
-    yield _spawn  # the `PexpectSpawner`, type alias.
+    return _spawn  # the `PexpectSpawner`, type alias.
-
-    if (
-        spawned
-        and
-        (ptyproc := spawned.ptyproc)
-    ):
-        start: float = time.time()
-        timeout: float = 5
-        while (
-            ptyproc.isalive()
-            and
-            (
-                (_time_took := (time.time() - start))
-                <
-                timeout
-            )
-        ):
-            ptyproc.kill(signal.SIGINT)
-            time.sleep(0.01)
-
-        if ptyproc.isalive():
-            ptyproc.kill(signal.SIGKILL)
-
-        # TODO? ensure we've cleaned up any UDS-paths?
-        # breakpoint()


@pytest.fixture(
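A hedged sketch of how a debugger test consumes the `spawn` fixture above (the example-script name below is illustrative only; `PROMPT` and `EOF` are assumed imported the same way the debugger tests in this diff do):

```python
# hedged sketch; not an actual test from this diff
def test_repl_entry_and_exit(spawn):
    child = spawn('debug_mode_example')  # hypothetical example-script name
    child.expect(PROMPT)
    child.sendline('c')
    child.expect(EOF)
```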
@ -164,13 +109,7 @@ def ctlc(
            'https://github.com/goodboy/tractor/issues/320'
        )

-        if (
-            mark.name == 'ctlcs_bish'
-            and
-            use_ctlc
-            and
-            all(mark.args)
-        ):
+        if mark.name == 'ctlcs_bish':
            pytest.skip(
                f'Test {node} prolly uses something from the stdlib (namely `asyncio`..)\n'
                f'The test and/or underlying example script can *sometimes* run fine '
@ -275,13 +214,12 @@ def assert_before(
        err_on_false=True,
        **kwargs
    )
-    return str(child.before.decode())


def do_ctlc(
    child,
    count: int = 3,
-    delay: float|None = None,
+    delay: float = 0.1,
    patt: str|None = None,

    # expect repl UX to reprint the prompt after every
@ -293,7 +231,6 @@ def do_ctlc(
) -> str|None:

    before: str|None = None
-    delay = delay or 0.1

    # make sure ctl-c sends don't do anything but repeat output
    for _ in range(count):
@ -304,10 +241,7 @@ def do_ctlc(
        # if you run this test manually it works just fine..
        if expect_prompt:
            time.sleep(delay)
-            child.expect(
-                PROMPT,
-                timeout=(child.timeout * 2) if _ci_env else child.timeout,
-            )
+            child.expect(PROMPT)
            before = str(child.before.decode())
            time.sleep(delay)

@ -37,9 +37,6 @@ from .conftest import (
    in_prompt_msg,
    assert_before,
)
-from ..conftest import (
-    _ci_env,
-)

if TYPE_CHECKING:
    from ..conftest import PexpectSpawner
@ -54,14 +51,13 @@ if TYPE_CHECKING:
# - recurrent root errors


-_non_linux: bool = platform.system() != 'Linux'
-
if platform.system() == 'Windows':
    pytest.skip(
        'Debugger tests have no windows support (yet)',
        allow_module_level=True,
    )


# TODO: was trying to this xfail style but some weird bug i see in CI
# that's happening at collect time.. pretty soon gonna dump actions i'm
# thinkin...
@ -197,11 +193,6 @@ def test_root_actor_bp_forever(
    child.expect(EOF)


-# skip on non-Linux CI
-@pytest.mark.ctlcs_bish(
-    _non_linux,
-    _ci_env,
-)
@pytest.mark.parametrize(
    'do_next',
    (True, False),
@ -267,11 +258,6 @@ def test_subactor_error(
    child.expect(EOF)


-# skip on non-Linux CI
-@pytest.mark.ctlcs_bish(
-    _non_linux,
-    _ci_env,
-)
def test_subactor_breakpoint(
    spawn,
    ctlc: bool,
@ -494,24 +480,8 @@ def test_multi_daemon_subactors(
    stream.

    '''
-    non_linux = _non_linux
-    if non_linux and ctlc:
-        pytest.skip(
-            'Ctl-c + MacOS is too unreliable/racy for this test..\n'
-        )
-    # !TODO, if someone with more patience then i wants to muck
-    # with the timings on this please feel free to see all the
-    # `non_linux` branching logic i added on my first attempt
-    # below!
-    #
-    # my conclusion was that if i were to run the script
-    # manually, and thus as slowly as a human would, the test
-    # would and should pass as described in this test fn, however
-    # after fighting with it for >= 1hr. i decided more then
-    # likely the more extensive `linux` testing should cover most
-    # regressions.
-
    child = spawn('multi_daemon_subactors')

    child.expect(PROMPT)

    # there can be a race for which subactor will acquire
@ -541,19 +511,8 @@ def test_multi_daemon_subactors(
    else:
        raise ValueError('Neither log msg was found !?')

-    non_linux_delay: float = 0.3
    if ctlc:
-        do_ctlc(
-            child,
-            delay=(
-                non_linux_delay
-                if non_linux
-                else None
-            ),
-        )
-
-        if non_linux:
-            time.sleep(1)
+        do_ctlc(child)

    # NOTE: previously since we did not have clobber prevention
    # in the root actor this final resume could result in the debugger
@ -584,66 +543,33 @@ def test_multi_daemon_subactors(
    # assert "in use by child ('bp_forever'," in before

    if ctlc:
-        do_ctlc(
-            child,
-            delay=(
-                non_linux_delay
-                if non_linux
-                else None
-            ),
-        )
-
-        if non_linux:
-            time.sleep(1)
+        do_ctlc(child)

    # expect another breakpoint actor entry
    child.sendline('c')
    child.expect(PROMPT)

    try:
-        before: str = assert_before(
+        assert_before(
            child,
            bp_forev_parts,
        )
    except AssertionError:
-        before: str = assert_before(
+        assert_before(
            child,
            name_error_parts,
        )

    else:
        if ctlc:
-            before: str = do_ctlc(
-                child,
-                delay=(
-                    non_linux_delay
-                    if non_linux
-                    else None
-                ),
-            )
-
-            if non_linux:
-                time.sleep(1)
+            do_ctlc(child)

    # should crash with the 2nd name error (simulates
    # a retry) and then the root eventually (boxed) errors
    # after 1 or more further bp actor entries.

    child.sendline('c')
-    try:
-        child.expect(
-            PROMPT,
-            timeout=3,
-        )
-    except EOF:
-        before: str = child.before.decode()
-        print(
-            f'\n'
-            f'??? NEVER RXED `pdb` PROMPT ???\n'
-            f'\n'
-            f'{before}\n'
-        )
-        raise
-
+    child.expect(PROMPT)
    assert_before(
        child,
        name_error_parts,
@ -763,8 +689,7 @@ def test_multi_subactors_root_errors(

@has_nested_actors
def test_multi_nested_subactors_error_through_nurseries(
-    ci_env: bool,
-    spawn: PexpectSpawner,
+    spawn,

    # TODO: address debugger issue for nested tree:
    # https://github.com/goodboy/tractor/issues/320
@ -785,29 +710,9 @@ def test_multi_nested_subactors_error_through_nurseries(

    # timed_out_early: bool = False

-    for (
-        i,
-        send_char,
-    ) in enumerate(itertools.cycle(['c', 'q'])):
-
-        timeout: float = -1
-        if (
-            _non_linux
-            and
-            ci_env
-        ):
-            timeout: float = 6
-
-        # XXX linux but the first crash sequence
-        # can take longer to arrive at a prompt.
-        elif i == 0:
-            timeout = 5
-
+    for send_char in itertools.cycle(['c', 'q']):
        try:
-            child.expect(
-                PROMPT,
-                timeout=timeout,
-            )
+            child.expect(PROMPT)
            child.sendline(send_char)
            time.sleep(0.01)

@ -984,11 +889,6 @@ def test_different_debug_mode_per_actor(
    )


-# skip on non-Linux CI
-@pytest.mark.ctlcs_bish(
-    _non_linux,
-    _ci_env,
-)
def test_post_mortem_api(
    spawn,
    ctlc: bool,
@ -1233,21 +1133,12 @@ def test_ctxep_pauses_n_maybe_ipc_breaks(
    # closed so verify we see error reporting as well as
    # a failed crash-REPL request msg and can CTL-c our way
    # out.

-    # ?TODO, match depending on `tpt_proto(s)`?
-    # - [ ] how can we pass it into the script tho?
-    tpt: str = 'UDS'
-    if _non_linux:
-        tpt: str = 'TCP'
-
    assert_before(
        child,
        ['peer IPC channel closed abruptly?',
         'another task closed this fd',
         'Debug lock request was CANCELLED?',
-         f"'Msgpack{tpt}Stream' was already closed locally?",
-         f"TransportClosed: 'Msgpack{tpt}Stream' was already closed 'by peer'?",
-        ]
+         "TransportClosed: 'MsgpackUDSStream' was already closed locally ?",]

    # XXX races on whether these show/hit?
    # 'Failed to REPl via `_pause()` You called `tractor.pause()` from an already cancelled scope!',

@ -31,9 +31,6 @@ from .conftest import (
    PROMPT,
    _pause_msg,
)
-from ..conftest import (
-    no_macos,
-)

import pytest
from pexpect.exceptions import (
@ -45,7 +42,6 @@ if TYPE_CHECKING:
    from ..conftest import PexpectSpawner


-@no_macos
def test_shield_pause(
    spawn: PexpectSpawner,
):
@ -61,7 +57,6 @@ def test_shield_pause(
    expect(
        child,
        'Yo my child hanging..?',
-        timeout=3,
    )
    assert_before(
        child,
@ -126,7 +121,7 @@ def test_shield_pause(
        child.pid,
        signal.SIGINT,
    )
-    from tractor.runtime._supervise import _shutdown_msg
+    from tractor._supervise import _shutdown_msg
    expect(
        child,
        # 'Shutting down actor runtime',

@ -1,376 +0,0 @@
'''
Multiaddr construction, parsing, and round-trip tests for
`tractor.discovery._multiaddr.mk_maddr()` and
`tractor.discovery._multiaddr.parse_maddr()`.

'''
from pathlib import Path
from types import SimpleNamespace

import pytest
from multiaddr import Multiaddr

from tractor.ipc._tcp import TCPAddress
from tractor.ipc._uds import UDSAddress
from tractor.discovery._multiaddr import (
    mk_maddr,
    parse_maddr,
    parse_endpoints,
    _tpt_proto_to_maddr,
    _maddr_to_tpt_proto,
)
from tractor.discovery._addr import wrap_address


def test_tpt_proto_to_maddr_mapping():
    '''
    `_tpt_proto_to_maddr` maps all supported `proto_key`
    values to their correct multiaddr protocol names.

    '''
    assert _tpt_proto_to_maddr['tcp'] == 'tcp'
    assert _tpt_proto_to_maddr['uds'] == 'unix'
    assert len(_tpt_proto_to_maddr) == 2


def test_mk_maddr_tcp_ipv4():
    '''
    `mk_maddr()` on a `TCPAddress` with an IPv4 host
    produces the correct `/ip4/<host>/tcp/<port>` multiaddr.

    '''
    addr = TCPAddress('127.0.0.1', 1234)
    result: Multiaddr = mk_maddr(addr)

    assert isinstance(result, Multiaddr)
    assert str(result) == '/ip4/127.0.0.1/tcp/1234'

    protos = result.protocols()
    assert protos[0].name == 'ip4'
    assert protos[1].name == 'tcp'

    assert result.value_for_protocol('ip4') == '127.0.0.1'
    assert result.value_for_protocol('tcp') == '1234'


def test_mk_maddr_tcp_ipv6():
    '''
    `mk_maddr()` on a `TCPAddress` with an IPv6 host
    produces the correct `/ip6/<host>/tcp/<port>` multiaddr.

    '''
    addr = TCPAddress('::1', 5678)
    result: Multiaddr = mk_maddr(addr)

    assert str(result) == '/ip6/::1/tcp/5678'

    protos = result.protocols()
    assert protos[0].name == 'ip6'
    assert protos[1].name == 'tcp'


def test_mk_maddr_uds():
    '''
    `mk_maddr()` on a `UDSAddress` produces a `/unix/<path>`
    multiaddr containing the full socket path.

    '''
    # NOTE, use an absolute `filedir` to match real runtime
    # UDS paths; `mk_maddr()` strips the leading `/` to avoid
    # the double-slash `/unix//run/..` that py-multiaddr
    # rejects as "empty protocol path".
    filedir = '/tmp/tractor_test'
    filename = 'test_sock.sock'
    addr = UDSAddress(
        filedir=filedir,
        filename=filename,
    )
    result: Multiaddr = mk_maddr(addr)

    assert isinstance(result, Multiaddr)

    result_str: str = str(result)
    assert result_str.startswith('/unix/')
    # verify the leading `/` was stripped to avoid double-slash
    assert '/unix/tmp/tractor_test/' in result_str

    sockpath_rel: str = str(
        Path(filedir) / filename
    ).lstrip('/')
    unix_val: str = result.value_for_protocol('unix')
    assert unix_val.endswith(sockpath_rel)


def test_mk_maddr_unsupported_proto_key():
    '''
    `mk_maddr()` raises `ValueError` for an unsupported
    `proto_key`.

    '''
    fake_addr = SimpleNamespace(proto_key='quic')
    with pytest.raises(
        ValueError,
        match='Unsupported proto_key',
    ):
        mk_maddr(fake_addr)


@pytest.mark.parametrize(
    'addr',
    [
        pytest.param(
            TCPAddress('127.0.0.1', 9999),
            id='tcp-ipv4',
        ),
        pytest.param(
            UDSAddress(
                filedir='/tmp/tractor_rt',
                filename='roundtrip.sock',
            ),
            id='uds',
        ),
    ],
)
def test_mk_maddr_roundtrip(addr):
    '''
    `mk_maddr()` output is valid multiaddr syntax that the
    library can re-parse back into an equivalent `Multiaddr`.

    '''
    maddr: Multiaddr = mk_maddr(addr)
    reparsed = Multiaddr(str(maddr))

    assert reparsed == maddr
    assert str(reparsed) == str(maddr)


# ------ parse_maddr() tests ------

def test_maddr_to_tpt_proto_mapping():
    '''
    `_maddr_to_tpt_proto` is the exact inverse of
    `_tpt_proto_to_maddr`.

    '''
    assert _maddr_to_tpt_proto == {
        'tcp': 'tcp',
        'unix': 'uds',
    }


def test_parse_maddr_tcp_ipv4():
    '''
    `parse_maddr()` on an IPv4 TCP multiaddr string
    produce a `TCPAddress` with the correct host and port.

    '''
    result = parse_maddr('/ip4/127.0.0.1/tcp/1234')

    assert isinstance(result, TCPAddress)
    assert result.unwrap() == ('127.0.0.1', 1234)


def test_parse_maddr_tcp_ipv6():
    '''
    `parse_maddr()` on an IPv6 TCP multiaddr string
    produce a `TCPAddress` with the correct host and port.

    '''
    result = parse_maddr('/ip6/::1/tcp/5678')

    assert isinstance(result, TCPAddress)
    assert result.unwrap() == ('::1', 5678)


def test_parse_maddr_uds():
    '''
    `parse_maddr()` on a `/unix/...` multiaddr string
    produce a `UDSAddress` with the correct dir and filename,
    preserving absolute path semantics.

    '''
    result = parse_maddr('/unix/tmp/tractor_test/test.sock')

    assert isinstance(result, UDSAddress)
    filedir, filename = result.unwrap()
    assert filename == 'test.sock'
    assert str(filedir) == '/tmp/tractor_test'


def test_parse_maddr_unsupported():
    '''
    `parse_maddr()` raise `ValueError` for an unsupported
    protocol combination like UDP.

    '''
    with pytest.raises(
        ValueError,
        match='Unsupported multiaddr protocol combo',
    ):
        parse_maddr('/ip4/127.0.0.1/udp/1234')


@pytest.mark.parametrize(
    'addr',
    [
        pytest.param(
            TCPAddress('127.0.0.1', 9999),
            id='tcp-ipv4',
        ),
        pytest.param(
            UDSAddress(
                filedir='/tmp/tractor_rt',
                filename='roundtrip.sock',
            ),
            id='uds',
        ),
    ],
)
def test_parse_maddr_roundtrip(addr):
    '''
    Full round-trip: `addr -> mk_maddr -> str -> parse_maddr`
    produce an `Address` whose `.unwrap()` matches the original.

    '''
    maddr: Multiaddr = mk_maddr(addr)
    maddr_str: str = str(maddr)
    parsed = parse_maddr(maddr_str)

    assert type(parsed) is type(addr)
    assert parsed.unwrap() == addr.unwrap()


def test_wrap_address_maddr_str():
    '''
    `wrap_address()` accept a multiaddr-format string and
    return the correct `Address` type.

    '''
    result = wrap_address('/ip4/127.0.0.1/tcp/9999')

    assert isinstance(result, TCPAddress)
    assert result.unwrap() == ('127.0.0.1', 9999)


# ------ parse_endpoints() tests ------

def test_parse_endpoints_tcp_only():
    '''
    `parse_endpoints()` with a single TCP maddr per actor
    produce the correct `TCPAddress` instances.

    '''
    table = {
        'registry': ['/ip4/127.0.0.1/tcp/1616'],
        'data_feed': ['/ip4/0.0.0.0/tcp/5555'],
    }
    result = parse_endpoints(table)

    assert set(result.keys()) == {'registry', 'data_feed'}

    reg_addr = result['registry'][0]
    assert isinstance(reg_addr, TCPAddress)
    assert reg_addr.unwrap() == ('127.0.0.1', 1616)

    feed_addr = result['data_feed'][0]
    assert isinstance(feed_addr, TCPAddress)
    assert feed_addr.unwrap() == ('0.0.0.0', 5555)


def test_parse_endpoints_mixed_tpts():
    '''
    `parse_endpoints()` with both TCP and UDS maddrs for
    the same actor produce the correct mixed `Address` list.

    '''
    table = {
        'broker': [
            '/ip4/127.0.0.1/tcp/4040',
            '/unix/tmp/tractor/broker.sock',
        ],
    }
    result = parse_endpoints(table)
    addrs = result['broker']

    assert len(addrs) == 2
    assert isinstance(addrs[0], TCPAddress)
    assert addrs[0].unwrap() == ('127.0.0.1', 4040)

    assert isinstance(addrs[1], UDSAddress)
    filedir, filename = addrs[1].unwrap()
    assert filename == 'broker.sock'
    assert str(filedir) == '/tmp/tractor'


def test_parse_endpoints_unwrapped_tuples():
    '''
    `parse_endpoints()` accept raw `(host, port)` tuples
    and wrap them as `TCPAddress`.

    '''
    table = {
        'ems': [('127.0.0.1', 6666)],
    }
    result = parse_endpoints(table)

    addr = result['ems'][0]
    assert isinstance(addr, TCPAddress)
    assert addr.unwrap() == ('127.0.0.1', 6666)


def test_parse_endpoints_mixed_str_and_tuple():
    '''
    `parse_endpoints()` accept a mix of maddr strings and
    raw tuples in the same actor entry list.

    '''
    table = {
        'quoter': [
            '/ip4/127.0.0.1/tcp/7777',
            ('127.0.0.1', 8888),
        ],
    }
    result = parse_endpoints(table)
    addrs = result['quoter']

    assert len(addrs) == 2
    assert isinstance(addrs[0], TCPAddress)
    assert addrs[0].unwrap() == ('127.0.0.1', 7777)

    assert isinstance(addrs[1], TCPAddress)
    assert addrs[1].unwrap() == ('127.0.0.1', 8888)


def test_parse_endpoints_unsupported_proto():
    '''
    `parse_endpoints()` raise `ValueError` when a maddr
    string uses an unsupported protocol like `/udp/`.

    '''
    table = {
        'bad_actor': ['/ip4/127.0.0.1/udp/9999'],
    }
    with pytest.raises(
        ValueError,
        match='Unsupported multiaddr protocol combo',
    ):
        parse_endpoints(table)


def test_parse_endpoints_empty_table():
    '''
    `parse_endpoints()` on an empty table return an empty
    dict.

    '''
    assert parse_endpoints({}) == {}


def test_parse_endpoints_empty_actor_list():
    '''
    `parse_endpoints()` with an actor mapped to an empty
    list preserve the key with an empty list value.

    '''
    result = parse_endpoints({'x': []})
    assert result == {'x': []}
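A condensed round-trip sketch of the multiaddr helpers exercised by the removed suite above; note this API exists only on the `big_boi_docs` side of this compare:

```python
# hedged sketch; base-branch-only API
from tractor.discovery._multiaddr import mk_maddr, parse_maddr
from tractor.ipc._tcp import TCPAddress

addr = TCPAddress('127.0.0.1', 1616)
maddr_str: str = str(mk_maddr(addr))  # -> '/ip4/127.0.0.1/tcp/1616'
assert parse_maddr(maddr_str).unwrap() == addr.unwrap()
```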
@ -1,345 +0,0 @@
'''
`open_root_actor(tpt_bind_addrs=...)` test suite.

Verify all three runtime code paths for explicit IPC-server
bind-address selection in `_root.py`:

1. Non-registrar, no explicit bind -> random addrs from registry proto
2. Registrar, no explicit bind -> binds to registry_addrs
3. Explicit bind given -> wraps via `wrap_address()` and uses them

'''
import pytest
import trio
import tractor
from tractor.discovery._addr import (
    wrap_address,
)
from tractor.discovery._multiaddr import mk_maddr
from tractor._testing.addr import get_rando_addr


# ------------------------------------------------------------------
# helpers
# ------------------------------------------------------------------
def _bound_bindspaces(
    actor: tractor.Actor,
) -> set[str]:
    '''
    Collect the set of bindspace strings from the actor's
    currently bound IPC-server accept addresses.

    '''
    return {
        wrap_address(a).bindspace
        for a in actor.accept_addrs
    }


def _bound_wrapped(
    actor: tractor.Actor,
) -> list:
    '''
    Return the actor's accept addrs as wrapped `Address` objects.

    '''
    return [
        wrap_address(a)
        for a in actor.accept_addrs
    ]


# ------------------------------------------------------------------
# 1) Registrar + explicit tpt_bind_addrs
# ------------------------------------------------------------------
@pytest.mark.parametrize(
    'addr_combo',
    [
        'bind-eq-reg',
        'bind-subset-reg',
        'bind-disjoint-reg',
    ],
    ids=lambda v: v,
)
def test_registrar_root_tpt_bind_addrs(
    reg_addr: tuple,
    tpt_proto: str,
    debug_mode: bool,
    addr_combo: str,
):
    '''
    Registrar root-actor with explicit `tpt_bind_addrs`:
    bound set must include all registry + all bind addr bindspaces
    (merge behavior).

    '''
    reg_wrapped = wrap_address(reg_addr)

    if addr_combo == 'bind-eq-reg':
        bind_addrs = [reg_addr]
        # extra secondary reg addr for subset test
        extra_reg = []

    elif addr_combo == 'bind-subset-reg':
        second_reg = get_rando_addr(tpt_proto)
        bind_addrs = [reg_addr]
        extra_reg = [second_reg]

    elif addr_combo == 'bind-disjoint-reg':
        # port=0 on same host -> completely different addr
        rando = wrap_address(reg_addr).get_random(
            bindspace=reg_wrapped.bindspace,
        )
        bind_addrs = [rando.unwrap()]
        extra_reg = []

    all_reg = [reg_addr] + extra_reg

    async def _main():
        async with tractor.open_root_actor(
            registry_addrs=all_reg,
            tpt_bind_addrs=bind_addrs,
            debug_mode=debug_mode,
        ):
            actor = tractor.current_actor()
            assert actor.is_registrar

            bound = actor.accept_addrs
            bound_bs = _bound_bindspaces(actor)

            # all registry bindspaces must appear in bound set
            for ra in all_reg:
                assert wrap_address(ra).bindspace in bound_bs

            # all bind-addr bindspaces must appear
            for ba in bind_addrs:
                assert wrap_address(ba).bindspace in bound_bs

            # registry addr must appear verbatim in bound
            # (after wrapping both sides for comparison)
            bound_w = _bound_wrapped(actor)
            assert reg_wrapped in bound_w

            if addr_combo == 'bind-disjoint-reg':
                assert len(bound) >= 2

    trio.run(_main)


@pytest.mark.parametrize(
    'addr_combo',
    [
        'bind-same-bindspace',
        'bind-disjoint',
    ],
    ids=lambda v: v,
)
def test_non_registrar_root_tpt_bind_addrs(
    daemon,
    reg_addr: tuple,
    tpt_proto: str,
    debug_mode: bool,
    addr_combo: str,
):
    '''
    Non-registrar root with explicit `tpt_bind_addrs`:
    bound set must exactly match the requested bind addrs
    (no merge with registry).

    '''
    reg_wrapped = wrap_address(reg_addr)

    if addr_combo == 'bind-same-bindspace':
        # same bindspace as reg but port=0 so we get a random port
        rando = reg_wrapped.get_random(
            bindspace=reg_wrapped.bindspace,
        )
        bind_addrs = [rando.unwrap()]

    elif addr_combo == 'bind-disjoint':
        rando = reg_wrapped.get_random(
            bindspace=reg_wrapped.bindspace,
        )
        bind_addrs = [rando.unwrap()]

    async def _main():
        async with tractor.open_root_actor(
            registry_addrs=[reg_addr],
            tpt_bind_addrs=bind_addrs,
            debug_mode=debug_mode,
        ):
            actor = tractor.current_actor()
            assert not actor.is_registrar

            bound = actor.accept_addrs
            assert len(bound) == len(bind_addrs)

            # bindspaces must match
            bound_bs = _bound_bindspaces(actor)
            for ba in bind_addrs:
                assert wrap_address(ba).bindspace in bound_bs

            # TCP port=0 should resolve to a real port
            for uw_addr in bound:
                w = wrap_address(uw_addr)
                if w.proto_key == 'tcp':
                    _host, port = uw_addr
                    assert port > 0

    trio.run(_main)


# ------------------------------------------------------------------
# 3) Non-registrar, default random bind (baseline)
# ------------------------------------------------------------------
def test_non_registrar_default_random_bind(
    daemon,
    reg_addr: tuple,
    debug_mode: bool,
):
    '''
    Baseline: no `tpt_bind_addrs`, daemon running.
    Bound bindspace matches registry bindspace,
    but bound addr differs from reg_addr (random).

    '''
    reg_wrapped = wrap_address(reg_addr)

    async def _main():
        async with tractor.open_root_actor(
            registry_addrs=[reg_addr],
            debug_mode=debug_mode,
        ):
            actor = tractor.current_actor()
            assert not actor.is_registrar

            bound_bs = _bound_bindspaces(actor)
            assert reg_wrapped.bindspace in bound_bs

            # bound addr should differ from the registry addr
            # (the runtime picks a random port/path)
            bound_w = _bound_wrapped(actor)
            assert reg_wrapped not in bound_w

    trio.run(_main)


# ------------------------------------------------------------------
# 4) Multiaddr string input
# ------------------------------------------------------------------
def test_tpt_bind_addrs_as_maddr_str(
    reg_addr: tuple,
    debug_mode: bool,
):
    '''
    Pass multiaddr strings as `tpt_bind_addrs`.
    Runtime should parse and bind successfully.

    '''
    reg_wrapped = wrap_address(reg_addr)
    # build a port-0 / random maddr string for binding
    rando = reg_wrapped.get_random(
        bindspace=reg_wrapped.bindspace,
    )
    maddr_str: str = str(mk_maddr(rando))

    async def _main():
        async with tractor.open_root_actor(
            registry_addrs=[reg_addr],
            tpt_bind_addrs=[maddr_str],
            debug_mode=debug_mode,
        ):
            actor = tractor.current_actor()
            assert actor.is_registrar

            for uw_addr in actor.accept_addrs:
                w = wrap_address(uw_addr)
                if w.proto_key == 'tcp':
                    _host, port = uw_addr
                    assert port > 0

    trio.run(_main)


# ------------------------------------------------------------------
# 5) Registrar merge produces union of binds
# ------------------------------------------------------------------
def test_registrar_merge_binds_union(
    tpt_proto: str,
    debug_mode: bool,
):
    '''
    Registrar + disjoint bind addr: bound set must include
    both registry and explicit bind addresses.

    '''
    reg_addr = get_rando_addr(tpt_proto)
    reg_wrapped = wrap_address(reg_addr)

    rando = reg_wrapped.get_random(
        bindspace=reg_wrapped.bindspace,
    )
    bind_addrs = [rando.unwrap()]

    # NOTE: for UDS, `get_random()` produces the same
    # filename for the same pid+actor-state, so the
    # "disjoint" premise only holds when the addrs
    # actually differ (always true for TCP, may
    # collide for UDS).
    expect_disjoint: bool = (
        tuple(reg_addr) != rando.unwrap()
    )

    async def _main():
        async with tractor.open_root_actor(
            registry_addrs=[reg_addr],
            tpt_bind_addrs=bind_addrs,
            debug_mode=debug_mode,
        ):
            actor = tractor.current_actor()
            assert actor.is_registrar

            bound = actor.accept_addrs
            bound_w = _bound_wrapped(actor)

            if expect_disjoint:
                # must have at least 2 (registry + bind)
                assert len(bound) >= 2

            # registry addr must appear in bound set
            assert reg_wrapped in bound_w

    trio.run(_main)


# ------------------------------------------------------------------
# 6) open_nursery forwards tpt_bind_addrs
# ------------------------------------------------------------------
def test_open_nursery_forwards_tpt_bind_addrs(
    reg_addr: tuple,
    debug_mode: bool,
):
    '''
    `open_nursery(tpt_bind_addrs=...)` forwards through
    `**kwargs` to `open_root_actor()`.

    '''
    reg_wrapped = wrap_address(reg_addr)
    rando = reg_wrapped.get_random(
        bindspace=reg_wrapped.bindspace,
    )
    bind_addrs = [rando.unwrap()]

    async def _main():
        async with tractor.open_nursery(
            registry_addrs=[reg_addr],
            tpt_bind_addrs=bind_addrs,
            debug_mode=debug_mode,
        ):
            actor = tractor.current_actor()
            bound_bs = _bound_bindspaces(actor)

            for ba in bind_addrs:
                assert wrap_address(ba).bindspace in bound_bs

    trio.run(_main)
|
|
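Taken together, the removed tests above exercise one small API surface: pass `tpt_bind_addrs` to `open_root_actor()` (or forward it via `open_nursery()`), then inspect `Actor.accept_addrs`. A minimal usage sketch of that pattern, assuming the same `wrap_address()` helper and import paths used by the tests (the registry address tuple is a hypothetical placeholder):

```python
import trio
import tractor
from tractor.discovery._addr import wrap_address


async def main():
    # hypothetical registry addr; any addr the active transport
    # understands would do here.
    reg_addr = ('127.0.0.1', 1616)

    # request a random bind addr inside the registry's bindspace
    reg_wrapped = wrap_address(reg_addr)
    rando = reg_wrapped.get_random(
        bindspace=reg_wrapped.bindspace,
    )

    async with tractor.open_root_actor(
        registry_addrs=[reg_addr],
        tpt_bind_addrs=[rando.unwrap()],
    ):
        actor = tractor.current_actor()
        # the transport addrs the runtime actually bound
        print(actor.accept_addrs)


trio.run(main)
```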
@@ -8,16 +8,17 @@ from pathlib import Path
 import pytest
 import trio
 import tractor
-from tractor import Actor
-from tractor.runtime import _state
-from tractor.discovery import _addr
+from tractor import (
+    Actor,
+    _state,
+    _addr,
+)


 @pytest.fixture
 def bindspace_dir_str() -> str:

-    from tractor.runtime._state import get_rt_dir
-    rt_dir: Path = get_rt_dir()
+    rt_dir: Path = tractor._state.get_rt_dir()
     bs_dir: Path = rt_dir / 'doggy'
     bs_dir_str: str = str(bs_dir)
     assert not bs_dir.is_dir()

@@ -13,9 +13,9 @@ from tractor import (
     Portal,
     ipc,
     msg,
+    _state,
+    _addr,
 )
-from tractor.runtime import _state
-from tractor.discovery import _addr


 @tractor.context
 async def chk_tpts(

@@ -62,13 +62,6 @@ def test_root_passes_tpt_to_sub(
     reg_addr: tuple,
     debug_mode: bool,
 ):
-    # XXX NOTE, the `reg_addr` addr won't be the same type as the
-    # `tpt_proto_key` would deliver here unless you pass `--tpt-proto
-    # <tpt_proto_key>` on the CLI.
-    #
-    # if tpt_proto_key == 'uds':
-    #     breakpoint()
-
     async def main():
         async with tractor.open_nursery(
             enable_transports=[tpt_proto_key],

@@ -1,4 +0,0 @@
'''
`tractor.msg.*` sub-sys test suite.

'''

@@ -1,4 +0,0 @@
'''
`tractor.msg.*` test sub-pkg conf.

'''
@@ -1,240 +0,0 @@
'''
Unit tests for `tractor.msg.pretty_struct`
private-field filtering in `pformat()`.

'''
import pytest

from tractor.msg.pretty_struct import (
    Struct,
    pformat,
    iter_struct_ppfmt_lines,
)
from tractor.msg._codec import (
    MsgDec,
    mk_dec,
)


# ------ test struct definitions ------ #

class PublicOnly(Struct):
    '''
    All-public fields for baseline testing.

    '''
    name: str = 'alice'
    age: int = 30


class PrivateOnly(Struct):
    '''
    Only underscore-prefixed (private) fields.

    '''
    _secret: str = 'hidden'
    _internal: int = 99


class MixedFields(Struct):
    '''
    Mix of public and private fields.

    '''
    name: str = 'bob'
    _hidden: int = 42
    value: float = 3.14
    _meta: str = 'internal'


class Inner(
    Struct,
    frozen=True,
):
    '''
    Frozen inner struct with a private field,
    for nesting tests.

    '''
    x: int = 1
    _secret: str = 'nope'


class Outer(Struct):
    '''
    Outer struct nesting an `Inner`.

    '''
    label: str = 'outer'
    inner: Inner = Inner()


class EmptyStruct(Struct):
    '''
    Struct with zero fields.

    '''
    pass


# ------ tests ------ #

@pytest.mark.parametrize(
    'struct_and_expected',
    [
        (
            PublicOnly(),
            {
                'shown': ['name', 'age'],
                'hidden': [],
            },
        ),
        (
            MixedFields(),
            {
                'shown': ['name', 'value'],
                'hidden': ['_hidden', '_meta'],
            },
        ),
        (
            PrivateOnly(),
            {
                'shown': [],
                'hidden': ['_secret', '_internal'],
            },
        ),
    ],
    ids=[
        'all-public',
        'mixed-pub-priv',
        'all-private',
    ],
)
def test_field_visibility_in_pformat(
    struct_and_expected: tuple[
        Struct,
        dict[str, list[str]],
    ],
):
    '''
    Verify `pformat()` shows public fields
    and hides `_`-prefixed private fields.

    '''
    (
        struct,
        expected,
    ) = struct_and_expected
    output: str = pformat(struct)

    for field_name in expected['shown']:
        assert field_name in output, (
            f'{field_name!r} should appear in:\n'
            f'{output}'
        )

    for field_name in expected['hidden']:
        assert field_name not in output, (
            f'{field_name!r} should NOT appear in:\n'
            f'{output}'
        )


def test_iter_ppfmt_lines_skips_private():
    '''
    Directly verify `iter_struct_ppfmt_lines()`
    never yields tuples with `_`-prefixed field
    names.

    '''
    struct = MixedFields()
    lines: list[tuple[str, str]] = list(
        iter_struct_ppfmt_lines(
            struct,
            field_indent=2,
        )
    )
    # should have lines for public fields only
    assert len(lines) == 2

    for _prefix, line_content in lines:
        field_name: str = (
            line_content.split(':')[0].strip()
        )
        assert not field_name.startswith('_'), (
            f'private field leaked: {field_name!r}'
        )


def test_nested_struct_filters_inner_private():
    '''
    Verify that nested struct's private fields
    are also filtered out during recursion.

    '''
    outer = Outer()
    output: str = pformat(outer)

    # outer's public field
    assert 'label' in output

    # inner's public field (recursed into)
    assert 'x' in output

    # inner's private field must be hidden
    assert '_secret' not in output


def test_empty_struct_pformat():
    '''
    An empty struct should produce a valid
    `pformat()` result with no field lines.

    '''
    output: str = pformat(EmptyStruct())
    assert 'EmptyStruct(' in output
    assert output.rstrip().endswith(')')

    # no field lines => only struct header+footer
    lines: list[tuple[str, str]] = list(
        iter_struct_ppfmt_lines(
            EmptyStruct(),
            field_indent=2,
        )
    )
    assert lines == []


def test_real_msgdec_pformat_hides_private():
    '''
    Verify `pformat()` on a real `MsgDec`
    hides the `_dec` internal field.

    NOTE: `MsgDec.__repr__` is custom and does
    NOT call `pformat()`, so we call it directly.

    '''
    dec: MsgDec = mk_dec(spec=int)
    output: str = pformat(dec)

    # the private `_dec` field should be filtered
    assert '_dec' not in output

    # but the struct type name should be present
    assert 'MsgDec(' in output


def test_pformat_repr_integration():
    '''
    Verify that `Struct.__repr__()` (which calls
    `pformat()`) also hides private fields for
    custom structs that do NOT override `__repr__`.

    '''
    mixed = MixedFields()
    output: str = repr(mixed)

    assert 'name' in output
    assert 'value' in output
    assert '_hidden' not in output
    assert '_meta' not in output
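For reference, the behaviour this removed test module pins down can be shown with a tiny snippet; this is only a sketch, assuming nothing beyond the `Struct`/`pformat` imports used above (the `Creds` struct is a hypothetical example type):

```python
from tractor.msg.pretty_struct import Struct, pformat


class Creds(Struct):
    user: str = 'alice'
    _token: str = 'hunter2'  # underscore-prefixed => filtered


# `pformat()` (and the default `Struct.__repr__`) should render
# only the public `user` field and hide `_token`.
print(pformat(Creds()))
```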
@@ -1,12 +1,7 @@
-'''
-Audit the simplest inter-actor bidirectional (streaming)
-msg patterns.
+"""
+Bidirectional streaming.

-'''
-from __future__ import annotations
-from typing import (
-    Callable,
-)
+"""
 import pytest
 import trio
 import tractor

@@ -14,8 +9,10 @@ import tractor

 @tractor.context
 async def simple_rpc(
+
     ctx: tractor.Context,
     data: int,
+
 ) -> None:
     '''
     Test a small ping-pong server.

@@ -42,13 +39,15 @@ async def simple_rpc(

 @tractor.context
 async def simple_rpc_with_forloop(
+
     ctx: tractor.Context,
     data: int,
-) -> None:
-    '''
-    Same as previous test but using `async for` syntax/api.
-
-    '''
+
+) -> None:
+    """Same as previous test but using ``async for`` syntax/api.
+
+    """

     # signal to parent that we're up
     await ctx.started(data + 1)

@@ -69,37 +68,21 @@ async def simple_rpc_with_forloop(

 @pytest.mark.parametrize(
     'use_async_for',
-    [
-        True,
-        False,
-    ],
-    ids='use_async_for={}'.format,
+    [True, False],
 )
 @pytest.mark.parametrize(
     'server_func',
-    [
-        simple_rpc,
-        simple_rpc_with_forloop,
-    ],
-    ids='server_func={}'.format,
+    [simple_rpc, simple_rpc_with_forloop],
 )
-def test_simple_rpc(
-    server_func: Callable,
-    use_async_for: bool,
-    loglevel: str,
-    debug_mode: bool,
-):
+def test_simple_rpc(server_func, use_async_for):
     '''
     The simplest request response pattern.

     '''
     async def main():
-        with trio.fail_after(6):
-            async with tractor.open_nursery(
-                loglevel=loglevel,
-                debug_mode=debug_mode,
-            ) as an:
-                portal: tractor.Portal = await an.start_actor(
+        async with tractor.open_nursery() as n:
+            portal = await n.start_actor(
                 'rpc_server',
                 enable_modules=[__name__],
             )
@@ -98,8 +98,7 @@ def test_ipc_channel_break_during_stream(
     expect_final_exc = TransportClosed

     mod: ModuleType = import_path(
-        examples_dir()
-        / 'advanced_faults'
+        examples_dir() / 'advanced_faults'
         / 'ipc_failure_during_stream.py',
         root=examples_dir(),
         consider_namespace_packages=False,

@@ -114,9 +113,8 @@ def test_ipc_channel_break_during_stream(
     if (
         # only expect EoC if trans is broken on the child side,
         ipc_break['break_child_ipc_after'] is not False
-        and
         # AND we tell the child to call `MsgStream.aclose()`.
-        pre_aclose_msgstream
+        and pre_aclose_msgstream
     ):
         # expect_final_exc = trio.EndOfChannel
         # ^XXX NOPE! XXX^ since now `.open_stream()` absorbs this

@@ -146,6 +144,9 @@ def test_ipc_channel_break_during_stream(
         # a user sending ctl-c by raising a KBI.
         if pre_aclose_msgstream:
             expect_final_exc = KeyboardInterrupt
+            if tpt_proto == 'uds':
+                expect_final_exc = TransportClosed
+                expect_final_cause = trio.BrokenResourceError

         # XXX OLD XXX
         # if child calls `MsgStream.aclose()` then expect EoC.

@@ -159,13 +160,16 @@ def test_ipc_channel_break_during_stream(
         ipc_break['break_child_ipc_after'] is not False
         and (
             ipc_break['break_parent_ipc_after']
-            >
-            ipc_break['break_child_ipc_after']
+            > ipc_break['break_child_ipc_after']
         )
     ):
         if pre_aclose_msgstream:
             expect_final_exc = KeyboardInterrupt

+            if tpt_proto == 'uds':
+                expect_final_exc = TransportClosed
+                expect_final_cause = trio.BrokenResourceError
+
     # NOTE when the parent IPC side dies (even if the child does as well
     # but the child fails BEFORE the parent) we always expect the
     # IPC layer to raise a closed-resource, NEVER do we expect

@@ -244,15 +248,8 @@ def test_ipc_channel_break_during_stream(
         # get raw instance from pytest wrapper
         value = excinfo.value
         if isinstance(value, ExceptionGroup):
-            excs: tuple[Exception] = value.exceptions
-            assert (
-                len(excs) <= 2
-                and
-                all(
-                    isinstance(exc, TransportClosed)
-                    for exc in excs
-                )
-            )
+            excs = value.exceptions
+            assert len(excs) == 1
             final_exc = excs[0]
             assert isinstance(final_exc, expect_final_exc)
@@ -17,8 +17,8 @@ from tractor._testing import (
 from .conftest import no_windows


-_non_linux: bool = platform.system() != 'Linux'
-_friggin_windows: bool = platform.system() == 'Windows'
+def is_win():
+    return platform.system() == 'Windows'


 async def assert_err(delay=0):

@@ -431,7 +431,7 @@ async def test_nested_multierrors(loglevel, start_method):
         for subexc in err.exceptions:

             # verify first level actor errors are wrapped as remote
-            if _friggin_windows:
+            if is_win():

                 # windows is often too slow and cancellation seems
                 # to happen before an actor is spawned

@@ -464,7 +464,7 @@ async def test_nested_multierrors(loglevel, start_method):
             # XXX not sure what's up with this..
             # on windows sometimes spawning is just too slow and
             # we get back the (sent) cancel signal instead
-            if _friggin_windows:
+            if is_win():
                 if isinstance(subexc, tractor.RemoteActorError):
                     assert subexc.boxed_type in (
                         BaseExceptionGroup,

@@ -490,7 +490,7 @@ def test_cancel_via_SIGINT(
    """Ensure that a control-C (SIGINT) signal cancels both the parent and
    child processes in trionic fashion
    """
-    pid: int = os.getpid()
+    pid = os.getpid()

    async def main():
        with trio.fail_after(2):

@@ -507,31 +507,19 @@ def test_cancel_via_SIGINT(

 @no_windows
 def test_cancel_via_SIGINT_other_task(
-    loglevel: str,
-    start_method: str,
-    spawn_backend: str,
+    loglevel,
+    start_method,
+    spawn_backend,
 ):
-    '''
-    Ensure that a control-C (SIGINT) signal cancels both the parent
-    and child processes in trionic fashion even a subprocess is
-    started from a seperate ``trio`` child task.
-
-    '''
-    from .conftest import cpu_scaling_factor
-
-    pid: int = os.getpid()
-    timeout: float = (
-        4 if _non_linux
-        else 2
-    )
-    if _friggin_windows:  # smh
+    """Ensure that a control-C (SIGINT) signal cancels both the parent
+    and child processes in trionic fashion even a subprocess is started
+    from a seperate ``trio`` child task.
+    """
+    pid = os.getpid()
+    timeout: float = 2
+    if is_win():  # smh
         timeout += 1

-    # add latency headroom for CPU freq scaling (auto-cpufreq et al.)
-    headroom: float = cpu_scaling_factor()
-    if headroom != 1.:
-        timeout *= headroom
-
     async def spawn_and_sleep_forever(
         task_status=trio.TASK_STATUS_IGNORED
     ):

@@ -656,11 +644,7 @@ def test_cancel_while_childs_child_in_sync_sleep(
     #
     # delay = 1 # no AssertionError in eg, TooSlowError raised.
     # delay = 2 # is AssertionError in eg AND no TooSlowError !?
-    # is AssertionError in eg AND no _cs cancellation.
-    delay = (
-        6 if _non_linux
-        else 4
-    )
+    delay = 4 # is AssertionError in eg AND no _cs cancellation.

     with trio.fail_after(delay) as _cs:
     # with trio.CancelScope() as cs:

@@ -712,7 +696,7 @@ def test_fast_graceful_cancel_when_spawn_task_in_soft_proc_wait_for_daemon(
     kbi_delay = 0.5
     timeout: float = 2.9

-    if _friggin_windows:  # smh
+    if is_win():  # smh
         timeout += 1

     async def main():
@@ -18,15 +18,16 @@ from tractor import RemoteActorError

 async def aio_streamer(
-    chan: tractor.to_asyncio.LinkedTaskChannel,
+    from_trio: asyncio.Queue,
+    to_trio: trio.abc.SendChannel,
 ) -> trio.abc.ReceiveChannel:

     # required first msg to sync caller
-    chan.started_nowait(None)
+    to_trio.send_nowait(None)

     from itertools import cycle
     for i in cycle(range(10)):
-        chan.send_nowait(i)
+        to_trio.send_nowait(i)
         await asyncio.sleep(0.01)

@@ -68,7 +69,7 @@ async def wrapper_mngr(
     else:
         async with tractor.to_asyncio.open_channel_from(
             aio_streamer,
-        ) as (from_aio, first):
+        ) as (first, from_aio):
             assert not first

             # cache it so next task uses broadcast receiver
@@ -10,19 +10,7 @@ from tractor._testing import tractor_test
 MESSAGE = 'tractoring at full speed'


-def test_empty_mngrs_input_raises(
-    tpt_proto: str,
-) -> None:
-    # TODO, the `open_actor_cluster()` teardown hangs
-    # intermittently on UDS when `gather_contexts(mngrs=())`
-    # raises `ValueError` mid-setup; likely a race in the
-    # actor-nursery cleanup vs UDS socket shutdown. Needs
-    # a deeper look at `._clustering`/`._supervise` teardown
-    # paths with the UDS transport.
-    if tpt_proto == 'uds':
-        pytest.skip(
-            'actor-cluster teardown hangs intermittently on UDS'
-        )
-
+def test_empty_mngrs_input_raises() -> None:
     async def main():
         with trio.fail_after(3):

@@ -68,27 +56,13 @@ async def worker(
     print(msg)
     assert msg == MESSAGE

-    # ?TODO, does this ever cause a hang?
+    # TODO: does this ever cause a hang
     # assert 0


-# ?TODO, but needs a fn-scoped tpt_proto fixture..
-# @pytest.mark.no_tpt('uds')
 @tractor_test
-async def test_streaming_to_actor_cluster(
-    tpt_proto: str,
-):
-    '''
-    Open an actor "cluster" using the (experimental) `._clustering`
-    API and conduct standard inter-task-ctx streaming.
-
-    '''
-    if tpt_proto == 'uds':
-        pytest.skip(
-            f'Test currently fails with tpt-proto={tpt_proto!r}\n'
-        )
-
-    with trio.fail_after(6):
+async def test_streaming_to_actor_cluster() -> None:
         async with (
             open_actor_cluster(modules=[__name__]) as portals,
@@ -9,7 +9,6 @@ from itertools import count
 import math
 import platform
 from pprint import pformat
-import sys
 from typing import (
     Callable,
 )

@@ -26,7 +25,7 @@ from tractor._exceptions import (
     StreamOverrun,
     ContextCancelled,
 )
-from tractor.runtime._state import current_ipc_ctx
+from tractor._state import current_ipc_ctx

 from tractor._testing import (
     tractor_test,

@@ -939,14 +938,9 @@ def test_one_end_stream_not_opened(

     '''
     overrunner, buf_size_increase, entrypoint = overrun_by
-    from tractor.runtime._runtime import Actor
+    from tractor._runtime import Actor
     buf_size = buf_size_increase + Actor.msg_buffer_size

-    timeout: float = (
-        1 if sys.platform == 'linux'
-        else 3
-    )
-
     async def main():
         async with tractor.open_nursery(
             debug_mode=debug_mode,

@@ -956,7 +950,7 @@ def test_one_end_stream_not_opened(
                 enable_modules=[__name__],
             )

-            with trio.fail_after(timeout):
+            with trio.fail_after(1):
                 async with portal.open_context(
                     entrypoint,
                 ) as (ctx, sent):
@@ -1,14 +1,11 @@
-'''
-Discovery subsystem via a "registrar" actor scenarios.
-
-'''
+"""
+Actor "discovery" testing
+"""
 import os
 import signal
 import platform
 from functools import partial
 import itertools
-import time
-from typing import Callable

 import psutil
 import pytest

@@ -16,17 +13,13 @@ import subprocess
 import tractor
 from tractor.trionics import collapse_eg
 from tractor._testing import tractor_test
-from tractor.discovery._addr import wrap_address
-from tractor.discovery._multiaddr import mk_maddr
 import trio


 @tractor_test
-async def test_reg_then_unreg(
-    reg_addr: tuple,
-):
+async def test_reg_then_unreg(reg_addr):
     actor = tractor.current_actor()
-    assert actor.is_registrar
+    assert actor.is_arbiter
     assert len(actor._registry) == 1  # only self is registered

     async with tractor.open_nursery(

@@ -34,10 +27,10 @@ async def test_reg_then_unreg(
     ) as n:

         portal = await n.start_actor('actor', enable_modules=[__name__])
-        uid = portal.channel.aid.uid
+        uid = portal.channel.uid

         async with tractor.get_registry(reg_addr) as aportal:
-            # this local actor should be the registrar
+            # this local actor should be the arbiter
             assert actor is aportal.actor

             async with tractor.wait_for_actor('actor'):

@@ -55,49 +48,6 @@ async def test_reg_then_unreg(
     assert not sockaddrs


-@tractor_test
-async def test_reg_then_unreg_maddr(
-    reg_addr: tuple,
-):
-    '''
-    Same as `test_reg_then_unreg` but pass the registry
-    address as a multiaddr string to verify `wrap_address()`
-    multiaddr parsing end-to-end through the runtime.
-
-    '''
-    # tuple -> Address -> multiaddr string
-    addr_obj = wrap_address(reg_addr)
-    maddr_str: str = str(mk_maddr(addr_obj))
-
-    actor = tractor.current_actor()
-    assert actor.is_registrar
-
-    async with tractor.open_nursery(
-        registry_addrs=[maddr_str],
-    ) as n:
-
-        portal = await n.start_actor(
-            'actor_maddr',
-            enable_modules=[__name__],
-        )
-        uid = portal.channel.aid.uid
-
-        async with tractor.get_registry(maddr_str) as aportal:
-            assert actor is aportal.actor
-
-            async with tractor.wait_for_actor('actor_maddr'):
-                assert uid in aportal.actor._registry
-                sockaddrs = actor._registry[uid]
-                assert sockaddrs
-
-            await n.cancel()
-
-        await trio.sleep(0.1)
-        assert uid not in aportal.actor._registry
-        sockaddrs = actor._registry.get(uid)
-        assert not sockaddrs
-
-
 the_line = 'Hi my name is {}'
@@ -132,15 +82,11 @@ async def say_hello_use_wait(


 @tractor_test
-@pytest.mark.parametrize(
-    'func',
-    [say_hello,
-     say_hello_use_wait]
-)
+@pytest.mark.parametrize('func', [say_hello, say_hello_use_wait])
 async def test_trynamic_trio(
-    func: Callable,
-    start_method: str,
-    reg_addr: tuple,
+    func,
+    start_method,
+    reg_addr,
 ):
     '''
     Root actor acting as the "director" and running one-shot-task-actors

@@ -173,10 +119,7 @@ async def stream_forever():
         await trio.sleep(0.01)


-async def cancel(
-    use_signal: bool,
-    delay: float = 0,
-):
+async def cancel(use_signal, delay=0):
     # hold on there sally
     await trio.sleep(delay)

@@ -189,17 +132,15 @@ async def cancel(
         raise KeyboardInterrupt


-async def stream_from(portal: tractor.Portal):
+async def stream_from(portal):
     async with portal.open_stream_from(stream_forever) as stream:
         async for value in stream:
             print(value)


-async def unpack_reg(
-    actor_or_portal: tractor.Portal|tractor.Actor,
-):
+async def unpack_reg(actor_or_portal):
     '''
-    Get and unpack a "registry" RPC request from the registrar
+    Get and unpack a "registry" RPC request from the "arbiter" registry
     system.

     '''

@@ -208,10 +149,7 @@ async def unpack_reg(
     else:
         msg = await actor_or_portal.run_from_ns('self', 'get_registry')

-    return {
-        tuple(key.split('.')): val
-        for key, val in msg.items()
-    }
+    return {tuple(key.split('.')): val for key, val in msg.items()}


 async def spawn_and_check_registry(

@@ -235,26 +173,24 @@ async def spawn_and_check_registry(
         registry_addrs=[reg_addr],
         debug_mode=debug_mode,
     ):
-        async with tractor.get_registry(
-            addr=reg_addr,
-        ) as portal:
+        async with tractor.get_registry(reg_addr) as portal:
             # runtime needs to be up to call this
             actor = tractor.current_actor()

             if remote_arbiter:
-                assert not actor.is_registrar
+                assert not actor.is_arbiter

-            if actor.is_registrar:
-                extra = 1  # registrar is local root actor
+            if actor.is_arbiter:
+                extra = 1  # arbiter is local root actor
                 get_reg = partial(unpack_reg, actor)

             else:
                 get_reg = partial(unpack_reg, portal)
-                extra = 2  # local root actor + remote registrar
+                extra = 2  # local root actor + remote arbiter

             # ensure current actor is registered
             registry: dict = await get_reg()
-            assert actor.aid.uid in registry
+            assert actor.uid in registry

             try:
                 async with tractor.open_nursery() as an:
@@ -302,35 +238,22 @@ async def spawn_and_check_registry(

         # all subactors should have de-registered
         registry = await get_reg()
-        start: float = time.time()
-        while (
-            not (len(registry) == extra)
-            and
-            (time.time() - start) < 5
-        ):
-            print(
-                f'Waiting for remaining subs to dereg..\n'
-                f'{registry!r}\n'
-            )
-            await trio.sleep(0.3)
-        else:
-            assert len(registry) == extra
-            assert actor.aid.uid in registry
+        assert len(registry) == extra
+        assert actor.uid in registry


 @pytest.mark.parametrize('use_signal', [False, True])
 @pytest.mark.parametrize('with_streaming', [False, True])
 def test_subactors_unregister_on_cancel(
     debug_mode: bool,
-    start_method: str,
-    use_signal: bool,
-    reg_addr: tuple,
-    with_streaming: bool,
+    start_method,
+    use_signal,
+    reg_addr,
+    with_streaming,
 ):
     '''
     Verify that cancelling a nursery results in all subactors
-    deregistering themselves with the registrar.
+    deregistering themselves with the arbiter.

     '''
     with pytest.raises(KeyboardInterrupt):

@@ -351,17 +274,15 @@ def test_subactors_unregister_on_cancel(
 def test_subactors_unregister_on_cancel_remote_daemon(
     daemon: subprocess.Popen,
     debug_mode: bool,
-    start_method: str,
-    use_signal: bool,
-    reg_addr: tuple,
-    with_streaming: bool,
+    start_method,
+    use_signal,
+    reg_addr,
+    with_streaming,
 ):
-    '''
-    Verify that cancelling a nursery results in all subactors
-    deregistering themselves with a **remote** (not in the local
-    process tree) registrar.
-
-    '''
+    """Verify that cancelling a nursery results in all subactors
+    deregistering themselves with a **remote** (not in the local process
+    tree) arbiter.
+    """
     with pytest.raises(KeyboardInterrupt):
         trio.run(
             partial(

@@ -404,24 +325,20 @@ async def close_chans_before_nursery(
     try:
         get_reg = partial(unpack_reg, aportal)

-        async with tractor.open_nursery() as an:
-            portal1 = await an.start_actor(
-                name='consumer1',
-                enable_modules=[__name__],
-            )
-            portal2 = await an.start_actor(
-                'consumer2',
-                enable_modules=[__name__],
-            )
+        async with tractor.open_nursery() as tn:
+            portal1 = await tn.start_actor(
+                name='consumer1', enable_modules=[__name__])
+            portal2 = await tn.start_actor(
+                'consumer2', enable_modules=[__name__])

-            async with (
-                portal1.open_stream_from(
-                    stream_forever
-                ) as agen1,
-                portal2.open_stream_from(
-                    stream_forever
-                ) as agen2,
-            ):
+            # TODO: compact this back as was in last commit once
+            # 3.9+, see https://github.com/goodboy/tractor/issues/207
+            async with portal1.open_stream_from(
+                stream_forever
+            ) as agen1:
+                async with portal2.open_stream_from(
+                    stream_forever
+                ) as agen2:
                 async with (
                     collapse_eg(),
                     trio.open_nursery() as tn,

@@ -432,7 +349,7 @@ async def close_chans_before_nursery(
                     await streamer(agen2)
     finally:
         # Kill the root nursery thus resulting in
-        # normal registrar channel ops to fail during
+        # normal arbiter channel ops to fail during
         # teardown. It doesn't seem like this is
         # reliably triggered by an external SIGINT.
         # tractor.current_actor()._root_nursery.cancel_scope.cancel()

@@ -444,30 +361,27 @@ async def close_chans_before_nursery(
             # also kill off channels cuz why not
             await agen1.aclose()
             await agen2.aclose()

     finally:
         with trio.CancelScope(shield=True):
             await trio.sleep(1)

             # all subactors should have de-registered
             registry = await get_reg()
-            assert portal1.channel.aid.uid not in registry
-            assert portal2.channel.aid.uid not in registry
+            assert portal1.channel.uid not in registry
+            assert portal2.channel.uid not in registry
             assert len(registry) == entries_at_end


 @pytest.mark.parametrize('use_signal', [False, True])
 def test_close_channel_explicit(
-    start_method: str,
-    use_signal: bool,
-    reg_addr: tuple,
+    start_method,
+    use_signal,
+    reg_addr,
 ):
-    '''
-    Verify that closing a stream explicitly and killing the actor's
+    """Verify that closing a stream explicitly and killing the actor's
     "root nursery" **before** the containing nursery tears down also
-    results in subactor(s) deregistering from the registrar.
-
-    '''
+    results in subactor(s) deregistering from the arbiter.
+    """
     with pytest.raises(KeyboardInterrupt):
         trio.run(
             partial(

@@ -480,18 +394,16 @@ def test_close_channel_explicit(

 @pytest.mark.parametrize('use_signal', [False, True])
-def test_close_channel_explicit_remote_registrar(
+def test_close_channel_explicit_remote_arbiter(
     daemon: subprocess.Popen,
-    start_method: str,
-    use_signal: bool,
-    reg_addr: tuple,
+    start_method,
+    use_signal,
+    reg_addr,
 ):
-    '''
-    Verify that closing a stream explicitly and killing the actor's
+    """Verify that closing a stream explicitly and killing the actor's
     "root nursery" **before** the containing nursery tears down also
-    results in subactor(s) deregistering from the registrar.
-
-    '''
+    results in subactor(s) deregistering from the arbiter.
+    """
     with pytest.raises(KeyboardInterrupt):
         trio.run(
             partial(

@@ -501,65 +413,3 @@ def test_close_channel_explicit_remote_registrar(
             remote_arbiter=True,
         ),
     )
-
-
-@tractor.context
-async def kill_transport(
-    ctx: tractor.Context,
-) -> None:
-
-    await ctx.started()
-    actor: tractor.Actor = tractor.current_actor()
-    actor.ipc_server.cancel()
-    await trio.sleep_forever()
-
-
-# @pytest.mark.parametrize('use_signal', [False, True])
-def test_stale_entry_is_deleted(
-    debug_mode: bool,
-    daemon: subprocess.Popen,
-    start_method: str,
-    reg_addr: tuple,
-):
-    '''
-    Ensure that when a stale entry is detected in the registrar's
-    table that the `find_actor()` API takes care of deleting the
-    stale entry and not delivering a bad portal.
-
-    '''
-    async def main():
-
-        name: str = 'transport_fails_actor'
-        _reg_ptl: tractor.Portal
-        an: tractor.ActorNursery
-        async with (
-            tractor.open_nursery(
-                debug_mode=debug_mode,
-                registry_addrs=[reg_addr],
-            ) as an,
-            tractor.get_registry(reg_addr) as _reg_ptl,
-        ):
-            ptl: tractor.Portal = await an.start_actor(
-                name,
-                enable_modules=[__name__],
-            )
-            async with ptl.open_context(
-                kill_transport,
-            ) as (first, ctx):
-                async with tractor.find_actor(
-                    name,
-                    registry_addrs=[reg_addr],
-                ) as maybe_portal:
-                    # because the transitive
-                    # `._api.maybe_open_portal()` call should
-                    # fail and implicitly call `.delete_addr()`
-                    assert maybe_portal is None
-                registry: dict = await unpack_reg(_reg_ptl)
-                assert ptl.chan.aid.uid not in registry
-
-                # should fail since we knocked out the IPC tpt XD
-                await ptl.cancel_actor()
-            await an.cancel()
-
-    trio.run(main)
@@ -9,17 +9,12 @@ import sys
 import subprocess
 import platform
 import shutil
-from typing import Callable

 import pytest
-import tractor
 from tractor._testing import (
     examples_dir,
 )

-_non_linux: bool = platform.system() != 'Linux'
-_friggin_macos: bool = platform.system() == 'Darwin'


 @pytest.fixture
 def run_example_in_subproc(

@@ -94,10 +89,8 @@ def run_example_in_subproc(
         for f in p[2]

         if (
-            '__' not in f  # ignore any pkg-mods
-            # ignore any `__pycache__` subdir
-            and '__pycache__' not in str(p[0])
-            and f[0] != '_'  # ignore any WIP "examplel mods"
+            '__' not in f
+            and f[0] != '_'
             and 'debugging' not in p[0]
             and 'integration' not in p[0]
             and 'advanced_faults' not in p[0]

@@ -108,10 +101,8 @@ def run_example_in_subproc(
     ids=lambda t: t[1],
 )
 def test_example(
-    run_example_in_subproc: Callable,
-    example_script: str,
-    test_log: tractor.log.StackLevelAdapter,
-    ci_env: bool,
+    run_example_in_subproc,
+    example_script,
 ):
     '''
     Load and run scripts from this repo's ``examples/`` dir as a user

@@ -125,39 +116,9 @@ def test_example(
     '''
     ex_file: str = os.path.join(*example_script)

-    if (
-        'rpc_bidir_streaming' in ex_file
-        and
-        sys.version_info < (3, 9)
-    ):
+    if 'rpc_bidir_streaming' in ex_file and sys.version_info < (3, 9):
         pytest.skip("2-way streaming example requires py3.9 async with syntax")

-    if (
-        'full_fledged_streaming_service' in ex_file
-        and
-        _friggin_macos
-        and
-        ci_env
-    ):
-        pytest.skip(
-            'Streaming example is too flaky in CI\n'
-            'AND their competitor runs this CI service..\n'
-            'This test does run just fine "in person" however..'
-        )
-
-    from .conftest import cpu_scaling_factor
-
-    timeout: float = (
-        60
-        if ci_env and _non_linux
-        else 16
-    )
-
-    # add latency headroom for CPU freq scaling (auto-cpufreq et al.)
-    headroom: float = cpu_scaling_factor()
-    if headroom != 1.:
-        timeout *= headroom
-
     with open(ex_file, 'r') as ex:
         code = ex.read()

@@ -165,12 +126,9 @@ def test_example(
         err = None
         try:
             if not proc.poll():
-                _, err = proc.communicate(timeout=timeout)
+                _, err = proc.communicate(timeout=15)

         except subprocess.TimeoutExpired as e:
-            test_log.exception(
-                f'Example failed to finish within {timeout}s ??\n'
-            )
             proc.kill()
             err = e.stderr
@ -26,8 +26,8 @@ from tractor import (
|
||||||
to_asyncio,
|
to_asyncio,
|
||||||
RemoteActorError,
|
RemoteActorError,
|
||||||
ContextCancelled,
|
ContextCancelled,
|
||||||
|
_state,
|
||||||
)
|
)
|
||||||
from tractor.runtime import _state
|
|
||||||
from tractor.trionics import BroadcastReceiver
|
from tractor.trionics import BroadcastReceiver
|
||||||
from tractor._testing import expect_ctxc
|
from tractor._testing import expect_ctxc
|
||||||
|
|
||||||
|
|
@ -47,11 +47,12 @@ async def sleep_and_err(
|
||||||
|
|
||||||
# just signature placeholders for compat with
|
# just signature placeholders for compat with
|
||||||
# ``to_asyncio.open_channel_from()``
|
# ``to_asyncio.open_channel_from()``
|
||||||
chan: to_asyncio.LinkedTaskChannel|None = None,
|
to_trio: trio.MemorySendChannel|None = None,
|
||||||
|
from_trio: asyncio.Queue|None = None,
|
||||||
|
|
||||||
):
|
):
|
||||||
if chan:
|
if to_trio:
|
||||||
chan.started_nowait('start')
|
to_trio.send_nowait('start')
|
||||||
|
|
||||||
await asyncio.sleep(sleep_for)
|
await asyncio.sleep(sleep_for)
|
||||||
assert 0
|
assert 0
|
||||||
|
|
@ -237,7 +238,7 @@ async def trio_ctx(
|
||||||
trio.open_nursery() as tn,
|
trio.open_nursery() as tn,
|
||||||
tractor.to_asyncio.open_channel_from(
|
tractor.to_asyncio.open_channel_from(
|
||||||
sleep_and_err,
|
sleep_and_err,
|
||||||
) as (chan, first),
|
) as (first, chan),
|
||||||
):
|
):
|
||||||
|
|
||||||
assert first == 'start'
|
assert first == 'start'
|
||||||
|
|
@ -398,7 +399,7 @@ async def no_to_trio_in_args():
|
||||||
|
|
||||||
async def push_from_aio_task(
|
async def push_from_aio_task(
|
||||||
sequence: Iterable,
|
sequence: Iterable,
|
||||||
chan: to_asyncio.LinkedTaskChannel,
|
to_trio: trio.abc.SendChannel,
|
||||||
expect_cancel: False,
|
expect_cancel: False,
|
||||||
fail_early: bool,
|
fail_early: bool,
|
||||||
exit_early: bool,
|
exit_early: bool,
|
||||||
|
|
@ -406,12 +407,15 @@ async def push_from_aio_task(
|
||||||
) -> None:
|
) -> None:
|
||||||
|
|
||||||
try:
|
try:
|
||||||
|
# print('trying breakpoint')
|
||||||
|
# breakpoint()
|
||||||
|
|
||||||
# sync caller ctx manager
|
# sync caller ctx manager
|
||||||
chan.started_nowait(True)
|
to_trio.send_nowait(True)
|
||||||
|
|
||||||
for i in sequence:
|
for i in sequence:
|
||||||
print(f'asyncio sending {i}')
|
print(f'asyncio sending {i}')
|
||||||
chan.send_nowait(i)
|
to_trio.send_nowait(i)
|
||||||
await asyncio.sleep(0.001)
|
await asyncio.sleep(0.001)
|
||||||
|
|
||||||
if (
|
if (
|
||||||
|
|
@ -474,7 +478,7 @@ async def stream_from_aio(
|
||||||
trio_exit_early
|
trio_exit_early
|
||||||
))
|
))
|
||||||
|
|
||||||
) as (chan, first):
|
) as (first, chan):
|
||||||
|
|
||||||
assert first is True
|
assert first is True
|
||||||
|
|
||||||
|
|
@ -728,21 +732,15 @@ def test_aio_errors_and_channel_propagates_and_closes(
|
||||||
|
|
||||||
|
|
||||||
async def aio_echo_server(
|
async def aio_echo_server(
|
||||||
chan: to_asyncio.LinkedTaskChannel,
|
to_trio: trio.MemorySendChannel,
|
||||||
|
+from_trio: asyncio.Queue,
) -> None:
-'''
-An IPC-msg "echo server" with msgs received and relayed by
-a parent `trio.Task` into a child `asyncio.Task`
-and then repeated back to that local parent (`trio.Task`)
-and sent again back to the original calling remote actor.

-'''
+to_trio.send_nowait('start')
-# same semantics as `trio.TaskStatus.started()`
-chan.started_nowait('start')

while True:
try:
-msg = await chan.get()
+msg = await from_trio.get()
except to_asyncio.TrioTaskExited:
print(
'breaking aio echo loop due to `trio` exit!'
@@ -750,7 +748,7 @@ async def aio_echo_server(
break

# echo the msg back
-chan.send_nowait(msg)
+to_trio.send_nowait(msg)

# if we get the terminate sentinel
# break the echo loop
@@ -767,10 +765,7 @@ async def trio_to_aio_echo_server(
):
async with to_asyncio.open_channel_from(
aio_echo_server,
-) as (
+) as (first, chan):
-chan,
-first, # value from `chan.started_nowait()` above
-):
assert first == 'start'

await ctx.started(first)
@@ -781,8 +776,7 @@ async def trio_to_aio_echo_server(
await chan.send(msg)

out = await chan.receive()
+# echo back to parent actor-task
-# echo back to parent-actor's remote parent-ctx-task!
await stream.send(out)

if out is None:
@@ -1096,21 +1090,24 @@ def test_sigint_closes_lifetime_stack(


# ?TODO asyncio.Task fn-deco?
+# -[ ] do sig checkingat import time like @context?
+# -[ ] maybe name it @aio_task ??
# -[ ] chan: to_asyncio.InterloopChannel ??
-# -[ ] do fn-sig checking at import time like @context?
-# |_[ ] maybe name it @a(sync)io_task ??
-# @asyncio_task <- not bad ??
async def raise_before_started(
+# from_trio: asyncio.Queue,
+# to_trio: trio.abc.SendChannel,
chan: to_asyncio.LinkedTaskChannel,

) -> None:
'''
`asyncio.Task` entry point which RTEs before calling
-`chan.started_nowait()`.
+`to_trio.send_nowait()`.

'''
await asyncio.sleep(0.2)
raise RuntimeError('Some shite went wrong before `.send_nowait()`!!')

+# to_trio.send_nowait('Uhh we shouldve RTE-d ^^ ??')
chan.started_nowait('Uhh we shouldve RTE-d ^^ ??')
await asyncio.sleep(float('inf'))

@@ -11,21 +11,18 @@ import trio
import tractor
from tractor import ( # typing
Actor,
-Context,
-ContextCancelled,
-MsgStream,
-Portal,
-RemoteActorError,
current_actor,
open_nursery,
+Portal,
+Context,
+ContextCancelled,
+RemoteActorError,
)
from tractor._testing import (
# tractor_test,
expect_ctxc,
)

-from .conftest import cpu_scaling_factor

# XXX TODO cases:
# - [x] WE cancelled the peer and thus should not see any raised
# `ContextCancelled` as it should be reaped silently?
@@ -203,7 +200,7 @@ async def stream_from_peer(
) -> None:

# sanity
-assert tractor.debug_mode() == debug_mode
+assert tractor._state.debug_mode() == debug_mode

peer: Portal
try:
@@ -583,7 +580,7 @@ def test_peer_canceller(
assert (
re.canceller
==
-root.aid.uid
+root.uid
)

else: # the other 2 ctxs
@@ -592,7 +589,7 @@ def test_peer_canceller(
and (
re.canceller
==
-canceller.channel.aid.uid
+canceller.channel.uid
)
)

@@ -747,7 +744,7 @@ def test_peer_canceller(
# -> each context should have received
# a silently absorbed context cancellation
# in its remote nursery scope.
-# assert ctx.chan.aid.uid == ctx.canceller
+# assert ctx.chan.uid == ctx.canceller

# NOTE: when an inter-peer cancellation
# occurred, we DO NOT expect this
@@ -799,12 +796,12 @@ async def basic_echo_server(

) -> None:
'''
-Just the simplest `MsgStream` echo server which resays what you
+Just the simplest `MsgStream` echo server which resays what
-told it but with its uid in front ;)
+you told it but with its uid in front ;)

'''
actor: Actor = tractor.current_actor()
-uid: tuple = actor.aid.uid
+uid: tuple = actor.uid
await ctx.started(uid)
async with ctx.open_stream() as ipc:
async for msg in ipc:
@@ -843,7 +840,7 @@ async def serve_subactors(
async with open_nursery() as an:

# sanity
-assert tractor.debug_mode() == debug_mode
+assert tractor._state.debug_mode() == debug_mode

await ctx.started(peer_name)
async with ctx.open_stream() as ipc:
@@ -859,7 +856,7 @@ async def serve_subactors(
f'|_{peer}\n'
)
await ipc.send((
-peer.chan.aid.uid,
+peer.chan.uid,
peer.chan.raddr.unwrap(),
))

@@ -882,7 +879,7 @@ async def client_req_subactor(
) -> None:
# sanity
if debug_mode:
-assert tractor.debug_mode()
+assert tractor._state.debug_mode()

# TODO: other cases to do with sub lifetimes:
# -[ ] test that we can have the server spawn a sub
@@ -969,14 +966,9 @@ async def tell_little_bro(

caller: str = '',
err_after: float|None = None,
-rng_seed: int = 100,
+rng_seed: int = 50,
-# NOTE, ensure ^ is large enough (on fast hw anyway)
-# to ensure the peer cancel req arrives before the
-# echoing dialog does itself Bp
):
# contact target actor, do a stream dialog.
-lb: Portal
-echo_ipc: MsgStream
async with (
tractor.wait_for_actor(
name=actor_name
@@ -991,17 +983,17 @@ async def tell_little_bro(
else None
),
) as (sub_ctx, first),

sub_ctx.open_stream() as echo_ipc,
):
actor: Actor = current_actor()
-uid: tuple = actor.aid.uid
+uid: tuple = actor.uid
for i in range(rng_seed):
msg: tuple = (
uid,
i,
)
await echo_ipc.send(msg)
-await trio.sleep(0.001)
resp = await echo_ipc.receive()
print(
f'{caller} => {actor_name}: {msg}\n'
@@ -1014,9 +1006,6 @@ async def tell_little_bro(
assert sub_uid != uid
assert _i == i

-# XXX, usually should never get here!
-# await tractor.pause()


@pytest.mark.parametrize(
'raise_client_error',
@@ -1031,10 +1020,6 @@ def test_peer_spawns_and_cancels_service_subactor(
raise_client_error: str,
reg_addr: tuple[str, int],
raise_sub_spawn_error_after: float|None,
-loglevel: str,
-test_log: tractor.log.StackLevelAdapter,
-# ^XXX, set to 'warning' to see masked-exc warnings
-# that may transpire during actor-nursery teardown.
):
# NOTE: this tests for the modden `mod wks open piker` bug
# discovered as part of implementing workspace ctx
@@ -1064,7 +1049,6 @@ def test_peer_spawns_and_cancels_service_subactor(
# NOTE: to halt the peer tasks on ctxc, uncomment this.
debug_mode=debug_mode,
registry_addrs=[reg_addr],
-loglevel=loglevel,
) as an:
server: Portal = await an.start_actor(
(server_name := 'spawn_server'),
@@ -1100,7 +1084,7 @@ def test_peer_spawns_and_cancels_service_subactor(
) as (client_ctx, client_says),
):
root: Actor = current_actor()
-spawner_uid: tuple = spawn_ctx.chan.aid.uid
+spawner_uid: tuple = spawn_ctx.chan.uid
print(
f'Server says: {first}\n'
f'Client says: {client_says}\n'
@@ -1119,7 +1103,7 @@ def test_peer_spawns_and_cancels_service_subactor(
print(
'Sub-spawn came online\n'
f'portal: {sub}\n'
-f'.uid: {sub.actor.aid.uid}\n'
+f'.uid: {sub.actor.uid}\n'
f'chan.raddr: {sub.chan.raddr}\n'
)

@@ -1153,7 +1137,7 @@ def test_peer_spawns_and_cancels_service_subactor(

assert isinstance(res, ContextCancelled)
assert client_ctx.cancel_acked
-assert res.canceller == root.aid.uid
+assert res.canceller == root.uid
assert not raise_sub_spawn_error_after

# cancelling the spawner sub should
@@ -1187,8 +1171,8 @@ def test_peer_spawns_and_cancels_service_subactor(
# little_bro: a `RuntimeError`.
#
check_inner_rte(rae)
-assert rae.relay_uid == client.chan.aid.uid
+assert rae.relay_uid == client.chan.uid
-assert rae.src_uid == sub.chan.aid.uid
+assert rae.src_uid == sub.chan.uid

assert not client_ctx.cancel_acked
assert (
@@ -1217,12 +1201,12 @@ def test_peer_spawns_and_cancels_service_subactor(
except ContextCancelled as ctxc:
_ctxc = ctxc
print(
-f'{root.aid.uid} caught ctxc from ctx with {client_ctx.chan.aid.uid}\n'
+f'{root.uid} caught ctxc from ctx with {client_ctx.chan.uid}\n'
f'{repr(ctxc)}\n'
)

if not raise_sub_spawn_error_after:
-assert ctxc.canceller == root.aid.uid
+assert ctxc.canceller == root.uid
else:
assert ctxc.canceller == spawner_uid

@@ -1253,20 +1237,9 @@ def test_peer_spawns_and_cancels_service_subactor(

# assert spawn_ctx.cancelled_caught


async def _main():
-headroom: float = cpu_scaling_factor()
-this_fast_on_linux: float = 3
-this_fast = this_fast_on_linux * headroom
-if headroom != 1.:
-test_log.warning(
-f'Adding latency headroom on linux bc CPU scaling,\n'
-f'headroom: {headroom}\n'
-f'this_fast_on_linux: {this_fast_on_linux} -> {this_fast}\n'
-)
with trio.fail_after(
-this_fast
+3 if not debug_mode
-if not debug_mode
else 999
):
await main()

@@ -1,11 +1,9 @@
"""
-Streaming via the, now legacy, "async-gen API".
+Streaming via async gen api

"""
import time
from functools import partial
import platform
-from typing import Callable

import trio
import tractor
@@ -21,11 +19,7 @@ def test_must_define_ctx():
async def no_ctx():
pass

-assert (
+assert "no_ctx must be `ctx: tractor.Context" in str(err.value)
-"no_ctx must be `ctx: tractor.Context"
-in
-str(err.value)
-)

@tractor.stream
async def has_ctx(ctx):
@@ -75,14 +69,14 @@ async def stream_from_single_subactor(
async with tractor.open_nursery(
registry_addrs=[reg_addr],
start_method=start_method,
-) as an:
+) as nursery:

async with tractor.find_actor('streamerd') as portals:

if not portals:

# no brokerd actor found
-portal = await an.start_actor(
+portal = await nursery.start_actor(
'streamerd',
enable_modules=[__name__],
)
@@ -122,22 +116,11 @@ async def stream_from_single_subactor(


@pytest.mark.parametrize(
-'stream_func',
+'stream_func', [async_gen_stream, context_stream]
-[
-async_gen_stream,
-context_stream,
-],
-ids='stream_func={}'.format
)
-def test_stream_from_single_subactor(
+def test_stream_from_single_subactor(reg_addr, start_method, stream_func):
-reg_addr: tuple,
+"""Verify streaming from a spawned async generator.
-start_method: str,
+"""
-stream_func: Callable,
-):
-'''
-Verify streaming from a spawned async generator.

-'''
trio.run(
partial(
stream_from_single_subactor,
@@ -149,9 +132,10 @@ def test_stream_from_single_subactor(


# this is the first 2 actors, streamer_1 and streamer_2
-async def stream_data(seed: int):
+async def stream_data(seed):

for i in range(seed):

yield i

# trigger scheduler to simulate practical usage
@@ -159,17 +143,15 @@ async def stream_data(seed: int):


# this is the third actor; the aggregator
-async def aggregate(seed: int):
+async def aggregate(seed):
-'''
+"""Ensure that the two streams we receive match but only stream
-Ensure that the two streams we receive match but only stream
a single set of values to the parent.
+"""
-'''
+async with tractor.open_nursery() as nursery:
-async with tractor.open_nursery() as an:
portals = []
for i in range(1, 3):
# fork point
-portal = await an.start_actor(
+portal = await nursery.start_actor(
name=f'streamer_{i}',
enable_modules=[__name__],
)
@@ -182,28 +164,20 @@ async def aggregate(seed: int):
async with send_chan:

async with portal.open_stream_from(
-stream_data,
+stream_data, seed=seed,
-seed=seed,
) as stream:

async for value in stream:
# leverage trio's built-in backpressure
await send_chan.send(value)

-print(
+print(f"FINISHED ITERATING {portal.channel.uid}")
-f'FINISHED ITERATING!\n'
-f'peer: {portal.channel.aid.uid}'
-)

# spawn 2 trio tasks to collect streams and push to a local queue
-async with trio.open_nursery() as tn:
+async with trio.open_nursery() as n:

for portal in portals:
-tn.start_soon(
+n.start_soon(push_to_chan, portal, send_chan.clone())
-push_to_chan,
-portal,
-send_chan.clone(),
-)

# close this local task's reference to send side
await send_chan.aclose()
@@ -220,21 +194,20 @@ async def aggregate(seed: int):

print("FINISHED ITERATING in aggregator")

-await an.cancel()
+await nursery.cancel()
print("WAITING on `ActorNursery` to finish")
print("AGGREGATOR COMPLETE!")


-async def a_quadruple_example() -> list[int]:
+# this is the main actor and *arbiter*
-'''
+async def a_quadruple_example():
-Open the root-actor which is also a "registrar".
+# a nursery which spawns "actors"
+async with tractor.open_nursery() as nursery:

-'''
-async with tractor.open_nursery() as an:
seed = int(1e3)
pre_start = time.time()

-portal = await an.start_actor(
+portal = await nursery.start_actor(
name='aggregator',
enable_modules=[__name__],
)
@@ -255,14 +228,8 @@ async def a_quadruple_example() -> list[int]:
return result_stream


-async def cancel_after(
+async def cancel_after(wait, reg_addr):
-wait: float,
+async with tractor.open_root_actor(registry_addrs=[reg_addr]):
-reg_addr: tuple,
-) -> list[int]:

-async with tractor.open_root_actor(
-registry_addrs=[reg_addr],
-):
with trio.move_on_after(wait):
return await a_quadruple_example()

@@ -273,10 +240,6 @@ def time_quad_ex(
ci_env: bool,
spawn_backend: str,
):
-non_linux: bool = (_sys := platform.system()) != 'Linux'
-if ci_env and non_linux:
-pytest.skip(f'Test is too flaky on {_sys!r} in CI')

if spawn_backend == 'mp':
'''
no idea but the mp *nix runs are flaking out here often...
@@ -284,59 +247,32 @@ def time_quad_ex(
'''
pytest.skip("Test is too flaky on mp in CI")

-timeout = 7 if non_linux else 4
+timeout = 7 if platform.system() in ('Windows', 'Darwin') else 4
start = time.time()
-results: list[int] = trio.run(
+results = trio.run(cancel_after, timeout, reg_addr)
-cancel_after,
+diff = time.time() - start
-timeout,
-reg_addr,
-)
-diff: float = time.time() - start
assert results
return results, diff


def test_a_quadruple_example(
-time_quad_ex: tuple[list[int], float],
+time_quad_ex: tuple,
ci_env: bool,
spawn_backend: str,
-test_log: tractor.log.StackLevelAdapter,
):
'''
-This also serves as a "we'd like to be this fast" smoke test
+This also serves as a kind of "we'd like to be this fast test".
-given past empirical eval of this suite.

'''
-non_linux: bool = (_sys := platform.system()) != 'Linux'

-this_fast_on_linux: float = 3
-this_fast = (
-6 if non_linux
-else this_fast_on_linux
-)
-# ^ XXX NOTE,
-# i've noticed that tweaking the CPU governor setting
-# to not "always" enable "turbo" mode can result in latency
-# which causes this limit to be too little. Not sure if it'd
-# be worth it to adjust the linux value based on reading the
-# CPU conf from the sys?
-#
-# For ex, see the `auto-cpufreq` docs on such settings,
-# https://github.com/AdnanHodzic/auto-cpufreq?tab=readme-ov-file#example-config-file-contents
-#
-# HENCE this below latency-headroom compensation logic..
-from .conftest import cpu_scaling_factor
-headroom: float = cpu_scaling_factor()
-if headroom != 1.:
-this_fast = this_fast_on_linux * headroom
-test_log.warning(
-f'Adding latency headroom on linux bc CPU scaling,\n'
-f'headroom: {headroom}\n'
-f'this_fast_on_linux: {this_fast_on_linux} -> {this_fast}\n'
-)

results, diff = time_quad_ex
assert results
+this_fast = (
+6 if platform.system() in (
+'Windows',
+'Darwin',
+)
+else 3
+)
assert diff < this_fast


@@ -345,58 +281,43 @@ def test_a_quadruple_example(
list(map(lambda i: i/10, range(3, 9)))
)
def test_not_fast_enough_quad(
-reg_addr: tuple,
+reg_addr, time_quad_ex, cancel_delay, ci_env, spawn_backend
-time_quad_ex: tuple[list[int], float],
-cancel_delay: float,
-ci_env: bool,
-spawn_backend: str,
):
-'''
+"""Verify we can cancel midway through the quad example and all actors
-Verify we can cancel midway through the quad example and all
+cancel gracefully.
-actors cancel gracefully.
+"""

-'''
results, diff = time_quad_ex
delay = max(diff - cancel_delay, 0)
-results = trio.run(
+results = trio.run(cancel_after, delay, reg_addr)
-cancel_after,
+system = platform.system()
-delay,
+if system in ('Windows', 'Darwin') and results is not None:
-reg_addr,
-)
-system: str = platform.system()
-if (
-system in ('Windows', 'Darwin')
-and
-results is not None
-):
# In CI envoirments it seems later runs are quicker then the first
# so just ignore these
-print(f'Woa there {system} caught your breath eh?')
+print(f"Woa there {system} caught your breath eh?")
else:
# should be cancelled mid-streaming
assert results is None


-@tractor_test(timeout=20)
+@tractor_test
async def test_respawn_consumer_task(
-reg_addr: tuple,
+reg_addr,
-spawn_backend: str,
+spawn_backend,
-loglevel: str,
+loglevel,
):
-'''
+"""Verify that ``._portal.ReceiveStream.shield()``
-Verify that ``._portal.ReceiveStream.shield()``
sucessfully protects the underlying IPC channel from being closed
when cancelling and respawning a consumer task.

This also serves to verify that all values from the stream can be
received despite the respawns.

-'''
+"""
stream = None

-async with tractor.open_nursery() as an:
+async with tractor.open_nursery() as n:

-portal = await an.start_actor(
+portal = await n.start_actor(
name='streamer',
enable_modules=[__name__]
)

@@ -1,5 +1,5 @@
"""
-Registrar and "local" actor api
+Arbiter and "local" actor api
"""
import time

@@ -12,11 +12,11 @@ from tractor._testing import tractor_test

@pytest.mark.trio
async def test_no_runtime():
-"""A registrar must be established before any nurseries
+"""An arbitter must be established before any nurseries
can be created.

-(In other words ``tractor.open_root_actor()`` must be
+(In other words ``tractor.open_root_actor()`` must be engaged at
-engaged at some point?)
+some point?)
"""
with pytest.raises(RuntimeError) :
async with tractor.find_actor('doggy'):
@@ -25,9 +25,9 @@ async def test_no_runtime():

@tractor_test
async def test_self_is_registered(reg_addr):
-"Verify waiting on the registrar to register itself using the standard api."
+"Verify waiting on the arbiter to register itself using the standard api."
actor = tractor.current_actor()
-assert actor.is_registrar
+assert actor.is_arbiter
with trio.fail_after(0.2):
async with tractor.wait_for_actor('root') as portal:
assert portal.channel.uid[0] == 'root'
@@ -35,11 +35,11 @@ async def test_self_is_registered(reg_addr):

@tractor_test
async def test_self_is_registered_localportal(reg_addr):
-"Verify waiting on the registrar to register itself using a local portal."
+"Verify waiting on the arbiter to register itself using a local portal."
actor = tractor.current_actor()
-assert actor.is_registrar
+assert actor.is_arbiter
async with tractor.get_registry(reg_addr) as portal:
-assert isinstance(portal, tractor.runtime._portal.LocalPortal)
+assert isinstance(portal, tractor._portal.LocalPortal)

with trio.fail_after(0.2):
sockaddr = await portal.run_from_ns(
@@ -57,8 +57,8 @@ def test_local_actor_async_func(reg_addr):
async with tractor.open_root_actor(
registry_addrs=[reg_addr],
):
-# registrar is started in-proc if dne
+# arbiter is started in-proc if dne
-assert tractor.current_actor().is_registrar
+assert tractor.current_actor().is_arbiter

for i in range(10):
nums.append(i)

@@ -1,185 +0,0 @@
-'''
-`tractor.log`-wrapping unit tests.

-'''
-from pathlib import Path
-import shutil
-from types import ModuleType

-import pytest
-import tractor
-from tractor import (
-_code_load,
-log,
-)


-def test_root_pkg_not_duplicated_in_logger_name():
-'''
-When both `pkg_name` and `name` are passed and they have
-a common `<root_name>.< >` prefix, ensure that it is not
-duplicated in the child's `StackLevelAdapter.name: str`.

-'''
-project_name: str = 'pylib'
-pkg_path: str = 'pylib.subpkg.mod'

-assert not tractor.current_actor(
-err_on_no_runtime=False,
-)
-proj_log = log.get_logger(
-pkg_name=project_name,
-mk_sublog=False,
-)

-sublog = log.get_logger(
-pkg_name=project_name,
-name=pkg_path,
-)

-assert proj_log is not sublog
-assert sublog.name.count(proj_log.name) == 1
-assert 'mod' not in sublog.name


-def test_implicit_mod_name_applied_for_child(
-testdir: pytest.Pytester,
-loglevel: str,
-):
-'''
-Verify that when `.log.get_logger(pkg_name='pylib')` is called
-from a given sub-mod from within the `pylib` pkg-path, we
-implicitly set the equiv of `name=__name__` from the caller's
-module.

-'''
-# tractor.log.get_console_log(level=loglevel)
-proj_name: str = 'snakelib'
-mod_code: str = (
-f'import tractor\n'
-f'\n'
-# if you need to trace `testdir` stuff @ import-time..
-# f'breakpoint()\n'
-f'log = tractor.log.get_logger(pkg_name="{proj_name}")\n'
-)

-# create a sub-module for each pkg layer
-_lib = testdir.mkpydir(proj_name)
-pkg: Path = Path(_lib)
-pkg_init_mod: Path = pkg / "__init__.py"
-pkg_init_mod.write_text(mod_code)

-subpkg: Path = pkg / 'subpkg'
-subpkg.mkdir()
-subpkgmod: Path = subpkg / "__init__.py"
-subpkgmod.touch()
-subpkgmod.write_text(mod_code)

-_submod: Path = testdir.makepyfile(
-_mod=mod_code,
-)

-pkg_submod = pkg / 'mod.py'
-pkg_subpkg_submod = subpkg / 'submod.py'
-shutil.copyfile(
-_submod,
-pkg_submod,
-)
-shutil.copyfile(
-_submod,
-pkg_subpkg_submod,
-)
-testdir.chdir()
-# NOTE, to introspect the py-file-module-layout use (in .xsh
-# syntax): `ranger @str(testdir)`

-# XXX NOTE, once the "top level" pkg mod has been
-# imported, we can then use `import` syntax to
-# import it's sub-pkgs and modules.
-subpkgmod: ModuleType = _code_load.load_module_from_path(
-Path(pkg / '__init__.py'),
-module_name=proj_name,
-)

-pkg_root_log = log.get_logger(
-pkg_name=proj_name,
-mk_sublog=False,
-)
-# the top level pkg-mod, created just now,
-# by above API call.
-assert pkg_root_log.name == proj_name
-assert not pkg_root_log.logger.getChildren()
-#
-# ^TODO! test this same output but created via a `get_logger()`
-# call in the `snakelib.__init__py`!!

-# NOTE, the pkg-level "init mod" should of course
-# have the same name as the package ns-path.
-import snakelib as init_mod
-assert init_mod.log.name == proj_name

-# NOTE, a first-pkg-level sub-module should only
-# use the package-name since the leaf-node-module
-# will be included in log headers by default.
-from snakelib import mod
-assert mod.log.name == proj_name

-from snakelib import subpkg
-assert (
-subpkg.log.name
-==
-subpkg.__package__
-==
-f'{proj_name}.subpkg'
-)

-from snakelib.subpkg import submod
-assert (
-submod.log.name
-==
-submod.__package__
-==
-f'{proj_name}.subpkg'
-)

-sub_logs = pkg_root_log.logger.getChildren()
-assert len(sub_logs) == 1  # only one nested sub-pkg module
-assert submod.log.logger in sub_logs


-# TODO, moar tests against existing feats:
-# ------ - ------
-# - [ ] color settings?
-# - [ ] header contents like,
-# - actor + thread + task names from various conc-primitives,
-# - [ ] `StackLevelAdapter` extensions,
-# - our custom levels/methods: `transport|runtime|cance|pdb|devx`
-# - [ ] custom-headers support?
-#

-# TODO, test driven dev of new-ideas/long-wanted feats,
-# ------ - ------
-# - [ ] https://github.com/goodboy/tractor/issues/244
-# - [ ] @catern mentioned using a sync / deterministic sys
-# and in particular `svlogd`?
-# |_ https://smarden.org/runit/svlogd.8

-# - [ ] using adapter vs. filters?
-# - https://stackoverflow.com/questions/60691759/add-information-to-every-log-message-in-python-logging/61830838#61830838

-# - [ ] `.at_least_level()` optimization which short circuits wtv
-# `logging` is doing behind the scenes when the level filters
-# the emission..?

-# - [ ] use of `.log.get_console_log()` in subactors and the
-# subtleties of ensuring it actually emits from a subproc.

-# - [ ] this idea of activating per-subsys emissions with some
-# kind of `.name` filter passed to the runtime or maybe configured
-# via the root `StackLevelAdapter`?

-# - [ ] use of `logging.dict.dictConfig()` to simplify the impl
-# of any of ^^ ??
-# - https://stackoverflow.com/questions/7507825/where-is-a-complete-example-of-logging-config-dictconfig
-# - https://docs.python.org/3/library/logging.config.html#configuration-dictionary-schema
-# - https://docs.python.org/3/library/logging.config.html#logging.config.dictConfig
@@ -1,13 +1,8 @@
"""
Multiple python programs invoking the runtime.
"""
-from __future__ import annotations
import platform
-import subprocess
import time
-from typing import (
-TYPE_CHECKING,
-)

import pytest
import trio
@@ -15,32 +10,14 @@ import tractor
from tractor._testing import (
tractor_test,
)
-from tractor import (
-current_actor,
-Actor,
-Context,
-Portal,
-)
-from tractor.runtime import _state
from .conftest import (
sig_prog,
_INT_SIGNAL,
_INT_RETURN_CODE,
)

-if TYPE_CHECKING:
-from tractor.msg import Aid
-from tractor.discovery._addr import (
-UnwrappedAddress,
-)

+def test_abort_on_sigint(daemon):
-_non_linux: bool = platform.system() != 'Linux'


-def test_abort_on_sigint(
-daemon: subprocess.Popen,
-):
assert daemon.returncode is None
time.sleep(0.1)
sig_prog(daemon, _INT_SIGNAL)
@@ -53,131 +30,39 @@ def test_abort_on_sigint(


@tractor_test
-async def test_cancel_remote_registrar(
+async def test_cancel_remote_arbiter(daemon, reg_addr):
-daemon: subprocess.Popen,
+assert not tractor.current_actor().is_arbiter
-reg_addr: UnwrappedAddress,
-):
-assert not current_actor().is_registrar
async with tractor.get_registry(reg_addr) as portal:
await portal.cancel_actor()

time.sleep(0.1)
-# the registrar channel server is cancelled but not its main task
+# the arbiter channel server is cancelled but not its main task
assert daemon.returncode is None

-# no registrar socket should exist
+# no arbiter socket should exist
with pytest.raises(OSError):
async with tractor.get_registry(reg_addr) as portal:
pass


-def test_register_duplicate_name(
+def test_register_duplicate_name(daemon, reg_addr):
-daemon: subprocess.Popen,
-reg_addr: UnwrappedAddress,
-):
async def main():

async with tractor.open_nursery(
registry_addrs=[reg_addr],
-) as an:
+) as n:

-assert not current_actor().is_registrar
+assert not tractor.current_actor().is_arbiter

-p1 = await an.start_actor('doggy')
+p1 = await n.start_actor('doggy')
-p2 = await an.start_actor('doggy')
+p2 = await n.start_actor('doggy')

async with tractor.wait_for_actor('doggy') as portal:
assert portal.channel.uid in (p2.channel.uid, p1.channel.uid)

-await an.cancel()
+await n.cancel()

-# XXX, run manually since we want to start this root **after**
+# run it manually since we want to start **after**
-# the other "daemon" program with it's own root.
+# the other "daemon" program
-trio.run(main)


-@tractor.context
-async def get_root_portal(
-ctx: Context,
-):
-'''
-Connect back to the root actor manually (using `._discovery` API)
-and ensure it's contact info is the same as our immediate parent.

-'''
-sub: Actor = current_actor()
-rtvs: dict = _state._runtime_vars
-raddrs: list[UnwrappedAddress] = rtvs['_root_addrs']

-# await tractor.pause()
-# XXX, in case the sub->root discovery breaks you might need
-# this (i know i did Xp)!!
-# from tractor.devx import mk_pdb
-# mk_pdb().set_trace()

-assert (
-len(raddrs) == 1
-and
-list(sub._parent_chan.raddr.unwrap()) in raddrs
-)

-# connect back to our immediate parent which should also
-# be the actor-tree's root.
-from tractor.discovery._api import get_root
-ptl: Portal
-async with get_root() as ptl:
-root_aid: Aid = ptl.chan.aid
-parent_ptl: Portal = current_actor().get_parent()
-assert (
-root_aid.name == 'root'
-and
-parent_ptl.chan.aid == root_aid
-)
-await ctx.started()


-def test_non_registrar_spawns_child(
-daemon: subprocess.Popen,
-reg_addr: UnwrappedAddress,
-loglevel: str,
-debug_mode: bool,
-ci_env: bool,
-):
-'''
-Ensure a non-regristar (serving) root actor can spawn a sub and
-that sub can connect back (manually) to it's rent that is the
-root without issue.

-More or less this audits the global contact info in
-`._state._runtime_vars`.

-'''
-async def main():

-# XXX, since apparently on macos in GH's CI it can be a race
-# with the `daemon` registrar on grabbing the socket-addr..
-if ci_env and _non_linux:
-await trio.sleep(.5)

-async with tractor.open_nursery(
-registry_addrs=[reg_addr],
-loglevel=loglevel,
-debug_mode=debug_mode,
-) as an:

-actor: Actor = tractor.current_actor()
-assert not actor.is_registrar
-sub_ptl: Portal = await an.start_actor(
-name='sub',
-enable_modules=[__name__],
-)

-async with sub_ptl.open_context(
-get_root_portal,
-) as (ctx, _):
-print('Waiting for `sub` to connect back to us..')

-await an.cancel()

-# XXX, run manually since we want to start this root **after**
-# the other "daemon" program with it's own root.
trio.run(main)

@@ -61,7 +61,7 @@ async def maybe_expect_raises(
Async wrapper for ensuring errors propagate from the inner scope.

'''
-if tractor.debug_mode():
+if tractor._state.debug_mode():
timeout += 999

with trio.fail_after(timeout):
@@ -1,333 +0,0 @@
-'''
-Verify that externally registered remote actor error
-types are correctly relayed, boxed, and re-raised across
-IPC actor hops via `reg_err_types()`.

-Also ensure that when custom error types are NOT registered
-the framework indicates the lookup failure to the user.

-'''
-import pytest
-import trio
-import tractor
-from tractor import (
-Context,
-Portal,
-RemoteActorError,
-)
-from tractor._exceptions import (
-get_err_type,
-reg_err_types,
-)


-# -- custom app-level errors for testing --
-class CustomAppError(Exception):
-'''
-A hypothetical user-app error that should be
-boxed+relayed by `tractor` IPC when registered.

-'''


-class AnotherAppError(Exception):
-'''
-A second custom error for multi-type registration.

-'''


-class UnregisteredAppError(Exception):
-'''
-A custom error that is intentionally NEVER
-registered via `reg_err_types()` so we can
-verify the framework's failure indication.

-'''


-# -- remote-task endpoints --
-@tractor.context
-async def raise_custom_err(
-ctx: Context,
-) -> None:
-'''
-Remote ep that raises a `CustomAppError`
-after sync-ing with the caller.

-'''
-await ctx.started()
-raise CustomAppError(
-'the app exploded remotely'
-)


-@tractor.context
-async def raise_another_err(
-ctx: Context,
-) -> None:
-'''
-Remote ep that raises `AnotherAppError`.

-'''
-await ctx.started()
-raise AnotherAppError(
-'another app-level kaboom'
-)


-@tractor.context
-async def raise_unreg_err(
-ctx: Context,
-) -> None:
-'''
-Remote ep that raises an `UnregisteredAppError`
-which has NOT been `reg_err_types()`-registered.

-'''
-await ctx.started()
-raise UnregisteredAppError(
-'this error type is unknown to tractor'
-)


-# -- unit tests for the type-registry plumbing --

-class TestRegErrTypesPlumbing:
-'''
-Low-level checks on `reg_err_types()` and
-`get_err_type()` without requiring IPC.

-'''

-def test_unregistered_type_returns_none(self):
-'''
-An unregistered custom error name should yield
-`None` from `get_err_type()`.

-'''
-result = get_err_type('CustomAppError')
-assert result is None

-def test_register_and_lookup(self):
-'''
-After `reg_err_types()`, the custom type should
-be discoverable via `get_err_type()`.

-'''
-reg_err_types([CustomAppError])
-result = get_err_type('CustomAppError')
-assert result is CustomAppError

-def test_register_multiple_types(self):
-'''
-Registering a list of types should make each
-one individually resolvable.

-'''
-reg_err_types([
-CustomAppError,
-AnotherAppError,
-])
-assert (
-get_err_type('CustomAppError')
-is CustomAppError
-)
-assert (
-get_err_type('AnotherAppError')
-is AnotherAppError
-)

-def test_builtin_types_always_resolve(self):
-'''
-Builtin error types like `RuntimeError` and
-`ValueError` should always be found without
-any prior registration.

-'''
-assert (
-get_err_type('RuntimeError')
-is RuntimeError
-)
-assert (
-get_err_type('ValueError')
-is ValueError
-)

-def test_tractor_native_types_resolve(self):
-'''
-`tractor`-internal exc types (e.g.
-`ContextCancelled`) should always resolve.

-'''
-assert (
-get_err_type('ContextCancelled')
-is tractor.ContextCancelled
-)

-def test_boxed_type_str_without_ipc_msg(self):
-'''
-When a `RemoteActorError` is constructed
-without an IPC msg (and no resolvable type),
-`.boxed_type_str` should return `'<unknown>'`.

-'''
-rae = RemoteActorError('test')
-assert rae.boxed_type_str == '<unknown>'


-# -- IPC-level integration tests --

-def test_registered_custom_err_relayed(
-debug_mode: bool,
-tpt_proto: str,
-):
-'''
-When a custom error type is registered via
-`reg_err_types()` on BOTH sides of an IPC dialog,
-the parent should receive a `RemoteActorError`
-whose `.boxed_type` matches the original custom
-error type.

-'''
-reg_err_types([CustomAppError])

-async def main():
-async with tractor.open_nursery(
-debug_mode=debug_mode,
-enable_transports=[tpt_proto],
-) as an:
-ptl: Portal = await an.start_actor(
-'custom-err-raiser',
-enable_modules=[__name__],
-)
-async with ptl.open_context(
-raise_custom_err,
-) as (ctx, sent):
-assert not sent
-try:
-await ctx.wait_for_result()
-except RemoteActorError as rae:
-assert rae.boxed_type is CustomAppError
-assert rae.src_type is CustomAppError
-assert 'the app exploded remotely' in str(
-rae.tb_str
-)
-raise

-with pytest.raises(RemoteActorError) as excinfo:
-trio.run(main)

-rae = excinfo.value
-assert rae.boxed_type is CustomAppError


-def test_registered_another_err_relayed(
-debug_mode: bool,
-tpt_proto: str,
-):
-'''
-Same as above but for a different custom error
-type to verify multi-type registration works
-end-to-end over IPC.

-'''
-reg_err_types([AnotherAppError])

-async def main():
-async with tractor.open_nursery(
-debug_mode=debug_mode,
-enable_transports=[tpt_proto],
-) as an:
-ptl: Portal = await an.start_actor(
-'another-err-raiser',
-enable_modules=[__name__],
-)
-async with ptl.open_context(
-raise_another_err,
-) as (ctx, sent):
-assert not sent
-try:
-await ctx.wait_for_result()
-except RemoteActorError as rae:
-assert (
-rae.boxed_type
-is AnotherAppError
-)
-raise

-await an.cancel()

-with pytest.raises(RemoteActorError) as excinfo:
-trio.run(main)

-rae = excinfo.value
-assert rae.boxed_type is AnotherAppError


-def test_unregistered_err_still_relayed(
-debug_mode: bool,
-tpt_proto: str,
-):
-'''
-Verify that even when a custom error type is NOT registered via
-`reg_err_types()`, the remote error is still relayed as
-a `RemoteActorError` with all string-level info preserved
-(traceback, type name, source actor uid).

-The `.boxed_type` will be `None` (type obj can't be resolved) but
-`.boxed_type_str` and `.src_type_str` still report the original
-type name from the IPC msg.

-This documents the expected limitation: without `reg_err_types()`
-the `.boxed_type` property can NOT resolve to the original Python
-type.

-'''
-# NOTE: intentionally do NOT call
-# `reg_err_types([UnregisteredAppError])`

-async def main():
-async with tractor.open_nursery(
-debug_mode=debug_mode,
-enable_transports=[tpt_proto],
-) as an:
-ptl: Portal = await an.start_actor(
-'unreg-err-raiser',
-enable_modules=[__name__],
-)
-async with ptl.open_context(
-raise_unreg_err,
-) as (ctx, sent):
-assert not sent
-await ctx.wait_for_result()

-await an.cancel()

-with pytest.raises(RemoteActorError) as excinfo:
-trio.run(main)

-rae = excinfo.value

-# the error IS relayed even without
-# registration; type obj is unresolvable but
-# all string-level info is preserved.
-assert rae.boxed_type is None  # NOT `UnregisteredAppError`
-assert rae.src_type is None

-# string names survive the IPC round-trip
-# via the `Error` msg fields.
-assert (
-rae.src_type_str
-==
-'UnregisteredAppError'
-)
-assert (
-rae.boxed_type_str
-==
-'UnregisteredAppError'
-)

-# original traceback content is preserved
-assert 'this error type is unknown' in rae.tb_str
-assert 'UnregisteredAppError' in rae.tb_str
|
|
@ -12,14 +12,14 @@ import trio
|
||||||
import tractor
|
import tractor
|
||||||
from tractor.trionics import (
|
from tractor.trionics import (
|
||||||
maybe_open_context,
|
maybe_open_context,
|
||||||
collapse_eg,
|
|
||||||
)
|
)
|
||||||
from tractor.log import (
|
from tractor.log import (
|
||||||
get_console_log,
|
get_console_log,
|
||||||
get_logger,
|
get_logger,
|
||||||
)
|
)
|
||||||
|
log = get_logger(__name__)
|
||||||
|
|
||||||
|
|
||||||
log = get_logger()
|
|
||||||
|
|
||||||
_resource: int = 0
|
_resource: int = 0
|
||||||
|
|
||||||
|
|
@ -213,12 +213,9 @@ def test_open_local_sub_to_stream(
|
||||||
N local tasks using `trionics.maybe_open_context()`.
|
N local tasks using `trionics.maybe_open_context()`.
|
||||||
|
|
||||||
'''
|
'''
|
||||||
from .conftest import cpu_scaling_factor
|
timeout: float = 3.6
|
||||||
timeout: float = (
|
if platform.system() == "Windows":
|
||||||
4
|
timeout: float = 10
|
||||||
if not platform.system() == "Windows"
|
|
||||||
else 10
|
|
||||||
) * cpu_scaling_factor()
|
|
||||||
|
|
||||||
if debug_mode:
|
if debug_mode:
|
||||||
timeout = 999
|
timeout = 999
|
||||||
|
|
@ -322,7 +319,7 @@ def test_open_local_sub_to_stream(
|
||||||
|
|
||||||
|
|
||||||
@acm
|
@acm
|
||||||
async def maybe_cancel_outer_cs(
|
async def cancel_outer_cs(
|
||||||
cs: trio.CancelScope|None = None,
|
cs: trio.CancelScope|None = None,
|
||||||
delay: float = 0,
|
delay: float = 0,
|
||||||
):
|
):
|
||||||
|
|
@ -336,31 +333,12 @@ async def maybe_cancel_outer_cs(
|
||||||
if cs:
|
if cs:
|
||||||
log.info('task calling cs.cancel()')
|
log.info('task calling cs.cancel()')
|
||||||
cs.cancel()
|
cs.cancel()
|
||||||
|
trio.lowlevel.checkpoint()
|
||||||
yield
|
yield
|
||||||
|
|
||||||
if cs:
|
|
||||||
await trio.sleep_forever()
|
await trio.sleep_forever()
|
||||||
|
|
||||||
# XXX, if not cancelled we'll leak this inf-blocking
|
|
||||||
# subtask to the actor's service tn..
|
|
||||||
else:
|
|
||||||
await trio.lowlevel.checkpoint()
|
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.parametrize(
|
|
||||||
'delay',
|
|
||||||
[0.05, 0.5, 1],
|
|
||||||
ids="pre_sleep_delay={}".format,
|
|
||||||
)
|
|
||||||
@pytest.mark.parametrize(
|
|
||||||
'cancel_by_cs',
|
|
||||||
[True, False],
|
|
||||||
ids="cancel_by_cs={}".format,
|
|
||||||
)
|
|
||||||
def test_lock_not_corrupted_on_fast_cancel(
|
def test_lock_not_corrupted_on_fast_cancel(
|
||||||
delay: float,
|
|
||||||
cancel_by_cs: bool,
|
|
||||||
debug_mode: bool,
|
debug_mode: bool,
|
||||||
loglevel: str,
|
loglevel: str,
|
||||||
):
|
):
|
||||||
|
|
@ -377,14 +355,17 @@ def test_lock_not_corrupted_on_fast_cancel(
|
||||||
due to it having erronously exited without calling
|
due to it having erronously exited without calling
|
||||||
`lock.release()`.
|
`lock.release()`.
|
||||||
|
|
||||||
|
|
||||||
'''
|
'''
|
||||||
|
delay: float = 1.
|
||||||
|
|
||||||
async def use_moc(
|
async def use_moc(
|
||||||
|
cs: trio.CancelScope|None,
|
||||||
delay: float,
|
delay: float,
|
||||||
cs: trio.CancelScope|None = None,
|
|
||||||
):
|
):
|
||||||
log.info('task entering moc')
|
log.info('task entering moc')
|
||||||
async with maybe_open_context(
|
async with maybe_open_context(
|
||||||
maybe_cancel_outer_cs,
|
cancel_outer_cs,
|
||||||
kwargs={
|
kwargs={
|
||||||
'cs': cs,
|
'cs': cs,
|
||||||
'delay': delay,
|
'delay': delay,
|
||||||
|
|
@ -395,14 +376,8 @@ def test_lock_not_corrupted_on_fast_cancel(
|
||||||
else:
|
else:
|
||||||
log.info('1st task entered')
|
log.info('1st task entered')
|
||||||
|
|
||||||
if cs:
|
|
||||||
await trio.sleep_forever()
|
await trio.sleep_forever()
|
||||||
|
|
||||||
else:
|
|
||||||
await trio.sleep(delay)
|
|
||||||
|
|
||||||
# ^END, exit shared ctx.
|
|
||||||
|
|
||||||
async def main():
|
async def main():
|
||||||
with trio.fail_after(delay + 2):
|
with trio.fail_after(delay + 2):
|
||||||
async with (
|
async with (
|
||||||
|
|
@@ -410,7 +385,6 @@ def test_lock_not_corrupted_on_fast_cancel(
                     debug_mode=debug_mode,
                     loglevel=loglevel,
                 ),
-                # ?TODO, pass this as the parent tn?
                 trio.open_nursery() as tn,
             ):
                 get_console_log('info')
@@ -418,206 +392,15 @@ def test_lock_not_corrupted_on_fast_cancel(
                 cs = tn.cancel_scope
                 tn.start_soon(
                     use_moc,
+                    cs,
                     delay,
-                    cs if cancel_by_cs else None,
                     name='child',
                 )
                 with trio.CancelScope() as rent_cs:
                     await use_moc(
+                        cs=rent_cs,
                         delay=delay,
-                        cs=rent_cs if cancel_by_cs else None,
                     )

-    trio.run(main)

@acm
|
|
||||||
async def acm_with_resource(resource_id: str):
|
|
||||||
'''
|
|
||||||
Yield `resource_id` as the cached value.
|
|
||||||
|
|
||||||
Used to verify per-`ctx_key` isolation when the same
|
|
||||||
`acm_func` is called with different kwargs.
|
|
||||||
|
|
||||||
'''
|
|
||||||
yield resource_id
|
|
||||||
|
|
||||||
|
|
||||||
def test_per_ctx_key_resource_lifecycle(
|
|
||||||
debug_mode: bool,
|
|
||||||
loglevel: str,
|
|
||||||
):
|
|
||||||
'''
|
|
||||||
Verify that `maybe_open_context()` correctly isolates resource
|
|
||||||
lifecycle **per `ctx_key`** when the same `acm_func` is called
|
|
||||||
with different kwargs.
|
|
||||||
|
|
||||||
Previously `_Cache.users` was a single global `int` and
|
|
||||||
`_Cache.locks` was keyed on `fid` (function ID), so calling
|
|
||||||
the same `acm_func` with different kwargs (producing different
|
|
||||||
`ctx_key`s) meant:
|
|
||||||
|
|
||||||
- teardown for one key was skipped bc the *other* key's users
|
|
||||||
kept the global count > 0,
|
|
||||||
- and re-entry could hit the old
|
|
||||||
`assert not resources.get(ctx_key)` crash during the
|
|
||||||
teardown window.
|
|
||||||
|
|
||||||
This was the root cause of a long-standing bug in piker's
|
|
||||||
`brokerd.kraken` backend.
|
|
||||||
|
|
||||||
'''
|
|
||||||
timeout: float = 6
|
|
||||||
if debug_mode:
|
|
||||||
timeout = 999
|
|
||||||
|
|
||||||
async def main():
|
|
||||||
a_ready = trio.Event()
|
|
||||||
a_exit = trio.Event()
|
|
||||||
|
|
||||||
async def hold_resource_a():
|
|
||||||
'''
|
|
||||||
Open resource 'a' and keep it alive until signalled.
|
|
||||||
|
|
||||||
'''
|
|
||||||
async with maybe_open_context(
|
|
||||||
acm_with_resource,
|
|
||||||
kwargs={'resource_id': 'a'},
|
|
||||||
) as (cache_hit, value):
|
|
||||||
assert not cache_hit
|
|
||||||
assert value == 'a'
|
|
||||||
log.info("resource 'a' entered (holding)")
|
|
||||||
a_ready.set()
|
|
||||||
await a_exit.wait()
|
|
||||||
log.info("resource 'a' exiting")
|
|
||||||
|
|
||||||
with trio.fail_after(timeout):
|
|
||||||
async with (
|
|
||||||
tractor.open_root_actor(
|
|
||||||
debug_mode=debug_mode,
|
|
||||||
loglevel=loglevel,
|
|
||||||
),
|
|
||||||
trio.open_nursery() as tn,
|
|
||||||
):
|
|
||||||
# Phase 1: bg task holds resource 'a' open.
|
|
||||||
tn.start_soon(hold_resource_a)
|
|
||||||
await a_ready.wait()
|
|
||||||
|
|
||||||
# Phase 2: open resource 'b' (different kwargs,
|
|
||||||
# same acm_func) then exit it while 'a' is still
|
|
||||||
# alive.
|
|
||||||
async with maybe_open_context(
|
|
||||||
acm_with_resource,
|
|
||||||
kwargs={'resource_id': 'b'},
|
|
||||||
) as (cache_hit, value):
|
|
||||||
assert not cache_hit
|
|
||||||
assert value == 'b'
|
|
||||||
log.info("resource 'b' entered")
|
|
||||||
|
|
||||||
log.info("resource 'b' exited, waiting for teardown")
|
|
||||||
await trio.lowlevel.checkpoint()
|
|
||||||
|
|
||||||
# Phase 3: re-open 'b'; must be a fresh cache MISS
|
|
||||||
# proving 'b' was torn down independently of 'a'.
|
|
||||||
#
|
|
||||||
# With the old global `_Cache.users` counter this
|
|
||||||
# would be a stale cache HIT (leaked resource) or
|
|
||||||
# trigger `assert not resources.get(ctx_key)`.
|
|
||||||
async with maybe_open_context(
|
|
||||||
acm_with_resource,
|
|
||||||
kwargs={'resource_id': 'b'},
|
|
||||||
) as (cache_hit, value):
|
|
||||||
assert not cache_hit, (
|
|
||||||
"resource 'b' was NOT torn down despite "
|
|
||||||
"having zero users! (global user count bug)"
|
|
||||||
)
|
|
||||||
assert value == 'b'
|
|
||||||
log.info(
|
|
||||||
"resource 'b' re-entered "
|
|
||||||
"(cache miss, correct)"
|
|
||||||
)
|
|
||||||
|
|
||||||
# Phase 4: let 'a' exit, clean shutdown.
|
|
||||||
a_exit.set()
|
|
||||||
|
|
||||||
trio.run(main)
|
|
||||||
|
|
||||||
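Note: the per-`ctx_key` behaviour exercised by the test above boils down to one rule: calling the *same* `acm_func` through `maybe_open_context()` with *different* kwargs must yield independently cached, and independently torn-down, values. A minimal sketch of that usage pattern follows; it assumes `maybe_open_context` is importable from `tractor.trionics` and that a root actor is opened first (as the tests above do), and the `open_client()`/`endpoint` names are purely illustrative.

```python
from contextlib import asynccontextmanager as acm

import trio
import tractor
from tractor.trionics import maybe_open_context


@acm
async def open_client(endpoint: str):
    # stand-in for an expensive-to-acquire resource keyed by `endpoint`,
    # e.g. a per-broker client connection.
    yield f'client-for-{endpoint}'


async def main():
    async with tractor.open_root_actor():
        # first entry for the 'kraken' key is a cache MISS and actually
        # enters `open_client()`.
        async with maybe_open_context(
            open_client,
            kwargs={'endpoint': 'kraken'},
        ) as (cache_hit, client):
            assert not cache_hit

            # a different kwargs-set produces a different ctx-key, so this
            # is an independent (also cache-MISS) resource whose teardown
            # is tracked separately from the 'kraken' entry.
            async with maybe_open_context(
                open_client,
                kwargs={'endpoint': 'binance'},
            ) as (hit2, other):
                assert not hit2
                assert other != client


trio.run(main)
```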
|
|
||||||
def test_moc_reentry_during_teardown(
|
|
||||||
debug_mode: bool,
|
|
||||||
loglevel: str,
|
|
||||||
):
|
|
||||||
'''
|
|
||||||
Reproduce the piker `open_cached_client('kraken')` race:
|
|
||||||
|
|
||||||
- same `acm_func`, NO kwargs (identical `ctx_key`)
|
|
||||||
- multiple tasks share the cached resource
|
|
||||||
- all users exit -> teardown starts
|
|
||||||
- a NEW task enters during `_Cache.run_ctx.__aexit__`
|
|
||||||
- `values[ctx_key]` is gone (popped in inner finally)
|
|
||||||
but `resources[ctx_key]` still exists (outer finally
|
|
||||||
hasn't run yet bc the acm cleanup has checkpoints)
|
|
||||||
- old code: `assert not resources.get(ctx_key)` FIRES
|
|
||||||
|
|
||||||
This models the real-world scenario where `brokerd.kraken`
|
|
||||||
tasks concurrently call `open_cached_client('kraken')`
|
|
||||||
(same `acm_func`, empty kwargs, shared `ctx_key`) and
|
|
||||||
the teardown/re-entry race triggers intermittently.
|
|
||||||
|
|
||||||
'''
|
|
||||||
async def main():
|
|
||||||
in_aexit = trio.Event()
|
|
||||||
|
|
||||||
@acm
|
|
||||||
async def cached_client():
|
|
||||||
'''
|
|
||||||
Simulates `kraken.api.get_client()`:
|
|
||||||
- no params (all callers share one `ctx_key`)
|
|
||||||
- slow-ish cleanup to widen the race window
|
|
||||||
between `values.pop()` and `resources.pop()`
|
|
||||||
inside `_Cache.run_ctx`.
|
|
||||||
|
|
||||||
'''
|
|
||||||
yield 'the-client'
|
|
||||||
# Signal that we're in __aexit__ — at this
|
|
||||||
# point `values` has already been popped by
|
|
||||||
# `run_ctx`'s inner finally, but `resources`
|
|
||||||
# is still alive (outer finally hasn't run).
|
|
||||||
in_aexit.set()
|
|
||||||
await trio.sleep(10)
|
|
||||||
|
|
||||||
first_done = trio.Event()
|
|
||||||
|
|
||||||
async def use_and_exit():
|
|
||||||
async with maybe_open_context(
|
|
||||||
cached_client,
|
|
||||||
) as (cache_hit, value):
|
|
||||||
assert value == 'the-client'
|
|
||||||
first_done.set()
|
|
||||||
|
|
||||||
async def reenter_during_teardown():
|
|
||||||
'''
|
|
||||||
Wait for the acm's `__aexit__` to start (meaning
|
|
||||||
`values` is popped but `resources` still exists),
|
|
||||||
then re-enter — triggering the assert.
|
|
||||||
|
|
||||||
'''
|
|
||||||
await in_aexit.wait()
|
|
||||||
async with maybe_open_context(
|
|
||||||
cached_client,
|
|
||||||
) as (cache_hit, value):
|
|
||||||
assert value == 'the-client'
|
|
||||||
|
|
||||||
with trio.fail_after(5):
|
|
||||||
async with (
|
|
||||||
tractor.open_root_actor(
|
|
||||||
debug_mode=debug_mode,
|
|
||||||
loglevel=loglevel,
|
|
||||||
),
|
|
||||||
collapse_eg(),
|
|
||||||
trio.open_nursery() as tn,
|
|
||||||
):
|
|
||||||
tn.start_soon(use_and_exit)
|
|
||||||
tn.start_soon(reenter_during_teardown)
|
|
||||||
|
|
||||||
trio.run(main)
|
trio.run(main)
|
||||||
|
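The race documented in the two test docstrings above stems from the cache's user count being kept too coarsely (one global counter, locks keyed only per function id). The toy below is *not* tractor's actual `_Cache` implementation, just the bookkeeping idea the new tests assert: user counts and cached values keyed per call-key, so one key's teardown can neither be skipped nor corrupted by another key's users.

```python
from collections import defaultdict


class KeyedCache:
    '''
    Toy model: every distinct (func, kwargs) "ctx key" gets its own
    user count and value slot.

    '''
    def __init__(self):
        self.users: defaultdict[tuple, int] = defaultdict(int)
        self.values: dict[tuple, object] = {}

    def enter(self, key: tuple, factory):
        self.users[key] += 1
        if key not in self.values:
            self.values[key] = factory()
        return self.values[key]

    def exit(self, key: tuple):
        self.users[key] -= 1
        if self.users[key] == 0:
            # only this key's resource is torn down; other keys'
            # user counts are untouched.
            self.values.pop(key, None)


cache = KeyedCache()
a = cache.enter(('client', 'a'), lambda: object())
b = cache.enter(('client', 'b'), lambda: object())
cache.exit(('client', 'b'))
assert ('client', 'b') not in cache.values   # 'b' torn down independently
assert ('client', 'a') in cache.values       # 'a' still cached
```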
|
|
||||||
|
|
@@ -49,7 +49,7 @@ def test_infected_root_actor(
         ),
         to_asyncio.open_channel_from(
             aio_echo_server,
-        ) as (chan, first),
+        ) as (first, chan),
     ):
         assert first == 'start'

@@ -91,12 +91,13 @@ def test_infected_root_actor(
 async def sync_and_err(
     # just signature placeholders for compat with
     # ``to_asyncio.open_channel_from()``
-    chan: tractor.to_asyncio.LinkedTaskChannel,
+    to_trio: trio.MemorySendChannel,
+    from_trio: asyncio.Queue,
     ev: asyncio.Event,

 ):
-    if chan:
-        chan.started_nowait('start')
+    if to_trio:
+        to_trio.send_nowait('start')

     await ev.wait()
     raise RuntimeError('asyncio-side')

@@ -173,7 +174,7 @@ def test_trio_prestarted_task_bubbles(
                 sync_and_err,
                 ev=aio_ev,
             )
-        ) as (chan, first),
+        ) as (first, chan),
     ):

         for i in range(5):

@@ -94,15 +94,15 @@ def test_runtime_vars_unset(
     after the root actor-runtime exits!

     '''
-    assert not tractor.runtime._state._runtime_vars['_debug_mode']
+    assert not tractor._state._runtime_vars['_debug_mode']
     async def main():
-        assert not tractor.runtime._state._runtime_vars['_debug_mode']
+        assert not tractor._state._runtime_vars['_debug_mode']
         async with tractor.open_nursery(
             debug_mode=True,
         ):
-            assert tractor.runtime._state._runtime_vars['_debug_mode']
+            assert tractor._state._runtime_vars['_debug_mode']

         # after runtime closure, should be reverted!
-        assert not tractor.runtime._state._runtime_vars['_debug_mode']
+        assert not tractor._state._runtime_vars['_debug_mode']

     trio.run(main)

@@ -110,7 +110,7 @@ def test_rpc_errors(
     ) as n:

         actor = tractor.current_actor()
-        assert actor.is_registrar
+        assert actor.is_arbiter
         await n.run_in_actor(
             sleep_back_actor,
             actor_name=subactor_requests_to,

@ -22,10 +22,6 @@ def unlink_file():
|
||||||
async def crash_and_clean_tmpdir(
|
async def crash_and_clean_tmpdir(
|
||||||
tmp_file_path: str,
|
tmp_file_path: str,
|
||||||
error: bool = True,
|
error: bool = True,
|
||||||
rent_cancel: bool = True,
|
|
||||||
|
|
||||||
# XXX unused, but do we really need to test these cases?
|
|
||||||
self_cancel: bool = False,
|
|
||||||
):
|
):
|
||||||
global _file_path
|
global _file_path
|
||||||
_file_path = tmp_file_path
|
_file_path = tmp_file_path
|
||||||
|
|
@ -36,75 +32,43 @@ async def crash_and_clean_tmpdir(
|
||||||
assert os.path.isfile(tmp_file_path)
|
assert os.path.isfile(tmp_file_path)
|
||||||
await trio.sleep(0.1)
|
await trio.sleep(0.1)
|
||||||
if error:
|
if error:
|
||||||
print('erroring in subactor!')
|
|
||||||
assert 0
|
assert 0
|
||||||
|
else:
|
||||||
elif self_cancel:
|
|
||||||
print('SELF-cancelling subactor!')
|
|
||||||
actor.cancel_soon()
|
actor.cancel_soon()
|
||||||
|
|
||||||
elif rent_cancel:
|
|
||||||
await trio.sleep_forever()
|
|
||||||
|
|
||||||
print('subactor exiting task!')
|
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.parametrize(
|
@pytest.mark.parametrize(
|
||||||
'error_in_child',
|
'error_in_child',
|
||||||
[True, False],
|
[True, False],
|
||||||
ids='error_in_child={}'.format,
|
|
||||||
)
|
)
|
||||||
@tractor_test
|
@tractor_test
|
||||||
async def test_lifetime_stack_wipes_tmpfile(
|
async def test_lifetime_stack_wipes_tmpfile(
|
||||||
tmp_path,
|
tmp_path,
|
||||||
error_in_child: bool,
|
error_in_child: bool,
|
||||||
loglevel: str,
|
|
||||||
# log: tractor.log.StackLevelAdapter,
|
|
||||||
# ^TODO, once landed via macos support!
|
|
||||||
):
|
):
|
||||||
child_tmp_file = tmp_path / "child.txt"
|
child_tmp_file = tmp_path / "child.txt"
|
||||||
child_tmp_file.touch()
|
child_tmp_file.touch()
|
||||||
assert child_tmp_file.exists()
|
assert child_tmp_file.exists()
|
||||||
path = str(child_tmp_file)
|
path = str(child_tmp_file)
|
||||||
|
|
||||||
# NOTE, this is expected to cancel the sub
|
|
||||||
# in the `error_in_child=False` case!
|
|
||||||
timeout: float = (
|
|
||||||
1.6 if error_in_child
|
|
||||||
else 1
|
|
||||||
)
|
|
||||||
try:
|
try:
|
||||||
with trio.move_on_after(timeout) as cs:
|
with trio.move_on_after(0.5):
|
||||||
async with tractor.open_nursery(
|
async with tractor.open_nursery() as n:
|
||||||
loglevel=loglevel,
|
await ( # inlined portal
|
||||||
) as an:
|
await n.run_in_actor(
|
||||||
await ( # inlined `tractor.Portal`
|
|
||||||
await an.run_in_actor(
|
|
||||||
crash_and_clean_tmpdir,
|
crash_and_clean_tmpdir,
|
||||||
tmp_file_path=path,
|
tmp_file_path=path,
|
||||||
error=error_in_child,
|
error=error_in_child,
|
||||||
)
|
)
|
||||||
).result()
|
).result()
|
||||||
|
|
||||||
except (
|
except (
|
||||||
tractor.RemoteActorError,
|
tractor.RemoteActorError,
|
||||||
|
# tractor.BaseExceptionGroup,
|
||||||
BaseExceptionGroup,
|
BaseExceptionGroup,
|
||||||
) as _exc:
|
):
|
||||||
exc = _exc
|
pass
|
||||||
from tractor.log import get_console_log
|
|
||||||
log = get_console_log(
|
|
||||||
level=loglevel,
|
|
||||||
name=__name__,
|
|
||||||
)
|
|
||||||
log.exception(
|
|
||||||
f'Subactor failed as expected with {type(exc)!r}\n'
|
|
||||||
)
|
|
||||||
|
|
||||||
# tmp file should have been wiped by
|
# tmp file should have been wiped by
|
||||||
# teardown stack.
|
# teardown stack.
|
||||||
assert not child_tmp_file.exists()
|
assert not child_tmp_file.exists()
|
||||||
|
|
||||||
if error_in_child:
|
|
||||||
assert not cs.cancel_called
|
|
||||||
else:
|
|
||||||
# expect timeout in some cases?
|
|
||||||
assert cs.cancel_called
|
|
||||||
|
|
|
||||||
|
|
@ -2,7 +2,6 @@
|
||||||
Shared mem primitives and APIs.
|
Shared mem primitives and APIs.
|
||||||
|
|
||||||
"""
|
"""
|
||||||
import platform
|
|
||||||
import uuid
|
import uuid
|
||||||
|
|
||||||
# import numpy
|
# import numpy
|
||||||
|
|
@ -54,18 +53,7 @@ def test_child_attaches_alot():
|
||||||
shm_key=shml.key,
|
shm_key=shml.key,
|
||||||
) as (ctx, start_val),
|
) as (ctx, start_val),
|
||||||
):
|
):
|
||||||
assert (_key := shml.key) == start_val
|
assert start_val == key
|
||||||
|
|
||||||
if platform.system() != 'Darwin':
|
|
||||||
# XXX, macOS has a char limit..
|
|
||||||
# see `ipc._shm._shorten_key_for_macos`
|
|
||||||
assert (
|
|
||||||
start_val
|
|
||||||
==
|
|
||||||
key
|
|
||||||
==
|
|
||||||
_key
|
|
||||||
)
|
|
||||||
await ctx.result()
|
await ctx.result()
|
||||||
|
|
||||||
await portal.cancel_actor()
|
await portal.cancel_actor()
|
||||||
|
|
|
||||||
|
|
@ -1,12 +1,5 @@
|
||||||
"""
|
"""
|
||||||
Spawning basics including audit of,
|
Spawning basics
|
||||||
|
|
||||||
- subproc bootstrap, such as subactor runtime-data/config inheritance,
|
|
||||||
- basic (and mostly legacy) `ActorNursery` subactor starting and
|
|
||||||
cancel APIs.
|
|
||||||
|
|
||||||
Simple (and generally legacy) examples from the original
|
|
||||||
API design.
|
|
||||||
|
|
||||||
"""
|
"""
|
||||||
from functools import partial
|
from functools import partial
|
||||||
|
|
@ -40,13 +33,13 @@ async def spawn(
|
||||||
assert actor is None # no runtime yet
|
assert actor is None # no runtime yet
|
||||||
async with (
|
async with (
|
||||||
tractor.open_root_actor(
|
tractor.open_root_actor(
|
||||||
registry_addrs=[reg_addr],
|
arbiter_addr=reg_addr,
|
||||||
),
|
),
|
||||||
tractor.open_nursery() as an,
|
tractor.open_nursery() as an,
|
||||||
):
|
):
|
||||||
# now runtime exists
|
# now runtime exists
|
||||||
actor: tractor.Actor = tractor.current_actor()
|
actor: tractor.Actor = tractor.current_actor()
|
||||||
assert actor.is_registrar == should_be_root
|
assert actor.is_arbiter == should_be_root
|
||||||
|
|
||||||
# spawns subproc here
|
# spawns subproc here
|
||||||
portal: tractor.Portal = await an.run_in_actor(
|
portal: tractor.Portal = await an.run_in_actor(
|
||||||
|
|
@ -75,7 +68,7 @@ async def spawn(
|
||||||
assert result == 10
|
assert result == 10
|
||||||
return result
|
return result
|
||||||
else:
|
else:
|
||||||
assert actor.is_registrar == should_be_root
|
assert actor.is_arbiter == should_be_root
|
||||||
return 10
|
return 10
|
||||||
|
|
||||||
|
|
||||||
|
|
@ -105,9 +98,7 @@ async def movie_theatre_question():
|
||||||
|
|
||||||
|
|
||||||
@tractor_test
|
@tractor_test
|
||||||
async def test_movie_theatre_convo(
|
async def test_movie_theatre_convo(start_method):
|
||||||
start_method: str,
|
|
||||||
):
|
|
||||||
'''
|
'''
|
||||||
The main ``tractor`` routine.
|
The main ``tractor`` routine.
|
||||||
|
|
||||||
|
|
@ -160,16 +151,13 @@ async def test_most_beautiful_word(
|
||||||
name='some_linguist',
|
name='some_linguist',
|
||||||
)
|
)
|
||||||
|
|
||||||
res: Any = await portal.wait_for_result()
|
print(await portal.result())
|
||||||
assert res == return_value
|
|
||||||
# The ``async with`` will unblock here since the 'some_linguist'
|
# The ``async with`` will unblock here since the 'some_linguist'
|
||||||
# actor has completed its main task ``cellar_door``.
|
# actor has completed its main task ``cellar_door``.
|
||||||
|
|
||||||
# this should pull the cached final result already captured during
|
# this should pull the cached final result already captured during
|
||||||
# the nursery block exit.
|
# the nursery block exit.
|
||||||
res: Any = await portal.wait_for_result()
|
print(await portal.result())
|
||||||
assert res == return_value
|
|
||||||
print(res)
|
|
||||||
|
|
||||||
|
|
||||||
async def check_loglevel(level):
|
async def check_loglevel(level):
|
||||||
|
|
@ -180,30 +168,22 @@ async def check_loglevel(level):
|
||||||
log.critical('yoyoyo')
|
log.critical('yoyoyo')
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.parametrize(
|
|
||||||
'level', [
|
|
||||||
'debug',
|
|
||||||
'cancel',
|
|
||||||
'critical'
|
|
||||||
],
|
|
||||||
ids='loglevel={}'.format,
|
|
||||||
)
|
|
||||||
def test_loglevel_propagated_to_subactor(
|
def test_loglevel_propagated_to_subactor(
|
||||||
capfd: pytest.CaptureFixture,
|
start_method,
|
||||||
start_method: str,
|
capfd,
|
||||||
reg_addr: tuple,
|
reg_addr,
|
||||||
level: str,
|
|
||||||
):
|
):
|
||||||
if start_method == 'mp_forkserver':
|
if start_method == 'mp_forkserver':
|
||||||
pytest.skip(
|
pytest.skip(
|
||||||
"a bug with `capfd` seems to make forkserver capture not work?"
|
"a bug with `capfd` seems to make forkserver capture not work?")
|
||||||
)
|
|
||||||
|
level = 'critical'
|
||||||
|
|
||||||
async def main():
|
async def main():
|
||||||
async with tractor.open_nursery(
|
async with tractor.open_nursery(
|
||||||
name='registrar',
|
name='arbiter',
|
||||||
start_method=start_method,
|
start_method=start_method,
|
||||||
registry_addrs=[reg_addr],
|
arbiter_addr=reg_addr,
|
||||||
|
|
||||||
) as tn:
|
) as tn:
|
||||||
await tn.run_in_actor(
|
await tn.run_in_actor(
|
||||||
|
|
@ -217,121 +197,3 @@ def test_loglevel_propagated_to_subactor(
|
||||||
# ensure subactor spits log message on stderr
|
# ensure subactor spits log message on stderr
|
||||||
captured = capfd.readouterr()
|
captured = capfd.readouterr()
|
||||||
assert 'yoyoyo' in captured.err
|
assert 'yoyoyo' in captured.err
|
||||||
|
|
||||||
|
|
||||||
async def check_parent_main_inheritance(
|
|
||||||
expect_inherited: bool,
|
|
||||||
) -> bool:
|
|
||||||
'''
|
|
||||||
Assert that the child actor's ``_parent_main_data`` matches the
|
|
||||||
``inherit_parent_main`` flag it was spawned with.
|
|
||||||
|
|
||||||
With the trio spawn backend the parent's ``__main__`` bootstrap
|
|
||||||
data is captured and forwarded to each child so it can replay
|
|
||||||
the parent's ``__main__`` as ``__mp_main__``, mirroring the
|
|
||||||
stdlib ``multiprocessing`` bootstrap:
|
|
||||||
https://docs.python.org/3/library/multiprocessing.html#the-spawn-and-forkserver-start-methods
|
|
||||||
|
|
||||||
When ``inherit_parent_main=False`` the data dict is empty
|
|
||||||
(``{}``) so no fixup ever runs and the child keeps its own
|
|
||||||
``__main__`` untouched.
|
|
||||||
|
|
||||||
NOTE: under `pytest` the parent ``__main__`` is
|
|
||||||
``pytest.__main__`` whose ``_fixup_main_from_name()`` is a no-op
|
|
||||||
(the name ends with ``.__main__``), so we cannot observe
|
|
||||||
a difference in ``sys.modules['__main__'].__name__`` between the
|
|
||||||
two modes. Checking ``_parent_main_data`` directly is the most
|
|
||||||
reliable verification that the flag is threaded through
|
|
||||||
correctly; a ``RemoteActorError[AssertionError]`` propagates on
|
|
||||||
mismatch.
|
|
||||||
|
|
||||||
'''
|
|
||||||
import tractor
|
|
||||||
actor: tractor.Actor = tractor.current_actor()
|
|
||||||
has_data: bool = bool(actor._parent_main_data)
|
|
||||||
assert has_data == expect_inherited, (
|
|
||||||
f'Expected _parent_main_data to be '
|
|
||||||
f'{"non-empty" if expect_inherited else "empty"}, '
|
|
||||||
f'got: {actor._parent_main_data!r}'
|
|
||||||
)
|
|
||||||
return has_data
|
|
||||||
|
|
||||||
|
|
||||||
def test_run_in_actor_can_skip_parent_main_inheritance(
|
|
||||||
start_method: str, # <- only support on `trio` backend rn.
|
|
||||||
):
|
|
||||||
'''
|
|
||||||
Verify ``inherit_parent_main=False`` on ``run_in_actor()``
|
|
||||||
prevents parent ``__main__`` data from reaching the child.
|
|
||||||
|
|
||||||
'''
|
|
||||||
if start_method != 'trio':
|
|
||||||
pytest.skip(
|
|
||||||
'parent main-inheritance opt-out only affects the trio backend'
|
|
||||||
)
|
|
||||||
|
|
||||||
async def main():
|
|
||||||
async with tractor.open_nursery(start_method='trio') as an:
|
|
||||||
|
|
||||||
# Default: child receives parent __main__ bootstrap data
|
|
||||||
replaying = await an.run_in_actor(
|
|
||||||
check_parent_main_inheritance,
|
|
||||||
name='replaying-parent-main',
|
|
||||||
expect_inherited=True,
|
|
||||||
)
|
|
||||||
await replaying.result()
|
|
||||||
|
|
||||||
# Opt-out: child gets no parent __main__ data
|
|
||||||
isolated = await an.run_in_actor(
|
|
||||||
check_parent_main_inheritance,
|
|
||||||
name='isolated-parent-main',
|
|
||||||
inherit_parent_main=False,
|
|
||||||
expect_inherited=False,
|
|
||||||
)
|
|
||||||
await isolated.result()
|
|
||||||
|
|
||||||
trio.run(main)
|
|
||||||
|
|
||||||
|
|
||||||
def test_start_actor_can_skip_parent_main_inheritance(
|
|
||||||
start_method: str, # <- only support on `trio` backend rn.
|
|
||||||
):
|
|
||||||
'''
|
|
||||||
Verify ``inherit_parent_main=False`` on ``start_actor()``
|
|
||||||
prevents parent ``__main__`` data from reaching the child.
|
|
||||||
|
|
||||||
'''
|
|
||||||
if start_method != 'trio':
|
|
||||||
pytest.skip(
|
|
||||||
'parent main-inheritance opt-out only affects the trio backend'
|
|
||||||
)
|
|
||||||
|
|
||||||
async def main():
|
|
||||||
async with tractor.open_nursery(start_method='trio') as an:
|
|
||||||
|
|
||||||
# Default: child receives parent __main__ bootstrap data
|
|
||||||
replaying = await an.start_actor(
|
|
||||||
'replaying-parent-main',
|
|
||||||
enable_modules=[__name__],
|
|
||||||
)
|
|
||||||
result = await replaying.run(
|
|
||||||
check_parent_main_inheritance,
|
|
||||||
expect_inherited=True,
|
|
||||||
)
|
|
||||||
assert result is True
|
|
||||||
await replaying.cancel_actor()
|
|
||||||
|
|
||||||
# Opt-out: child gets no parent __main__ data
|
|
||||||
isolated = await an.start_actor(
|
|
||||||
'isolated-parent-main',
|
|
||||||
enable_modules=[__name__],
|
|
||||||
inherit_parent_main=False,
|
|
||||||
)
|
|
||||||
result = await isolated.run(
|
|
||||||
check_parent_main_inheritance,
|
|
||||||
expect_inherited=False,
|
|
||||||
)
|
|
||||||
assert result is False
|
|
||||||
await isolated.cancel_actor()
|
|
||||||
|
|
||||||
trio.run(main)
|
|
||||||
|
|
|
||||||
|
|
@ -75,7 +75,7 @@ async def open_sequence_streamer(
|
||||||
) -> tractor.MsgStream:
|
) -> tractor.MsgStream:
|
||||||
|
|
||||||
async with tractor.open_nursery(
|
async with tractor.open_nursery(
|
||||||
registry_addrs=[reg_addr],
|
arbiter_addr=reg_addr,
|
||||||
start_method=start_method,
|
start_method=start_method,
|
||||||
) as an:
|
) as an:
|
||||||
|
|
||||||
|
|
|
||||||
|
|
@@ -30,23 +30,21 @@ from ._streaming import (
     MsgStream as MsgStream,
     stream as stream,
 )
-from .discovery._api import (
+from ._discovery import (
     get_registry as get_registry,
     find_actor as find_actor,
     wait_for_actor as wait_for_actor,
     query_actor as query_actor,
 )
-from .runtime._supervise import (
+from ._supervise import (
     open_nursery as open_nursery,
     ActorNursery as ActorNursery,
 )
-from .runtime._state import (
-    RuntimeVars as RuntimeVars,
+from ._state import (
     current_actor as current_actor,
-    current_ipc_ctx as current_ipc_ctx,
-    debug_mode as debug_mode,
-    get_runtime_vars as get_runtime_vars,
     is_root_process as is_root_process,
+    current_ipc_ctx as current_ipc_ctx,
+    debug_mode as debug_mode
 )
 from ._exceptions import (
     ContextCancelled as ContextCancelled,

@@ -67,10 +65,6 @@ from ._root import (
     open_root_actor as open_root_actor,
 )
 from .ipc import Channel as Channel
-from .runtime._portal import Portal as Portal
-from .runtime._runtime import Actor as Actor
-from .discovery._registry import (
-    Registrar as Registrar,
-    Arbiter as Arbiter,
-)
+from ._portal import Portal as Portal
+from ._runtime import Actor as Actor
 # from . import hilevel as hilevel
|
|
|
||||||
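Aside: despite the internal module shuffle diffed above (`._state` vs `.runtime._state`, `._discovery` vs `.discovery._api`, and so on), both branches keep re-exporting the same public names from the package root, so downstream code that sticks to those root-level names should be unaffected. A small sanity sketch using only re-exported names (the surrounding runtime behaviour is assumed to match the tests shown earlier):

```python
import trio
import tractor


async def main():
    # `open_root_actor`, `current_actor` and `is_root_process` are all
    # package-root re-exports on both branches; only deep imports such
    # as `tractor._state` vs `tractor.runtime._state` differ.
    async with tractor.open_root_actor():
        assert tractor.is_root_process()
        print(tractor.current_actor())


trio.run(main)
```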
|
|
@@ -27,17 +27,17 @@ from trio import (
     SocketListener,
 )

-from ..log import get_logger
-from ..runtime._state import (
+from .log import get_logger
+from ._state import (
     _def_tpt_proto,
 )
-from ..ipc._tcp import TCPAddress
-from ..ipc._uds import UDSAddress
+from .ipc._tcp import TCPAddress
+from .ipc._uds import UDSAddress

 if TYPE_CHECKING:
-    from ..runtime._runtime import Actor
+    from ._runtime import Actor

-log = get_logger()
+log = get_logger(__name__)


 # TODO, maybe breakout the netns key to a struct?

@@ -206,7 +206,7 @@ def mk_uuid() -> str:


 def wrap_address(
-    addr: UnwrappedAddress|str,
+    addr: UnwrappedAddress
 ) -> Address:
     '''
     Wrap an `UnwrappedAddress` as an `Address`-type based

@@ -257,18 +257,8 @@
             cls: Type[Address] = get_address_cls(_def_tpt_proto)
             addr: UnwrappedAddress = cls.get_root().unwrap()

-        # multiaddr-format string, e.g.
-        # '/ip4/127.0.0.1/tcp/1616'
-        case str() if addr.startswith('/'):
-            from tractor.discovery._multiaddr import (
-                parse_maddr,
-            )
-            return parse_maddr(addr)

         case _:
             # import pdbp; pdbp.set_trace()
-            # from tractor.devx import mk_pdb
-            # mk_pdb().set_trace()
             raise TypeError(
                 f'Can not wrap unwrapped-address ??\n'
                 f'type(addr): {type(addr)!r}\n'
|
|
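For reference, the multiaddr strings handled by the removed `case str()` branch follow the libp2p layering convention. The helper below is only a rough illustration of that string shape, not tractor's actual `parse_maddr()` implementation:

```python
def split_maddr(maddr: str) -> list[tuple[str, str]]:
    '''
    Decompose a libp2p-style multiaddr such as '/ip4/127.0.0.1/tcp/1616'
    into its (protocol, value) layer pairs.

    '''
    parts = maddr.strip('/').split('/')
    # consecutive (proto, value) pairs: ('ip4', '127.0.0.1'), ('tcp', '1616')
    return list(zip(parts[0::2], parts[1::2]))


assert split_maddr('/ip4/127.0.0.1/tcp/1616') == [
    ('ip4', '127.0.0.1'),
    ('tcp', '1616'),
]
```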
@@ -22,8 +22,8 @@ import argparse

 from ast import literal_eval

-from .runtime._runtime import Actor
-from .spawn._entry import _trio_main
+from ._runtime import Actor
+from ._entry import _trio_main


 def parse_uid(arg):

|
||||||
|
|
@@ -66,7 +66,7 @@ async def open_actor_cluster(
         trio.open_nursery() as tn,
         tractor.trionics.maybe_raise_from_masking_exc()
     ):
-        uid = tractor.current_actor().aid.uid
+        uid = tractor.current_actor().uid

         async def _start(name: str) -> None:
             name = f'{uid[0]}.{name}'

||||||
|
|
@ -1,48 +0,0 @@
|
||||||
# tractor: structured concurrent "actors".
|
|
||||||
# Copyright 2018-eternity Tyler Goodlet.
|
|
||||||
|
|
||||||
# This program is free software: you can redistribute it and/or modify
|
|
||||||
# it under the terms of the GNU Affero General Public License as published by
|
|
||||||
# the Free Software Foundation, either version 3 of the License, or
|
|
||||||
# (at your option) any later version.
|
|
||||||
|
|
||||||
# This program is distributed in the hope that it will be useful,
|
|
||||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
|
||||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
|
||||||
# GNU Affero General Public License for more details.
|
|
||||||
|
|
||||||
# You should have received a copy of the GNU Affero General Public License
|
|
||||||
# along with this program. If not, see <https://www.gnu.org/licenses/>.
|
|
||||||
|
|
||||||
'''
|
|
||||||
(Hot) coad (re-)load utils for python.
|
|
||||||
|
|
||||||
'''
|
|
||||||
import importlib
|
|
||||||
from pathlib import Path
|
|
||||||
import sys
|
|
||||||
from types import ModuleType
|
|
||||||
|
|
||||||
# ?TODO, move this into internal libs?
|
|
||||||
# -[ ] we already use it in `modden.config._pymod` as well
|
|
||||||
def load_module_from_path(
|
|
||||||
path: Path,
|
|
||||||
module_name: str|None = None,
|
|
||||||
) -> ModuleType:
|
|
||||||
'''
|
|
||||||
Taken from SO,
|
|
||||||
https://stackoverflow.com/a/67208147
|
|
||||||
|
|
||||||
which is based on stdlib docs,
|
|
||||||
https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly
|
|
||||||
|
|
||||||
'''
|
|
||||||
module_name = module_name or path.stem
|
|
||||||
spec = importlib.util.spec_from_file_location(
|
|
||||||
module_name,
|
|
||||||
str(path),
|
|
||||||
)
|
|
||||||
module = importlib.util.module_from_spec(spec)
|
|
||||||
sys.modules[module_name] = module
|
|
||||||
spec.loader.exec_module(module)
|
|
||||||
return module
|
|
||||||
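Since the hot-reload helper above only ships on the `big_boi_docs` side, here is a self-contained copy plus a usage sketch for anyone wanting the same trick against `main`; the `plugins/my_plugin.py` path is illustrative only.

```python
import importlib.util
import sys
from pathlib import Path
from types import ModuleType


def load_module_from_path(
    path: Path,
    module_name: str | None = None,
) -> ModuleType:
    # importlib "spec" machinery per the stdlib docs: import a source
    # file directly from an arbitrary filesystem path.
    module_name = module_name or path.stem
    spec = importlib.util.spec_from_file_location(module_name, str(path))
    module = importlib.util.module_from_spec(spec)
    sys.modules[module_name] = module
    spec.loader.exec_module(module)
    return module


plugin_path = Path('plugins/my_plugin.py')  # hypothetical plugin file
if plugin_path.exists():
    plugin = load_module_from_path(plugin_path)
    # editing the file and calling again re-executes the module body,
    # giving simple hot code (re-)loading during development.
    plugin = load_module_from_path(plugin_path)
```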
|
|
@ -70,7 +70,6 @@ from ._exceptions import (
|
||||||
MsgTypeError,
|
MsgTypeError,
|
||||||
RemoteActorError,
|
RemoteActorError,
|
||||||
StreamOverrun,
|
StreamOverrun,
|
||||||
TransportClosed,
|
|
||||||
pack_from_raise,
|
pack_from_raise,
|
||||||
unpack_error,
|
unpack_error,
|
||||||
)
|
)
|
||||||
|
|
@ -97,7 +96,7 @@ from ._streaming import (
|
||||||
MsgStream,
|
MsgStream,
|
||||||
open_stream_from_ctx,
|
open_stream_from_ctx,
|
||||||
)
|
)
|
||||||
from .runtime._state import (
|
from ._state import (
|
||||||
current_actor,
|
current_actor,
|
||||||
debug_mode,
|
debug_mode,
|
||||||
_ctxvar_Context,
|
_ctxvar_Context,
|
||||||
|
|
@ -107,14 +106,14 @@ from .trionics import (
|
||||||
)
|
)
|
||||||
# ------ - ------
|
# ------ - ------
|
||||||
if TYPE_CHECKING:
|
if TYPE_CHECKING:
|
||||||
from .runtime._portal import Portal
|
from ._portal import Portal
|
||||||
from .runtime._runtime import Actor
|
from ._runtime import Actor
|
||||||
from .ipc._transport import MsgTransport
|
from .ipc._transport import MsgTransport
|
||||||
from .devx._frame_stack import (
|
from .devx._frame_stack import (
|
||||||
CallerInfo,
|
CallerInfo,
|
||||||
)
|
)
|
||||||
|
|
||||||
log = get_logger()
|
log = get_logger(__name__)
|
||||||
|
|
||||||
|
|
||||||
class Unresolved:
|
class Unresolved:
|
||||||
|
|
@ -463,11 +462,10 @@ class Context:
|
||||||
|
|
||||||
# self._cancel_called = val
|
# self._cancel_called = val
|
||||||
|
|
||||||
# TODO, use the `Actor.aid: Aid` instead!
|
|
||||||
@property
|
@property
|
||||||
def canceller(self) -> tuple[str, str]|None:
|
def canceller(self) -> tuple[str, str]|None:
|
||||||
'''
|
'''
|
||||||
`Actor.aid.uid: tuple[str, str]` of the (remote)
|
`Actor.uid: tuple[str, str]` of the (remote)
|
||||||
actor-process who's task was cancelled thus causing this
|
actor-process who's task was cancelled thus causing this
|
||||||
(side of the) context to also be cancelled.
|
(side of the) context to also be cancelled.
|
||||||
|
|
||||||
|
|
@ -500,12 +498,12 @@ class Context:
|
||||||
if from_uid := re.src_uid:
|
if from_uid := re.src_uid:
|
||||||
from_uid: tuple = tuple(from_uid)
|
from_uid: tuple = tuple(from_uid)
|
||||||
|
|
||||||
our_uid: tuple = self._actor.aid.uid
|
our_uid: tuple = self._actor.uid
|
||||||
our_canceller = self.canceller
|
our_canceller = self.canceller
|
||||||
|
|
||||||
return bool(
|
return bool(
|
||||||
isinstance((ctxc := re), ContextCancelled)
|
isinstance((ctxc := re), ContextCancelled)
|
||||||
and from_uid == self.chan.aid.uid
|
and from_uid == self.chan.uid
|
||||||
and ctxc.canceller == our_uid
|
and ctxc.canceller == our_uid
|
||||||
and our_canceller == our_uid
|
and our_canceller == our_uid
|
||||||
)
|
)
|
||||||
|
|
@ -516,7 +514,7 @@ class Context:
|
||||||
Records whether the task on the remote side of this IPC
|
Records whether the task on the remote side of this IPC
|
||||||
context acknowledged a cancel request via a relayed
|
context acknowledged a cancel request via a relayed
|
||||||
`ContextCancelled` with the `.canceller` attr set to the
|
`ContextCancelled` with the `.canceller` attr set to the
|
||||||
`Actor.aid.uid` of the local actor who's task entered
|
`Actor.uid` of the local actor who's task entered
|
||||||
`Portal.open_context()`.
|
`Portal.open_context()`.
|
||||||
|
|
||||||
This will only be `True` when `.cancel()` is called and
|
This will only be `True` when `.cancel()` is called and
|
||||||
|
|
@ -790,8 +788,8 @@ class Context:
|
||||||
# appropriately.
|
# appropriately.
|
||||||
log.runtime(
|
log.runtime(
|
||||||
'Setting remote error for ctx\n\n'
|
'Setting remote error for ctx\n\n'
|
||||||
f'<= {self.peer_side!r}: {self.chan.aid.reprol()}\n'
|
f'<= {self.peer_side!r}: {self.chan.uid}\n'
|
||||||
f'=> {self.side!r}: {self._actor.aid.reprol()}\n\n'
|
f'=> {self.side!r}: {self._actor.uid}\n\n'
|
||||||
f'{error!r}'
|
f'{error!r}'
|
||||||
)
|
)
|
||||||
self._remote_error: BaseException = error
|
self._remote_error: BaseException = error
|
||||||
|
|
@ -812,7 +810,7 @@ class Context:
|
||||||
# cancelled.
|
# cancelled.
|
||||||
#
|
#
|
||||||
# !TODO, switching to `Actor.aid` here!
|
# !TODO, switching to `Actor.aid` here!
|
||||||
if (canc := error.canceller) == self._actor.aid.uid:
|
if (canc := error.canceller) == self._actor.uid:
|
||||||
whom: str = 'us'
|
whom: str = 'us'
|
||||||
self._canceller = canc
|
self._canceller = canc
|
||||||
else:
|
else:
|
||||||
|
|
@ -1037,7 +1035,7 @@ class Context:
|
||||||
---------
|
---------
|
||||||
- after the far end cancels, the `.cancel()` calling side
|
- after the far end cancels, the `.cancel()` calling side
|
||||||
should receive a `ContextCancelled` with the
|
should receive a `ContextCancelled` with the
|
||||||
`.canceller: tuple` uid set to the current `Actor.aid.uid`.
|
`.canceller: tuple` uid set to the current `Actor.uid`.
|
||||||
|
|
||||||
- timeout (quickly) on failure to rx this ACK error-msg in
|
- timeout (quickly) on failure to rx this ACK error-msg in
|
||||||
an attempt to sidestep 2-generals when the transport
|
an attempt to sidestep 2-generals when the transport
|
||||||
|
|
@ -1066,9 +1064,9 @@ class Context:
|
||||||
)
|
)
|
||||||
reminfo: str = (
|
reminfo: str = (
|
||||||
# ' =>\n'
|
# ' =>\n'
|
||||||
# f'Context.cancel() => {self.chan.aid.uid}\n'
|
# f'Context.cancel() => {self.chan.uid}\n'
|
||||||
f'\n'
|
f'\n'
|
||||||
f'c)=> {self.chan.aid.reprol()}\n'
|
f'c)=> {self.chan.uid}\n'
|
||||||
f' |_[{self.dst_maddr}\n'
|
f' |_[{self.dst_maddr}\n'
|
||||||
f' >> {self.repr_rpc}\n'
|
f' >> {self.repr_rpc}\n'
|
||||||
# f' >> {self._nsf}() -> {codec}[dict]:\n\n'
|
# f' >> {self._nsf}() -> {codec}[dict]:\n\n'
|
||||||
|
|
@ -1212,7 +1210,7 @@ class Context:
|
||||||
|
|
||||||
'''
|
'''
|
||||||
__tracebackhide__: bool = hide_tb
|
__tracebackhide__: bool = hide_tb
|
||||||
peer_uid: tuple = self.chan.aid.uid
|
peer_uid: tuple = self.chan.uid
|
||||||
|
|
||||||
# XXX NOTE XXX: `ContextCancelled`/`StreamOverrun` absorption
|
# XXX NOTE XXX: `ContextCancelled`/`StreamOverrun` absorption
|
||||||
# for "graceful cancellation" case(s):
|
# for "graceful cancellation" case(s):
|
||||||
|
|
@ -1229,7 +1227,7 @@ class Context:
|
||||||
# (`ContextCancelled`) as an expected
|
# (`ContextCancelled`) as an expected
|
||||||
# error-msg-is-cancellation-ack IFF said
|
# error-msg-is-cancellation-ack IFF said
|
||||||
# `remote_error: ContextCancelled` has `.canceller`
|
# `remote_error: ContextCancelled` has `.canceller`
|
||||||
# set to the `Actor.aid.uid` of THIS task (i.e. the
|
# set to the `Actor.uid` of THIS task (i.e. the
|
||||||
# cancellation requesting task's actor is the actor
|
# cancellation requesting task's actor is the actor
|
||||||
# checking whether it should absorb the ctxc).
|
# checking whether it should absorb the ctxc).
|
||||||
self_ctxc: bool = self._is_self_cancelled(remote_error)
|
self_ctxc: bool = self._is_self_cancelled(remote_error)
|
||||||
|
|
@ -1680,7 +1678,7 @@ class Context:
|
||||||
|
|
||||||
elif self._started_called:
|
elif self._started_called:
|
||||||
raise RuntimeError(
|
raise RuntimeError(
|
||||||
f'called `.started()` twice on context with {self.chan.aid.uid}'
|
f'called `.started()` twice on context with {self.chan.uid}'
|
||||||
)
|
)
|
||||||
|
|
||||||
started_msg = Started(
|
started_msg = Started(
|
||||||
|
|
@ -1813,7 +1811,7 @@ class Context:
|
||||||
'''
|
'''
|
||||||
cid: str = self.cid
|
cid: str = self.cid
|
||||||
chan: Channel = self.chan
|
chan: Channel = self.chan
|
||||||
from_uid: tuple[str, str] = chan.aid.uid
|
from_uid: tuple[str, str] = chan.uid
|
||||||
send_chan: trio.MemorySendChannel = self._send_chan
|
send_chan: trio.MemorySendChannel = self._send_chan
|
||||||
nsf: NamespacePath = self._nsf
|
nsf: NamespacePath = self._nsf
|
||||||
|
|
||||||
|
|
@ -1954,22 +1952,20 @@ class Context:
|
||||||
# overrun state and that msg isn't stuck in an
|
# overrun state and that msg isn't stuck in an
|
||||||
# overflow queue what happens?!?
|
# overflow queue what happens?!?
|
||||||
|
|
||||||
local_aid = self._actor.aid
|
local_uid = self._actor.uid
|
||||||
txt: str = (
|
txt: str = (
|
||||||
'on IPC context:\n'
|
'on IPC context:\n'
|
||||||
|
|
||||||
f'<= sender: {from_uid}\n'
|
f'<= sender: {from_uid}\n'
|
||||||
f' |_ {self._nsf}()\n\n'
|
f' |_ {self._nsf}()\n\n'
|
||||||
|
|
||||||
f'=> overrun: {local_aid.reprol()!r}\n'
|
f'=> overrun: {local_uid}\n'
|
||||||
f' |_cid: {cid}\n'
|
f' |_cid: {cid}\n'
|
||||||
f' |_task: {self._task}\n'
|
f' |_task: {self._task}\n'
|
||||||
)
|
)
|
||||||
if not self._stream_opened:
|
if not self._stream_opened:
|
||||||
txt += (
|
txt += (
|
||||||
f'\n'
|
f'\n*** No stream open on `{local_uid[0]}` side! ***\n\n'
|
||||||
f'*** No stream open on `{local_aid.name}` side! ***\n'
|
|
||||||
f'\n'
|
|
||||||
f'{msg}\n'
|
f'{msg}\n'
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
@ -2118,11 +2114,7 @@ async def open_context_from_portal(
|
||||||
# XXX NOTE XXX: currenly we do NOT allow opening a contex
|
# XXX NOTE XXX: currenly we do NOT allow opening a contex
|
||||||
# with "self" since the local feeder mem-chan processing
|
# with "self" since the local feeder mem-chan processing
|
||||||
# is not built for it.
|
# is not built for it.
|
||||||
if (
|
if (uid := portal.channel.uid) == portal.actor.uid:
|
||||||
(uid := portal.channel.aid.uid)
|
|
||||||
==
|
|
||||||
portal.actor.aid.uid
|
|
||||||
):
|
|
||||||
raise RuntimeError(
|
raise RuntimeError(
|
||||||
'** !! Invalid Operation !! **\n'
|
'** !! Invalid Operation !! **\n'
|
||||||
'Can not open an IPC ctx with the local actor!\n'
|
'Can not open an IPC ctx with the local actor!\n'
|
||||||
|
|
@ -2336,7 +2328,7 @@ async def open_context_from_portal(
|
||||||
and
|
and
|
||||||
ctxc is ctx._remote_error
|
ctxc is ctx._remote_error
|
||||||
and
|
and
|
||||||
ctxc.canceller == portal.actor.aid.uid
|
ctxc.canceller == portal.actor.uid
|
||||||
):
|
):
|
||||||
log.cancel(
|
log.cancel(
|
||||||
f'Context (cid=[{ctx.cid[-6:]}..] cancelled gracefully with:\n'
|
f'Context (cid=[{ctx.cid[-6:]}..] cancelled gracefully with:\n'
|
||||||
|
|
@ -2399,21 +2391,19 @@ async def open_context_from_portal(
|
||||||
case trio.Cancelled():
|
case trio.Cancelled():
|
||||||
logmeth = log.cancel
|
logmeth = log.cancel
|
||||||
cause: str = 'cancelled'
|
cause: str = 'cancelled'
|
||||||
msg: str = (
|
|
||||||
f'ctx {ctx.side!r}-side {cause!r} with,\n'
|
|
||||||
f'{ctx.repr_outcome()!r}\n'
|
|
||||||
)
|
|
||||||
|
|
||||||
# XXX explicitly report on any non-graceful-taskc cases
|
# XXX explicitly report on any non-graceful-taskc cases
|
||||||
case _:
|
case _:
|
||||||
cause: str = 'errored'
|
cause: str = 'errored'
|
||||||
logmeth = log.exception
|
logmeth = log.exception
|
||||||
msg: str = f'ctx {ctx.side!r}-side {cause!r} with,\n'
|
|
||||||
|
|
||||||
logmeth(msg)
|
logmeth(
|
||||||
|
f'ctx {ctx.side!r}-side {cause!r} with,\n'
|
||||||
|
f'{ctx.repr_outcome()!r}\n'
|
||||||
|
)
|
||||||
|
|
||||||
if debug_mode():
|
if debug_mode():
|
||||||
# async with debug.acquire_debug_lock(portal.actor.aid.uid):
|
# async with debug.acquire_debug_lock(portal.actor.uid):
|
||||||
# pass
|
# pass
|
||||||
# TODO: factor ^ into below for non-root cases?
|
# TODO: factor ^ into below for non-root cases?
|
||||||
#
|
#
|
||||||
|
|
@ -2436,7 +2426,10 @@ async def open_context_from_portal(
|
||||||
try:
|
try:
|
||||||
# await pause(shield=True)
|
# await pause(shield=True)
|
||||||
await ctx.cancel()
|
await ctx.cancel()
|
||||||
except TransportClosed:
|
except (
|
||||||
|
trio.BrokenResourceError,
|
||||||
|
trio.ClosedResourceError,
|
||||||
|
):
|
||||||
log.warning(
|
log.warning(
|
||||||
'IPC connection for context is broken?\n'
|
'IPC connection for context is broken?\n'
|
||||||
f'task: {ctx.cid}\n'
|
f'task: {ctx.cid}\n'
|
||||||
|
|
|
||||||
|
|
@ -20,8 +20,6 @@ management of (service) actors.
|
||||||
|
|
||||||
"""
|
"""
|
||||||
from __future__ import annotations
|
from __future__ import annotations
|
||||||
import ipaddress
|
|
||||||
import socket
|
|
||||||
from typing import (
|
from typing import (
|
||||||
AsyncGenerator,
|
AsyncGenerator,
|
||||||
AsyncContextManager,
|
AsyncContextManager,
|
||||||
|
|
@ -30,129 +28,39 @@ from typing import (
|
||||||
from contextlib import asynccontextmanager as acm
|
from contextlib import asynccontextmanager as acm
|
||||||
|
|
||||||
from tractor.log import get_logger
|
from tractor.log import get_logger
|
||||||
from ..trionics import (
|
from .trionics import (
|
||||||
gather_contexts,
|
gather_contexts,
|
||||||
collapse_eg,
|
collapse_eg,
|
||||||
)
|
)
|
||||||
from ..ipc import _connect_chan, Channel
|
from .ipc import _connect_chan, Channel
|
||||||
from ..ipc._tcp import TCPAddress
|
|
||||||
from ..ipc._uds import UDSAddress
|
|
||||||
from ._addr import (
|
from ._addr import (
|
||||||
UnwrappedAddress,
|
UnwrappedAddress,
|
||||||
Address,
|
Address,
|
||||||
wrap_address,
|
wrap_address
|
||||||
)
|
)
|
||||||
from ..runtime._portal import (
|
from ._portal import (
|
||||||
Portal,
|
Portal,
|
||||||
open_portal,
|
open_portal,
|
||||||
LocalPortal,
|
LocalPortal,
|
||||||
)
|
)
|
||||||
from ..runtime._state import (
|
from ._state import (
|
||||||
current_actor,
|
current_actor,
|
||||||
_runtime_vars,
|
_runtime_vars,
|
||||||
_def_tpt_proto,
|
_def_tpt_proto,
|
||||||
)
|
)
|
||||||
|
|
||||||
if TYPE_CHECKING:
|
if TYPE_CHECKING:
|
||||||
from ..runtime._runtime import Actor
|
from ._runtime import Actor
|
||||||
|
|
||||||
|
|
||||||
log = get_logger()
|
log = get_logger(__name__)
|
||||||
|
|
||||||
|
|
||||||
def _is_local_addr(addr: Address) -> bool:
|
|
||||||
'''
|
|
||||||
Determine whether `addr` is reachable on the
|
|
||||||
local host by inspecting address type and
|
|
||||||
comparing hostnames/PIDs.
|
|
||||||
|
|
||||||
- `UDSAddress` is always local (filesystem-bound)
|
|
||||||
- `TCPAddress` is local when its host is a
|
|
||||||
loopback IP or matches one of the machine's
|
|
||||||
own interface addresses.
|
|
||||||
|
|
||||||
'''
|
|
||||||
if isinstance(addr, UDSAddress):
|
|
||||||
return True
|
|
||||||
|
|
||||||
if isinstance(addr, TCPAddress):
|
|
||||||
try:
|
|
||||||
ip = ipaddress.ip_address(addr._host)
|
|
||||||
except ValueError:
|
|
||||||
return False
|
|
||||||
|
|
||||||
if ip.is_loopback:
|
|
||||||
return True
|
|
||||||
|
|
||||||
# check if this IP belongs to any of our
|
|
||||||
# local network interfaces.
|
|
||||||
try:
|
|
||||||
local_ips: set[str] = {
|
|
||||||
info[4][0]
|
|
||||||
for info in socket.getaddrinfo(
|
|
||||||
socket.gethostname(),
|
|
||||||
None,
|
|
||||||
)
|
|
||||||
}
|
|
||||||
return addr._host in local_ips
|
|
||||||
except socket.gaierror:
|
|
||||||
return False
|
|
||||||
|
|
||||||
return False
|
|
||||||
|
|
||||||
|
|
||||||
def prefer_addr(
|
|
||||||
addrs: list[UnwrappedAddress],
|
|
||||||
) -> UnwrappedAddress:
|
|
||||||
'''
|
|
||||||
Select the "best" transport address from a
|
|
||||||
multihomed actor's address list based on
|
|
||||||
locality heuristics.
|
|
||||||
|
|
||||||
Preference order (highest -> lowest):
|
|
||||||
1. UDS (same-host guaranteed, lowest overhead)
|
|
||||||
2. TCP loopback / same-host IP
|
|
||||||
3. TCP remote (only option for distributed)
|
|
||||||
|
|
||||||
When multiple addrs share the same priority
|
|
||||||
tier, the last-registered (latest) entry is
|
|
||||||
preferred.
|
|
||||||
|
|
||||||
'''
|
|
||||||
if len(addrs) == 1:
|
|
||||||
return addrs[0]
|
|
||||||
|
|
||||||
local_uds: list[UnwrappedAddress] = []
|
|
||||||
local_tcp: list[UnwrappedAddress] = []
|
|
||||||
remote: list[UnwrappedAddress] = []
|
|
||||||
|
|
||||||
for unwrapped in addrs:
|
|
||||||
wrapped: Address = wrap_address(unwrapped)
|
|
||||||
if isinstance(wrapped, UDSAddress):
|
|
||||||
local_uds.append(unwrapped)
|
|
||||||
elif _is_local_addr(wrapped):
|
|
||||||
local_tcp.append(unwrapped)
|
|
||||||
else:
|
|
||||||
remote.append(unwrapped)
|
|
||||||
|
|
||||||
# prefer UDS > local TCP > remote TCP;
|
|
||||||
# within each tier take the latest entry.
|
|
||||||
if local_uds:
|
|
||||||
return local_uds[-1]
|
|
||||||
if local_tcp:
|
|
||||||
return local_tcp[-1]
|
|
||||||
if remote:
|
|
||||||
return remote[-1]
|
|
||||||
|
|
||||||
# fallback: last registered addr
|
|
||||||
return addrs[-1]
|
|
||||||
|
|
||||||
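The preference ordering documented for `prefer_addr()` above (UDS first, then local TCP, then remote TCP, with the latest-registered entry winning within a tier) can be restated as a tiny standalone ranking. The toy below deliberately avoids tractor's `Address` types and real transport address formats; it only illustrates the tier logic:

```python
def pick_preferred(addrs: list[tuple[str, str]]) -> tuple[str, str]:
    # lower tier number == more preferred; later entries win ties.
    tiers = {'uds': 0, 'tcp-local': 1, 'tcp-remote': 2}
    best: tuple[str, str] | None = None
    for kind, val in addrs:
        if best is None or tiers[kind] <= tiers[best[0]]:
            best = (kind, val)
    return best


assert pick_preferred([
    ('tcp-remote', '10.0.0.5:1616'),
    ('tcp-local', '127.0.0.1:1616'),
    ('uds', '/tmp/actor.sock'),
]) == ('uds', '/tmp/actor.sock')
```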
|
|
||||||
@acm
|
@acm
|
||||||
async def get_registry(
|
async def get_registry(
|
||||||
addr: UnwrappedAddress|None = None,
|
addr: UnwrappedAddress|None = None,
|
||||||
) -> AsyncGenerator[
|
) -> AsyncGenerator[
|
||||||
Portal|LocalPortal|None,
|
Portal | LocalPortal | None,
|
||||||
None,
|
None,
|
||||||
]:
|
]:
|
||||||
'''
|
'''
|
||||||
|
|
@ -164,8 +72,8 @@ async def get_registry(
|
||||||
'''
|
'''
|
||||||
actor: Actor = current_actor()
|
actor: Actor = current_actor()
|
||||||
if actor.is_registrar:
|
if actor.is_registrar:
|
||||||
# we're already the registrar
|
# we're already the arbiter
|
||||||
# (likely a re-entrant call from the registrar actor)
|
# (likely a re-entrant call from the arbiter actor)
|
||||||
yield LocalPortal(
|
yield LocalPortal(
|
||||||
actor,
|
actor,
|
||||||
Channel(transport=None)
|
Channel(transport=None)
|
||||||
|
|
@ -183,13 +91,10 @@ async def get_registry(
|
||||||
|
|
||||||
|
|
||||||
@acm
|
@acm
|
||||||
async def get_root(**kwargs) -> AsyncGenerator[Portal, None]:
|
async def get_root(
|
||||||
'''
|
**kwargs,
|
||||||
Deliver the current actor's "root process" actor (yes in actor
|
) -> AsyncGenerator[Portal, None]:
|
||||||
and proc tree terms) by delivering a `Portal` from the spawn-time
|
|
||||||
provided contact address.
|
|
||||||
|
|
||||||
'''
|
|
||||||
# TODO: rename mailbox to `_root_maddr` when we finally
|
# TODO: rename mailbox to `_root_maddr` when we finally
|
||||||
# add and impl libp2p multi-addrs?
|
# add and impl libp2p multi-addrs?
|
||||||
addr = _runtime_vars['_root_mailbox']
|
addr = _runtime_vars['_root_mailbox']
|
||||||
|
|
@ -245,27 +150,21 @@ async def query_actor(
|
||||||
regaddr: UnwrappedAddress|None = None,
|
regaddr: UnwrappedAddress|None = None,
|
||||||
|
|
||||||
) -> AsyncGenerator[
|
) -> AsyncGenerator[
|
||||||
tuple[UnwrappedAddress|None, Portal|LocalPortal|None],
|
UnwrappedAddress|None,
|
||||||
None,
|
None,
|
||||||
]:
|
]:
|
||||||
'''
|
'''
|
||||||
Lookup a transport address (by actor name) via querying a registrar
|
Lookup a transport address (by actor name) via querying a registrar
|
||||||
listening @ `regaddr`.
|
listening @ `regaddr`.
|
||||||
|
|
||||||
Yields a `tuple` of `(addr, reg_portal)` where,
|
Returns the transport protocol (socket) address or `None` if no
|
||||||
- `addr` is the transport protocol (socket) address or `None` if
|
entry under that name exists.
|
||||||
no entry under that name exists,
|
|
||||||
- `reg_portal` is the `Portal` (or `LocalPortal` when the
|
|
||||||
current actor is the registrar) used for the lookup (or
|
|
||||||
`None` when the peer was found locally via
|
|
||||||
`get_peer_by_name()`).
|
|
||||||
|
|
||||||
'''
|
'''
|
||||||
actor: Actor = current_actor()
|
actor: Actor = current_actor()
|
||||||
if (
|
if (
|
||||||
name == 'registrar'
|
name == 'registrar'
|
||||||
and
|
and actor.is_registrar
|
||||||
actor.is_registrar
|
|
||||||
):
|
):
|
||||||
raise RuntimeError(
|
raise RuntimeError(
|
||||||
'The current actor IS the registry!?'
|
'The current actor IS the registry!?'
|
||||||
|
|
@ -273,80 +172,38 @@ async def query_actor(
|
||||||
|
|
||||||
maybe_peers: list[Channel]|None = get_peer_by_name(name)
|
maybe_peers: list[Channel]|None = get_peer_by_name(name)
|
||||||
if maybe_peers:
|
if maybe_peers:
|
||||||
yield maybe_peers[0].raddr, None
|
yield maybe_peers[0].raddr
|
||||||
return
|
return
|
||||||
|
|
||||||
reg_portal: Portal|LocalPortal
|
reg_portal: Portal
|
||||||
regaddr: Address = wrap_address(regaddr) or actor.reg_addrs[0]
|
regaddr: Address = wrap_address(regaddr) or actor.reg_addrs[0]
|
||||||
async with get_registry(regaddr) as reg_portal:
|
async with get_registry(regaddr) as reg_portal:
|
||||||
addrs: list[UnwrappedAddress]|None = (
|
# TODO: return portals to all available actors - for now
|
||||||
await reg_portal.run_from_ns(
|
# just the last one that registered
|
||||||
|
addr: UnwrappedAddress = await reg_portal.run_from_ns(
|
||||||
'self',
|
'self',
|
||||||
'find_actor',
|
'find_actor',
|
||||||
name=name,
|
name=name,
|
||||||
)
|
)
|
||||||
)
|
yield addr
|
||||||
if addrs:
|
|
||||||
addr: UnwrappedAddress = prefer_addr(addrs)
|
|
||||||
else:
|
|
||||||
addr = None
|
|
||||||
yield addr, reg_portal
|
|
||||||
|
|
||||||
@acm
|
@acm
|
||||||
async def maybe_open_portal(
|
async def maybe_open_portal(
|
||||||
addr: UnwrappedAddress,
|
addr: UnwrappedAddress,
|
||||||
name: str,
|
name: str,
|
||||||
):
|
):
|
||||||
'''
|
|
||||||
Open a `Portal` to the actor serving @ `addr` or `None` if no
|
|
||||||
peer can be contacted or found.
|
|
||||||
|
|
||||||
'''
|
|
||||||
async with query_actor(
|
async with query_actor(
|
||||||
name=name,
|
name=name,
|
||||||
regaddr=addr,
|
regaddr=addr,
|
||||||
) as (addr, reg_portal):
|
) as addr:
|
||||||
if not addr:
|
pass
|
||||||
yield None
|
|
||||||
return
|
|
||||||
|
|
||||||
try:
|
if addr:
|
||||||
async with _connect_chan(addr) as chan:
|
async with _connect_chan(addr) as chan:
|
||||||
async with open_portal(chan) as portal:
|
async with open_portal(chan) as portal:
|
||||||
yield portal
|
yield portal
|
||||||
|
|
||||||
# most likely we were unable to connect the
|
|
||||||
# transport and there is likely a stale entry in
|
|
||||||
# the registry actor's table, thus we need to
|
|
||||||
# instruct it to clear that stale entry and then
|
|
||||||
# more silently (pretend there was no reason but
|
|
||||||
# to) indicate that the target actor can't be
|
|
||||||
# contacted at that addr.
|
|
||||||
except OSError:
|
|
||||||
# NOTE: ensure we delete the stale entry
|
|
||||||
# from the registrar actor when available.
|
|
||||||
if reg_portal is not None:
|
|
||||||
uid: tuple[str, str]|None = await reg_portal.run_from_ns(
|
|
||||||
'self',
|
|
||||||
'delete_addr',
|
|
||||||
addr=addr,
|
|
||||||
)
|
|
||||||
if uid:
|
|
||||||
log.warning(
|
|
||||||
f'Deleted stale registry entry !\n'
|
|
||||||
f'addr: {addr!r}\n'
|
|
||||||
f'uid: {uid!r}\n'
|
|
||||||
)
|
|
||||||
else:
|
else:
|
||||||
log.warning(
|
|
||||||
f'No registry entry found for addr: {addr!r}'
|
|
||||||
)
|
|
||||||
else:
|
|
||||||
log.warning(
|
|
||||||
f'Connection to {addr!r} failed'
|
|
||||||
f' and no registry portal available'
|
|
||||||
f' to delete stale entry.'
|
|
||||||
)
|
|
||||||
yield None
|
yield None
|
||||||
|
|
||||||
|
|
||||||
|
|
@ -364,10 +221,10 @@ async def find_actor(
|
||||||
None,
|
None,
|
||||||
]:
|
]:
|
||||||
'''
|
'''
|
||||||
Ask the registrar to find actor(s) by name.
|
Ask the arbiter to find actor(s) by name.
|
||||||
|
|
||||||
Returns a connected portal to the last registered
|
Returns a connected portal to the last registered matching actor
|
||||||
matching actor known to the registrar.
|
known to the arbiter.
|
||||||
|
|
||||||
'''
|
'''
|
||||||
# optimization path, use any pre-existing peer channel
|
# optimization path, use any pre-existing peer channel
|
||||||
|
|
@ -415,7 +272,7 @@ async def find_actor(
|
||||||
if not any(portals):
|
if not any(portals):
|
||||||
if raise_on_none:
|
if raise_on_none:
|
||||||
raise RuntimeError(
|
raise RuntimeError(
|
||||||
f'No actor {name!r} found registered @ {registry_addrs!r}'
|
f'No actor "{name}" found registered @ {registry_addrs}'
|
||||||
)
|
)
|
||||||
yield None
|
yield None
|
||||||
return
|
return
|
||||||
|
|
@ -466,9 +323,9 @@ async def wait_for_actor(
|
||||||
name=name,
|
name=name,
|
||||||
)
|
)
|
||||||
|
|
||||||
# select the best transport addr from
|
# get latest registered addr by default?
|
||||||
# the (possibly multihomed) addr list.
|
# TODO: offer multi-portal yields in multi-homed case?
|
||||||
addr: UnwrappedAddress = prefer_addr(addrs)
|
addr: UnwrappedAddress = addrs[-1]
|
||||||
|
|
||||||
async with _connect_chan(addr) as chan:
|
async with _connect_chan(addr) as chan:
|
||||||
async with open_portal(chan) as portal:
|
async with open_portal(chan) as portal:
|
||||||
|
|
@@ -21,6 +21,7 @@ Sub-process entry points.
   from __future__ import annotations
   from functools import partial
   import multiprocessing as mp
+  # import os
   from typing import (
   Any,
   TYPE_CHECKING,

@@ -28,19 +29,19 @@ from typing import (
   import trio  # type: ignore

-  from ..log import (
+  from .log import (
   get_console_log,
   get_logger,
   )
-  from ..runtime import _state
+  from . import _state
-  from ..devx import (
+  from .devx import (
   _frame_stack,
   pformat,
   )
-  # from ..msg import pretty_struct
+  # from .msg import pretty_struct
-  from ..to_asyncio import run_as_asyncio_guest
+  from .to_asyncio import run_as_asyncio_guest
-  from ..discovery._addr import UnwrappedAddress
+  from ._addr import UnwrappedAddress
-  from ..runtime._runtime import (
+  from ._runtime import (
   async_main,
   Actor,
   )

@@ -49,7 +50,7 @@ if TYPE_CHECKING:
   from ._spawn import SpawnMethodKey

-  log = get_logger()
+  log = get_logger(__name__)

   def _mp_main(

@@ -71,15 +72,11 @@ def _mp_main(
   spawn_ctx: mp.context.BaseContext = try_set_start_method(start_method)
   assert spawn_ctx

-  # XXX, enable root log at level
   if actor.loglevel is not None:
   log.info(
-  f'Setting loglevel for {actor.uid} to {actor.loglevel!r}'
+  f'Setting loglevel for {actor.uid} to {actor.loglevel}'
-  )
-  get_console_log(
-  level=actor.loglevel,
-  name='tractor',
   )
+  get_console_log(actor.loglevel)

   # TODO: use scops headers like for `trio` below!
   # (well after we libify it maybe..)

@@ -129,12 +126,8 @@ def _trio_main(
   parent_addr=parent_addr
   )

-  # XXX, enable root log at level
   if actor.loglevel is not None:
-  get_console_log(
-  level=actor.loglevel,
-  name='tractor',
-  )
+  get_console_log(actor.loglevel)
   log.info(
   f'Starting `trio` subactor from parent @ '
   f'{parent_addr}\n'

@@ -43,7 +43,7 @@ from msgspec import (
   ValidationError,
   )

-  from tractor.runtime._state import current_actor
+  from tractor._state import current_actor
   from tractor.log import get_logger
   from tractor.msg import (
   Error,

@@ -187,31 +187,7 @@ _body_fields: list[str] = list(
   )

-  def reg_err_types(
-  exc_types: list[Type[Exception]],
-  ) -> None:
-  '''
-  Register custom exception types for local lookup.
-
-  Such that error types can be registered by an external
-  `tractor`-use-app code base which are expected to be raised
-  remotely; enables them being re-raised on the receiver side of
-  some inter-actor IPC dialog.
-
-  '''
-  for exc_type in exc_types:
-  log.debug(
-  f'Register custom exception,\n'
-  f'{exc_type!r}\n'
-  )
-  setattr(
-  _this_mod,
-  exc_type.__name__,
-  exc_type,
-  )
-
-  def get_err_type(type_name: str) -> Type[BaseException]|None:
+  def get_err_type(type_name: str) -> BaseException|None:
   '''
   Look up an exception type by name from the set of locally known
   namespaces:

@@ -325,8 +301,7 @@ class RemoteActorError(Exception):
   # also pertains to our long long oustanding issue XD
   # https://github.com/goodboy/tractor/issues/5
   self._boxed_type: BaseException = boxed_type
-  self._src_type: Type[BaseException]|None = None
+  self._src_type: BaseException|None = None
-  self._src_type_resolved: bool = False
   self._ipc_msg: Error|None = ipc_msg
   self._extra_msgdata = extra_msgdata

@@ -435,41 +410,24 @@ class RemoteActorError(Exception):
   return self._ipc_msg.src_type_str

   @property
-  def src_type(self) -> Type[BaseException]|None:
+  def src_type(self) -> str:
   '''
-  Error type raised by original remote faulting
-  actor.
+  Error type raised by original remote faulting actor.

-  When the error has only been relayed a single
-  actor-hop this will be the same as
-  `.boxed_type`.
+  When the error has only been relayed a single actor-hop
+  this will be the same as the `.boxed_type`.

-  If the type can not be resolved locally (i.e.
-  it was not registered via `reg_err_types()`)
-  a warning is logged and `None` is returned;
-  all string-level error info (`.src_type_str`,
-  `.tb_str`, etc.) remains available.
-
   '''
-  if not self._src_type_resolved:
-  self._src_type_resolved = True
-
-  if self._ipc_msg is None:
-  return None
-
+  if self._src_type is None:
   self._src_type = get_err_type(
   self._ipc_msg.src_type_str
   )

   if not self._src_type:
-  log.warning(
-  f'Failed to lookup src error type via\n'
-  f'`tractor._exceptions.get_err_type()`:\n'
-  f'\n'
-  f'`{self._ipc_msg.src_type_str}`'
-  f' is not registered!\n'
-  f'\n'
-  f'Call `reg_err_types()` to enable'
-  f' full type reconstruction.\n'
+  raise TypeError(
+  f'Failed to lookup src error type with '
+  f'`tractor._exceptions.get_err_type()` :\n'
+  f'{self.src_type_str}'
   )

   return self._src_type

@@ -477,30 +435,20 @@ class RemoteActorError(Exception):
   @property
   def boxed_type_str(self) -> str:
   '''
-  String-name of the (last hop's) boxed error
-  type.
+  String-name of the (last hop's) boxed error type.

-  Falls back to the IPC-msg-encoded type-name
-  str when the type can not be resolved locally
-  (e.g. unregistered custom errors).
-
   '''
   # TODO, maybe support also serializing the
-  # `ExceptionGroup.exceptions: list[BaseException]`
-  # set under certain conditions?
+  # `ExceptionGroup.exeptions: list[BaseException]` set under
+  # certain conditions?
   bt: Type[BaseException] = self.boxed_type
   if bt:
   return str(bt.__name__)

-  # fallback to the str name from the IPC msg
-  # when the type obj can't be resolved.
-  if self._ipc_msg:
-  return self._ipc_msg.boxed_type_str
-
-  return '<unknown>'
+  return ''

   @property
-  def boxed_type(self) -> Type[BaseException]|None:
+  def boxed_type(self) -> Type[BaseException]:
   '''
   Error type boxed by last actor IPC hop.

@@ -729,22 +677,10 @@ class RemoteActorError(Exception):
   failing actor's remote env.

   '''
-  # TODO: better tb insertion and all the fancier
-  # dunder metadata stuff as per `.__context__`
-  # etc. and friends:
+  # TODO: better tb insertion and all the fancier dunder
+  # metadata stuff as per `.__context__` etc. and friends:
   # https://github.com/python-trio/trio/issues/611
-  src_type_ref: Type[BaseException]|None = (
-  self.src_type
-  )
-  if src_type_ref is None:
-  # unresolvable type: fall back to
-  # a `RuntimeError` preserving original
-  # traceback + type name.
-  return RuntimeError(
-  f'{self.src_type_str}: '
-  f'{self.tb_str}'
-  )
-
+  src_type_ref: Type[BaseException] = self.src_type
   return src_type_ref(self.tb_str)

   # TODO: local recontruction of nested inception for a given

@@ -1046,7 +982,6 @@ class TransportClosed(Exception):
   '''
   __tracebackhide__: bool = hide_tb
   message: str = message or self.message

   # when a cause is set, slap it onto the log emission.
   if cause := self.src_exc:
   cause_tb_str: str = ''.join(

@@ -1054,7 +989,7 @@ class TransportClosed(Exception):
   )
   message += (
   f'{cause_tb_str}\n'  # tb
-  f'{cause!r}\n'  # exc repr
+  f' {cause}\n'  # exc repr
   )

   getattr(

@@ -1273,31 +1208,14 @@ def unpack_error(
   if not isinstance(msg, Error):
   return None

-  # try to lookup a suitable error type from the
-  # local runtime env then use it to construct a
-  # local instance.
+  # try to lookup a suitable error type from the local runtime
+  # env then use it to construct a local instance.
+  # boxed_type_str: str = error_dict['boxed_type_str']
   boxed_type_str: str = msg.boxed_type_str
-  boxed_type: Type[BaseException]|None = get_err_type(
-  boxed_type_str
-  )
-
-  if boxed_type is None:
-  log.warning(
-  f'Failed to resolve remote error type\n'
-  f'`{boxed_type_str}` - boxing as\n'
-  f'`RemoteActorError` with original\n'
-  f'traceback preserved.\n'
-  f'\n'
-  f'Call `reg_err_types()` to enable\n'
-  f'full type reconstruction.\n'
-  )
-
-  # retrieve the error's msg-encoded remote-env
-  # info
-  message: str = (
-  f'remote task raised a '
-  f'{msg.boxed_type_str!r}\n'
-  )
+  boxed_type: Type[BaseException] = get_err_type(boxed_type_str)
+
+  # retrieve the error's msg-encoded remotoe-env info
+  message: str = f'remote task raised a {msg.boxed_type_str!r}\n'

   # TODO: do we even really need these checks for RAEs?
   if boxed_type_str in [

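The left-hand branch adds a `reg_err_types()` helper so that application-defined exception types can be resolved by `get_err_type()` and re-raised on the receiving side of an inter-actor IPC dialog, instead of degrading to a generic boxed error. A minimal sketch of how it would presumably be called; the import path `tractor._exceptions` is taken from the f-strings in the hunks above, but the final public export location is an assumption, as is the example error class.

```python
from tractor._exceptions import reg_err_types  # import path assumed

class OrderRejected(Exception):
    '''App-specific error expected to cross actor boundaries.'''

# Register the type once, early at program startup, so that
# `get_err_type('OrderRejected')` can resolve it when an IPC `Error`
# msg carrying this `src_type_str`/`boxed_type_str` arrives.
reg_err_types([OrderRejected])
```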
@@ -125,7 +125,7 @@ class PatchedForkServer(ForkServer):
   self._forkserver_pid = None

   # XXX only thing that changed!
-  cmd = ('from tractor.spawn._forkserver_override import main; ' +
+  cmd = ('from tractor._forkserver_override import main; ' +
   'main(%d, %d, %r, **%r)')

   if self._preload_modules:

@@ -0,0 +1,116 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.

# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.

"""
Helpers pulled mostly verbatim from ``multiprocessing.spawn``
to aid with "fixing up" the ``__main__`` module in subprocesses.

These helpers are needed for any spawing backend that doesn't already
handle this. For example when using ``trio_run_in_process`` it is needed
but obviously not when we're already using ``multiprocessing``.

"""
import os
import sys
import platform
import types
import runpy


ORIGINAL_DIR = os.path.abspath(os.getcwd())


def _mp_figure_out_main() -> dict[str, str]:
    """Taken from ``multiprocessing.spawn.get_preparation_data()``.

    Retrieve parent actor `__main__` module data.
    """
    d = {}
    # Figure out whether to initialise main in the subprocess as a module
    # or through direct execution (or to leave it alone entirely)
    main_module = sys.modules['__main__']
    main_mod_name = getattr(main_module.__spec__, "name", None)
    if main_mod_name is not None:
        d['init_main_from_name'] = main_mod_name
    # elif sys.platform != 'win32' or (not WINEXE and not WINSERVICE):
    elif platform.system() != 'Windows':
        main_path = getattr(main_module, '__file__', None)
        if main_path is not None:
            if (
                not os.path.isabs(main_path) and (
                    ORIGINAL_DIR is not None)
            ):
                # process.ORIGINAL_DIR is not None):
                #     main_path = os.path.join(process.ORIGINAL_DIR, main_path)
                main_path = os.path.join(ORIGINAL_DIR, main_path)
            d['init_main_from_path'] = os.path.normpath(main_path)

    return d


# Multiprocessing module helpers to fix up the main module in
# spawned subprocesses
def _fixup_main_from_name(mod_name: str) -> None:
    # __main__.py files for packages, directories, zip archives, etc, run
    # their "main only" code unconditionally, so we don't even try to
    # populate anything in __main__, nor do we make any changes to
    # __main__ attributes
    current_main = sys.modules['__main__']
    if mod_name == "__main__" or mod_name.endswith(".__main__"):
        return

    # If this process was forked, __main__ may already be populated
    if getattr(current_main.__spec__, "name", None) == mod_name:
        return

    # Otherwise, __main__ may contain some non-main code where we need to
    # support unpickling it properly. We rerun it as __mp_main__ and make
    # the normal __main__ an alias to that
    # old_main_modules.append(current_main)
    main_module = types.ModuleType("__mp_main__")
    main_content = runpy.run_module(mod_name,
                                    run_name="__mp_main__",
                                    alter_sys=True)  # type: ignore
    main_module.__dict__.update(main_content)
    sys.modules['__main__'] = sys.modules['__mp_main__'] = main_module


def _fixup_main_from_path(main_path: str) -> None:
    # If this process was forked, __main__ may already be populated
    current_main = sys.modules['__main__']

    # Unfortunately, the main ipython launch script historically had no
    # "if __name__ == '__main__'" guard, so we work around that
    # by treating it like a __main__.py file
    # See https://github.com/ipython/ipython/issues/4698
    main_name = os.path.splitext(os.path.basename(main_path))[0]
    if main_name == 'ipython':
        return

    # Otherwise, if __file__ already has the setting we expect,
    # there's nothing more to do
    if getattr(current_main, '__file__', None) == main_path:
        return

    # If the parent process has sent a path through rather than a module
    # name we assume it is an executable script that may contain
    # non-main code that needs to be executed
    # old_main_modules.append(current_main)
    main_module = types.ModuleType("__mp_main__")
    main_content = runpy.run_path(main_path,
                                  run_name="__mp_main__")  # type: ignore
    main_module.__dict__.update(main_content)
    sys.modules['__main__'] = sys.modules['__mp_main__'] = main_module

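For context on the helper above: `_mp_figure_out_main()` mirrors `multiprocessing.spawn.get_preparation_data()` and only reports how a child process should re-initialise `__main__`. A rough sketch of the shapes it returns, with the module name and path made up purely for illustration:

```python
# parent started as `python -m myapp.daemon` (hypothetical module name):
#     {'init_main_from_name': 'myapp.daemon'}
# parent started as `python ./scripts/run.py` on non-Windows
# (hypothetical path, normalised and made absolute):
#     {'init_main_from_path': '/abs/path/to/scripts/run.py'}
parent_main_data: dict[str, str] = _mp_figure_out_main()
```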
@@ -0,0 +1,151 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.

# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.

'''
Multiaddress parser and utils according the spec(s) defined by
`libp2p` and used in dependent project such as `ipfs`:

- https://docs.libp2p.io/concepts/fundamentals/addressing/
- https://github.com/libp2p/specs/blob/master/addressing/README.md

'''
from typing import Iterator

from bidict import bidict

# TODO: see if we can leverage libp2p ecosys projects instead of
# rolling our own (parser) impls of the above addressing specs:
# - https://github.com/libp2p/py-libp2p
# - https://docs.libp2p.io/concepts/nat/circuit-relay/#relay-addresses
# prots: bidict[int, str] = bidict({
prots: bidict[int, str] = {
    'ipv4': 3,
    'ipv6': 3,
    'wg': 3,

    'tcp': 4,
    'udp': 4,

    # TODO: support the next-gen shite Bo
    # 'quic': 4,
    # 'ssh': 7,  # via rsyscall bootstrapping
}

prot_params: dict[str, tuple[str]] = {
    'ipv4': ('addr',),
    'ipv6': ('addr',),
    'wg': ('addr', 'port', 'pubkey'),

    'tcp': ('port',),
    'udp': ('port',),

    # 'quic': ('port',),
    # 'ssh': ('port',),
}


def iter_prot_layers(
    multiaddr: str,
) -> Iterator[
    tuple[
        int,
        list[str]
    ]
]:
    '''
    Unpack a libp2p style "multiaddress" into multiple "segments"
    for each "layer" of the protocoll stack (in OSI terms).

    '''
    tokens: list[str] = multiaddr.split('/')
    root, tokens = tokens[0], tokens[1:]
    assert not root  # there is a root '/' on LHS
    itokens = iter(tokens)

    prot: str | None = None
    params: list[str] = []
    for token in itokens:
        # every prot path should start with a known
        # key-str.
        if token in prots:
            if prot is None:
                prot: str = token
            else:
                yield prot, params
                prot = token

            params = []

        elif token not in prots:
            params.append(token)

    else:
        yield prot, params


def parse_maddr(
    multiaddr: str,
) -> dict[str, str | int | dict]:
    '''
    Parse a libp2p style "multiaddress" into its distinct protocol
    segments where each segment is of the form:

        `../<protocol>/<param0>/<param1>/../<paramN>`

    and is loaded into a (order preserving) `layers: dict[str,
    dict[str, Any]` which holds each protocol-layer-segment of the
    original `str` path as a separate entry according to its approx
    OSI "layer number".

    Any `paramN` in the path must be distinctly defined by a str-token in the
    (module global) `prot_params` table.

    For eg. for wireguard which requires an address, port number and publickey
    the protocol params are specified as the entry:

        'wg': ('addr', 'port', 'pubkey'),

    and are thus parsed from a maddr in that order:
        `'/wg/1.1.1.1/51820/<pubkey>'`

    '''
    layers: dict[str, str | int | dict] = {}
    for (
        prot_key,
        params,
    ) in iter_prot_layers(multiaddr):

        layer: int = prots[prot_key]  # OSI layer used for sorting
        ep: dict[str, int | str] = {'layer': layer}
        layers[prot_key] = ep

        # TODO; validation and resolving of names:
        # - each param via a validator provided as part of the
        #   prot_params def? (also see `"port"` case below..)
        # - do a resolv step that will check addrs against
        #   any loaded network.resolv: dict[str, str]
        rparams: list = list(reversed(params))
        for key in prot_params[prot_key]:
            val: str | int = rparams.pop()

            # TODO: UGHH, dunno what we should do for validation
            # here, put it in the params spec somehow?
            if key == 'port':
                val = int(val)

            ep[key] = val

    return layers

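A quick sanity check of the parser above; the address values are illustrative only, and the import path is an assumption (the module's file name is not shown in this diff), but the expected output follows directly from the `prots`/`prot_params` tables and the `int()` cast on `'port'`:

```python
from tractor._multiaddr import parse_maddr  # module path assumed

layers = parse_maddr('/ipv4/127.0.0.1/tcp/8000')
# one entry per protocol segment, keyed by prot name, carrying its
# approximate OSI layer plus the params declared in `prot_params`
assert layers == {
    'ipv4': {'layer': 3, 'addr': '127.0.0.1'},
    'tcp': {'layer': 4, 'port': 8000},
}
```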
@@ -39,37 +39,37 @@ import warnings

   import trio

-  from ..trionics import (
+  from .trionics import (
   maybe_open_nursery,
   collapse_eg,
   )
   from ._state import (
   current_actor,
   )
-  from ..ipc import Channel
+  from .ipc import Channel
-  from ..log import get_logger
+  from .log import get_logger
-  from ..msg import (
+  from .msg import (
   # Error,
   PayloadMsg,
   NamespacePath,
   Return,
   )
-  from .._exceptions import (
+  from ._exceptions import (
   NoResult,
   TransportClosed,
   )
-  from .._context import (
+  from ._context import (
   Context,
   open_context_from_portal,
   )
-  from .._streaming import (
+  from ._streaming import (
   MsgStream,
   )

   if TYPE_CHECKING:
   from ._runtime import Actor

-  log = get_logger()
+  log = get_logger(__name__)

   class Portal:

@@ -329,7 +329,18 @@ class Portal:
   # if we get here some weird cancellation case happened
   return False

-  except TransportClosed as tpt_err:
+  except (
+  # XXX, should never really get raised unless we aren't
+  # wrapping them in the below type by mistake?
+  #
+  # Leaving the catch here for now until we're very sure
+  # all the cases (for various tpt protos) have indeed been
+  # re-wrapped ;p
+  trio.ClosedResourceError,
+  trio.BrokenResourceError,
+
+  TransportClosed,
+  ) as tpt_err:
   ipc_borked_report: str = (
   f'IPC for actor already closed/broken?\n\n'
   f'\n'
tractor/_root.py  (139)

@@ -37,20 +37,19 @@ import warnings

   import trio

-  from .runtime import _runtime
+  from . import _runtime
-  from .discovery._registry import Registrar
   from .devx import (
   debug,
   _frame_stack,
   pformat as _pformat,
   )
-  from .spawn import _spawn
+  from . import _spawn
-  from .runtime import _state
+  from . import _state
   from . import log
   from .ipc import (
   _connect_chan,
   )
-  from .discovery._addr import (
+  from ._addr import (
   Address,
   UnwrappedAddress,
   default_lo_addrs,

@@ -89,8 +88,7 @@ async def maybe_block_bp(
   bp_blocked: bool
   if (
   debug_mode
-  and
-  maybe_enable_greenback
+  and maybe_enable_greenback
   and (
   maybe_mod := await debug.maybe_init_greenback(
   raise_not_found=False,

@@ -145,16 +143,11 @@ async def maybe_block_bp(
   @acm
   async def open_root_actor(
   *,
-  tpt_bind_addrs: list[
-  Address  # `Address.get_random()` case
-  |UnwrappedAddress  # registrar case `= uw_reg_addrs`
-  ]|None = None,
-
   # defaults are above
-  registry_addrs: list[
-  Address
-  |UnwrappedAddress
-  ]|None = None,
+  registry_addrs: list[UnwrappedAddress]|None = None,
+
+  # defaults are above
+  arbiter_addr: tuple[UnwrappedAddress]|None = None,

   enable_transports: list[
   # TODO, this should eventually be the pairs as

@@ -273,7 +266,16 @@ async def open_root_actor(
   if start_method is not None:
   _spawn.try_set_start_method(start_method)

-  # XXX expect pre-unwrapped registrar addrs.
+  # TODO! remove this ASAP!
+  if arbiter_addr is not None:
+  warnings.warn(
+  '`arbiter_addr` is now deprecated\n'
+  'Use `registry_addrs: list[tuple]` instead..',
+  DeprecationWarning,
+  stacklevel=2,
+  )
+  uw_reg_addrs = [arbiter_addr]
+
   uw_reg_addrs = registry_addrs
   if not uw_reg_addrs:
   uw_reg_addrs: list[UnwrappedAddress] = default_lo_addrs(

@@ -286,12 +288,11 @@ async def open_root_actor(
   wrap_address(uw_addr)
   for uw_addr in uw_reg_addrs
   ]
-  loglevel: str = (
+  loglevel = (
   loglevel
-  or
-  log._default_loglevel
-  )
-  loglevel: str = loglevel.upper()
+  or log._default_loglevel
+  ).upper()

   if (
   debug_mode

@@ -322,10 +323,7 @@ async def open_root_actor(
   )

   assert loglevel
-  _log = log.get_console_log(
-  level=loglevel,
-  name='tractor',
-  )
+  _log = log.get_console_log(loglevel)
   assert _log

   # TODO: factor this into `.devx._stackscope`!!

@@ -382,17 +380,10 @@ async def open_root_actor(
   addr,
   )

-  if tpt_bind_addrs is None:
-  tpt_bind_addrs: list[Address] = []
-  else:
-  input_bind_addrs = list(tpt_bind_addrs)
-  tpt_bind_addrs: list[Address] = []
-  for addr in input_bind_addrs:
-  addr: Address = wrap_address(addr)
-  tpt_bind_addrs.append(addr)
+  trans_bind_addrs: list[UnwrappedAddress] = []

-  # ------ NON-REGISTRAR ------
-  # create a new root-actor instance.
+  # Create a new local root-actor instance which IS NOT THE
+  # REGISTRAR
   if ponged_addrs:
   if ensure_registry:
   raise RuntimeError(

@@ -400,7 +391,7 @@ async def open_root_actor(
   'registry socket(s) already bound'
   )

-  # we were able to connect to a registrar
+  # we were able to connect to an arbiter
   logger.info(
   f'Registry(s) seem(s) to exist @ {ponged_addrs}'
   )

@@ -417,26 +408,14 @@ async def open_root_actor(
   # a new NON-registrar, ROOT-actor.
   #
   # XXX INSTEAD, bind random addrs using the same tpt
-  # proto if not already provided.
-  if not tpt_bind_addrs:
+  # proto.
   for addr in ponged_addrs:
-  tpt_bind_addrs.append(
-  # XXX, these are `Address` NOT `UnwrappedAddress`.
-  #
-  # NOTE, in the case of posix/berkley socket
-  # protos we allocate port=0 such that the system
-  # allocates a random value at bind time; this
-  # happens in the `.ipc.*` stack's backend.
+  trans_bind_addrs.append(
   addr.get_random(
   bindspace=addr.bindspace,
   )
   )

-  header: str = '-> Contacting existing registry @ '
-
-  # ------ REGISTRAR ------
-  # create a new "registry providing" root-actor instance.
-  #
   # Start this local actor as the "registrar", aka a regular
   # actor who manages the local registry of "mailboxes" of
   # other process-tree-local sub-actors.

@@ -445,11 +424,7 @@ async def open_root_actor(
   # following init steps are taken:
   # - the tranport layer server is bound to each addr
   #   pair defined in provided registry_addrs, or the default.
-  tpt_bind_addrs = list(set(
-  tpt_bind_addrs
-  +
-  [wrap_address(a) for a in uw_reg_addrs]
-  ))
+  trans_bind_addrs = uw_reg_addrs

   # - it is normally desirable for any registrar to stay up
   #   indefinitely until either all registered (child/sub)

@@ -460,7 +435,8 @@ async def open_root_actor(
   # https://github.com/goodboy/tractor/pull/348
   # https://github.com/goodboy/tractor/issues/296

-  actor = Registrar(
+  # TODO: rename as `RootActor` or is that even necessary?
+  actor = _runtime.Arbiter(
   name=name or 'registrar',
   uuid=mk_uuid(),
   registry_addrs=registry_addrs,

@@ -468,10 +444,19 @@ async def open_root_actor(
   enable_modules=enable_modules,
   )
   # XXX, in case the root actor runtime was actually run from
-  # `tractor.to_asyncio.run_as_asyncio_guest()` and NOT
+  # `tractor.to_asyncio.run_as_asyncio_guest()` and NOt
   # `.trio.run()`.
   actor._infected_aio = _state._runtime_vars['_is_infected_aio']
-  header: str = '-> Opening new registry @ '
+
+  # NOTE, only set the loopback addr for the
+  # process-tree-global "root" mailbox since all sub-actors
+  # should be able to speak to their root actor over that
+  # channel.
+  raddrs: list[Address] = _state._runtime_vars['_root_addrs']
+  raddrs.extend(trans_bind_addrs)
+  # TODO, remove once we have also removed all usage;
+  # eventually all (root-)registry apis should expect > 1 addr.
+  _state._runtime_vars['_root_mailbox'] = raddrs[0]

   # Start up main task set via core actor-runtime nurseries.
   try:

@@ -483,7 +468,7 @@ async def open_root_actor(
   report: str = f'Starting actor-runtime for {actor.aid.reprol()!r}\n'
   if reg_addrs := actor.registry_addrs:
   report += (
-  header
+  '-> Opening new registry @ '
   +
   '\n'.join(
   f'{addr}' for addr in reg_addrs

@@ -509,39 +494,14 @@ async def open_root_actor(
   # "actor runtime" primitives are SC-compat and thus all
   # transitively spawned actors/processes must be as
   # well.
-  accept_addrs: list[UnwrappedAddress]
-  reg_addrs: list[UnwrappedAddress]
-  (
-  accept_addrs,
-  reg_addrs,
-  ) = await root_tn.start(
+  await root_tn.start(
   partial(
   _runtime.async_main,
   actor,
-  accept_addrs=tpt_bind_addrs,
+  accept_addrs=trans_bind_addrs,
   parent_addr=None
   )
   )
-  # NOTE, only set a local-host addr (i.e. like
-  # `lo`-loopback for TCP) for the process-tree-global
-  # "root"-process (its tree-wide "mailbox") since all
-  # sub-actors should be able to speak to their root
-  # actor over that channel.
-  #
-  # ?TODO, per-OS non-network-proto alt options?
-  # -[ ] on linux we should be able to always use UDS?
-  #
-  raddrs: list[UnwrappedAddress] = _state._runtime_vars['_root_addrs']
-  raddrs.extend(
-  accept_addrs,
-  )
-  # TODO, remove once we have also removed all usage;
-  # eventually all (root-)registry apis should expect > 1 addr.
-  _state._runtime_vars['_root_mailbox'] = raddrs[0]
-  # if 'chart' in actor.aid.name:
-  #     from tractor.devx import mk_pdb
-  #     mk_pdb().set_trace()

   try:
   yield actor
   except (

@@ -623,13 +583,6 @@ async def open_root_actor(
   ):
   _state._runtime_vars['_debug_mode'] = False

-  # !XXX, clear ALL prior contact info state, this is MEGA
-  # important if you are opening the runtime multiple times
-  # from the same parent process (like in our test
-  # harness)!
-  _state._runtime_vars['_root_addrs'].clear()
-  _state._runtime_vars['_root_mailbox'] = None

   _state._current_actor = None
   _state._last_actor_terminated = actor

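As the `tractor/_root.py` hunks above show, `arbiter_addr` survives only as a deprecated alias for `registry_addrs` in `open_root_actor()`. A hedged sketch of the preferred spelling; the host/port pair is illustrative only and the exact element type accepted (wrapped vs. unwrapped addresses) differs between the two branches:

```python
import trio
import tractor


async def main() -> None:
    async with tractor.open_root_actor(
        # preferred: a list of registrar/registry addresses
        registry_addrs=[('127.0.0.1', 1616)],
        # deprecated spelling, still accepted but emits a
        # DeprecationWarning per the hunk above:
        # arbiter_addr=('127.0.0.1', 1616),
    ):
        ...  # run your actor tree here


trio.run(main)
```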
@@ -43,11 +43,11 @@ from trio import (
   TaskStatus,
   )

-  from ..ipc import Channel
+  from .ipc import Channel
-  from .._context import (
+  from ._context import (
   Context,
   )
-  from .._exceptions import (
+  from ._exceptions import (
   ContextCancelled,
   RemoteActorError,
   ModuleNotExposed,

@@ -56,19 +56,19 @@ from ._exceptions import (
   pack_error,
   unpack_error,
   )
-  from ..trionics import (
+  from .trionics import (
   collapse_eg,
   is_multi_cancelled,
   maybe_raise_from_masking_exc,
   )
-  from ..devx import (
+  from .devx import (
   debug,
   add_div,
   pformat as _pformat,
   )
   from . import _state
-  from ..log import get_logger
+  from .log import get_logger
-  from ..msg import (
+  from .msg import (
   current_codec,
   MsgCodec,
   PayloadT,

@@ -252,8 +252,8 @@ async def _invoke_non_context(
   ):
   log.warning(
   'Failed to send RPC result?\n'
-  f'|_{func}@{actor.aid.reprol()}() -> {ret_msg}\n\n'
+  f'|_{func}@{actor.uid}() -> {ret_msg}\n\n'
-  f'x=> peer: {chan.aid.reprol()}\n'
+  f'x=> peer: {chan.uid}\n'
   )

   @acm

@@ -284,15 +284,6 @@ async def _errors_relayed_via_ipc(
   try:
   yield  # run RPC invoke body

-  # NOTE, never REPL any pseudo-expected tpt-disconnect.
-  except TransportClosed as err:
-  rpc_err = err
-  log.warning(
-  f'Tpt disconnect during remote-exc relay due to,\n'
-  f'{err!r}\n'
-  )
-  raise err

   # box and ship RPC errors for wire-transit via
   # the task's requesting parent IPC-channel.
   except (

@@ -336,15 +327,10 @@ async def _errors_relayed_via_ipc(
   # recovery logic - the only case is some kind of
   # strange bug in our transport layer itself? Going
   # to keep this open ended for now.
-  if _state.debug_mode():
-  log.exception(
-  f'RPC task crashed!\n'
-  f'Attempting to enter debugger\n'
-  f'\n'
-  f'{ctx}'
+  log.debug(
+  'RPC task crashed, attempting to enter debugger\n'
+  f'|_{ctx}'
   )

   entered_debug = await debug._maybe_enter_pm(
   err,
   api_frame=inspect.currentframe(),

@@ -433,7 +419,7 @@ async def _errors_relayed_via_ipc(
   # cancel scope will not have been inserted yet
   if is_rpc:
   log.warning(
-  'RPC task likely crashed or cancelled before start?\n'
+  'RPC task likely errored or cancelled before start?\n'
   f'|_{ctx._task}\n'
   f' >> {ctx.repr_rpc}\n'
   )

@@ -698,7 +684,7 @@ async def _invoke(
   # which cancels the scope presuming the input error
   # is not a `.cancel_acked` pleaser.
   if rpc_ctx_cs.cancelled_caught:
-  our_uid: tuple = actor.aid.uid
+  our_uid: tuple = actor.uid

   # first check for and raise any remote error
   # before raising any context cancelled case

@@ -730,7 +716,7 @@ async def _invoke(
   # TODO: determine if the ctx peer task was the
   # exact task which cancelled, vs. some other
   # task in the same actor.
-  elif canceller == ctx.chan.aid.uid:
+  elif canceller == ctx.chan.uid:
   explain += f'its {ctx.peer_side!r}-side peer'

   elif canceller == our_uid:

@@ -825,7 +811,7 @@ async def _invoke(
   # associated child isn't in debug any more
   await debug.maybe_wait_for_debugger()
   ctx: Context = actor._contexts.pop((
-  chan.aid.uid,
+  chan.uid,
   cid,
   ))

@@ -876,9 +862,9 @@ async def _invoke(
   )

   logmeth(
-  f'{message}'
+  f'{message}\n'
   f'\n'
-  f'{descr_str}'
+  f'{descr_str}\n'
   )

@@ -914,11 +900,6 @@ async def try_ship_error_to_remote(

   # XXX NOTE XXX in SC terms this is one of the worst things
   # that can happen and provides for a 2-general's dilemma..
-  #
-  # FURHTER, we should never really have to handle these
-  # lowlevel excs from `trio` since the `Channel.send()` layers
-  # downward should be mostly wrapping such cases in a
-  # tpt-closed; the `.critical()` usage is warranted.
   except (
   trio.ClosedResourceError,
   trio.BrokenResourceError,

@@ -927,7 +908,7 @@ async def try_ship_error_to_remote(
   log.critical(
   'IPC transport failure -> '
   f'failed to ship error to {remote_descr}!\n\n'
-  f'{type(msg)!r}[{msg.boxed_type_str}] X=> {channel.aid.uid}\n'
+  f'{type(msg)!r}[{msg.boxed_type_str}] X=> {channel.uid}\n'
   f'\n'
   # TODO: use `.msg.preetty_struct` for this!
   f'{msg}\n'

@@ -1005,7 +986,7 @@ async def process_messages(
   async for msg in chan:
   log.transport(    # type: ignore
   f'IPC msg from peer\n'
-  f'<= {chan.aid.reprol()}\n\n'
+  f'<= {chan.uid}\n\n'

   # TODO: use of the pprinting of structs is
   # FRAGILE and should prolly not be

@@ -1109,7 +1090,7 @@ async def process_messages(
   except BaseException:
   log.exception(
   'Failed to cancel task?\n'
-  f'<= canceller: {chan.aid.reprol()}\n'
+  f'<= canceller: {chan.uid}\n'
   f' |_{chan}\n\n'
   f'=> {actor}\n'
   f' |_cid: {target_cid}\n'

@@ -1264,7 +1245,7 @@ async def process_messages(

   log.transport(
   'Waiting on next IPC msg from\n'
-  f'peer: {chan.aid.reprol()}\n'
+  f'peer: {chan.uid}\n'
   f'|_{chan}\n'
   )

@@ -1313,10 +1294,12 @@ async def process_messages(
   f'peer IPC channel closed abruptly?\n'
   f'\n'
   f'<=x[\n'
-  f'{chan}\n'
+  f' {chan}\n'
+  f' |_{chan.raddr}\n\n'
   )
   +
   tc.message
   )

   # transport **WAS** disconnected

@@ -1339,8 +1322,8 @@ async def process_messages(
   match err:
   case ContextCancelled():
   log.cancel(
-  f'Actor: {actor.aid.reprol()!r} is ctxc with,\n'
+  f'Actor: {actor.uid} was context-cancelled with,\n'
-  f'{str(err)}'
+  f'str(err)'
   )
   case _:
   log.exception("Actor errored:")

@@ -83,47 +83,46 @@ from tractor.msg import (
   pretty_struct,
   types as msgtypes,
   )
-  from ..trionics import (
+  from .trionics import (
   collapse_eg,
   maybe_open_nursery,
   )
-  from ..ipc import (
+  from .ipc import (
   Channel,
   # IPCServer,  # causes cycles atm..
   _server,
   )
-  from ..discovery._addr import (
+  from ._addr import (
   UnwrappedAddress,
   Address,
   # default_lo_addrs,
   get_address_cls,
   wrap_address,
   )
-  from .._context import (
+  from ._context import (
   mk_context,
   Context,
   )
-  from ..log import get_logger
+  from .log import get_logger
-  from .._exceptions import (
+  from ._exceptions import (
   ContextCancelled,
   InternalError,
   ModuleNotExposed,
   MsgTypeError,
   unpack_error,
   )
-  from ..devx import (
+  from .devx import (
   debug,
   pformat as _pformat
   )
-  from ..discovery._api import get_registry
+  from ._discovery import get_registry
   from ._portal import Portal
   from . import _state
-  from ..spawn import _mp_fixup_main
+  from . import _mp_fixup_main
-  from ..spawn._mp_fixup_main import ParentMainData
   from . import _rpc

   if TYPE_CHECKING:
-  from ._supervise import ActorNursery  # noqa
+  from ._supervise import ActorNursery
   from trio._channel import MemoryChannelState

@@ -148,8 +147,6 @@ def get_mod_nsps2fps(mod_ns_paths: list[str]) -> dict[str, str]:
   return nsp2fp

-  _bp = False
-
   class Actor:
   '''
   The fundamental "runtime" concurrency primitive.

@@ -176,29 +173,13 @@ class Actor:
   dialog.

   '''
-  is_registrar: bool = False
+  # ugh, we need to get rid of this and replace with a "registry" sys
+  # https://github.com/goodboy/tractor/issues/216
+  is_arbiter: bool = False

   @property
-  def is_arbiter(self) -> bool:
-  '''
-  Deprecated, use `.is_registrar`.
-
-  '''
-  warnings.warn(
-  '`Actor.is_arbiter` is deprecated.\n'
-  'Use `.is_registrar` instead.',
-  DeprecationWarning,
-  stacklevel=2,
-  )
-  return self.is_registrar
-
-  @property
-  def is_root(self) -> bool:
-  '''
-  This actor is the parent most in the tree?
-
-  '''
-  return _state.is_root_process()
+  def is_registrar(self) -> bool:
+  return self.is_arbiter

   msg_buffer_size: int = 2**6

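On the left-hand branch `Actor.is_registrar` is the primary class flag and `.is_arbiter` is kept only as a warning property; on `main` it is the reverse. Below is a small self-contained sketch of that deprecation-property pattern, written as a standalone demo class rather than the actual `tractor` `Actor` (the class name and message text here are invented for illustration):

```python
import warnings


class _DemoActor:
    # new primary flag, mirroring the left-hand branch's layout
    is_registrar: bool = False

    @property
    def is_arbiter(self) -> bool:
        # the old spelling still resolves, but warns the caller
        warnings.warn(
            '`is_arbiter` is deprecated, use `.is_registrar` instead.',
            DeprecationWarning,
            stacklevel=2,
        )
        return self.is_registrar


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    assert _DemoActor().is_arbiter is False
    assert issubclass(caught[-1].category, DeprecationWarning)
```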
@ -219,7 +200,7 @@ class Actor:
|
||||||
return self._ipc_server
|
return self._ipc_server
|
||||||
|
|
||||||
# Information about `__main__` from parent
|
# Information about `__main__` from parent
|
||||||
_parent_main_data: ParentMainData
|
_parent_main_data: dict[str, str]
|
||||||
_parent_chan_cs: CancelScope|None = None
|
_parent_chan_cs: CancelScope|None = None
|
||||||
_spawn_spec: msgtypes.SpawnSpec|None = None
|
_spawn_spec: msgtypes.SpawnSpec|None = None
|
||||||
|
|
||||||
|
|
@ -241,12 +222,12 @@ class Actor:
|
||||||
name: str,
|
name: str,
|
||||||
uuid: str,
|
uuid: str,
|
||||||
*,
|
*,
|
||||||
enable_modules: list[str] | None = None,
|
enable_modules: list[str] = [],
|
||||||
loglevel: str|None = None,
|
loglevel: str|None = None,
|
||||||
registry_addrs: list[Address]|None = None,
|
registry_addrs: list[Address]|None = None,
|
||||||
spawn_method: str|None = None,
|
spawn_method: str|None = None,
|
||||||
inherit_parent_main: bool = True,
|
|
||||||
|
|
||||||
|
# TODO: remove!
|
||||||
arbiter_addr: UnwrappedAddress|None = None,
|
arbiter_addr: UnwrappedAddress|None = None,
|
||||||
|
|
||||||
) -> None:
|
) -> None:
|
||||||
|
|
@ -267,15 +248,12 @@ class Actor:
|
||||||
self._cancel_called_by: tuple[str, tuple]|None = None
|
self._cancel_called_by: tuple[str, tuple]|None = None
|
||||||
self._cancel_called: bool = False
|
self._cancel_called: bool = False
|
||||||
|
|
||||||
# retrieve and store parent `__main__` data which
|
# retreive and store parent `__main__` data which
|
||||||
# will be passed to children
|
# will be passed to children
|
||||||
self._parent_main_data: ParentMainData = _mp_fixup_main._mp_figure_out_main(
|
self._parent_main_data = _mp_fixup_main._mp_figure_out_main()
|
||||||
inherit_parent_main=inherit_parent_main,
|
|
||||||
)
|
|
||||||
|
|
||||||
# TODO? only add this when `is_debug_mode() == True` no?
|
# TODO? only add this when `is_debug_mode() == True` no?
|
||||||
# always include debugging tools module
|
# always include debugging tools module
|
||||||
enable_modules = list(enable_modules or [])
|
|
||||||
if _state.is_root_process():
|
if _state.is_root_process():
|
||||||
enable_modules.append('tractor.devx.debug._tty_lock')
|
enable_modules.append('tractor.devx.debug._tty_lock')
|
||||||
|
|
||||||
|
|
@ -294,13 +272,11 @@ class Actor:
|
||||||
stacklevel=2,
|
stacklevel=2,
|
||||||
)
|
)
|
||||||
|
|
||||||
registry_addrs: list[Address] = [
|
registry_addrs: list[Address] = [wrap_address(arbiter_addr)]
|
||||||
wrap_address(arbiter_addr)
|
|
||||||
]
|
|
||||||
|
|
||||||
# marked by the process spawning backend at startup
|
# marked by the process spawning backend at startup
|
||||||
# will be None for the parent most process started
|
# will be None for the parent most process started manually
|
||||||
# manually by the user (the "registrar")
|
# by the user (currently called the "arbiter")
|
||||||
self._spawn_method: str = spawn_method
|
self._spawn_method: str = spawn_method
|
||||||
|
|
||||||
# RPC state
|
# RPC state
|
||||||
|
|
@ -552,15 +528,11 @@ class Actor:
|
||||||
|
|
||||||
'''
|
'''
|
||||||
try:
|
try:
|
||||||
if (
|
if self._spawn_method == 'trio':
|
||||||
self._spawn_method == 'trio'
|
parent_data = self._parent_main_data
|
||||||
and
|
|
||||||
(parent_data := self._parent_main_data)
|
|
||||||
):
|
|
||||||
if 'init_main_from_name' in parent_data:
|
if 'init_main_from_name' in parent_data:
|
||||||
_mp_fixup_main._fixup_main_from_name(
|
_mp_fixup_main._fixup_main_from_name(
|
||||||
parent_data['init_main_from_name'])
|
parent_data['init_main_from_name'])
|
||||||
|
|
||||||
elif 'init_main_from_path' in parent_data:
|
elif 'init_main_from_path' in parent_data:
|
||||||
_mp_fixup_main._fixup_main_from_path(
|
_mp_fixup_main._fixup_main_from_path(
|
||||||
parent_data['init_main_from_path'])
|
parent_data['init_main_from_path'])
|
||||||
|
|
@ -707,7 +679,7 @@ class Actor:
|
||||||
|
|
||||||
'''
|
'''
|
||||||
# ?TODO, use Aid here as well?
|
# ?TODO, use Aid here as well?
|
||||||
actor_uid = chan.aid.uid
|
actor_uid = chan.uid
|
||||||
assert actor_uid
|
assert actor_uid
|
||||||
try:
|
try:
|
||||||
ctx = self._contexts[(
|
ctx = self._contexts[(
|
||||||
|
|
@ -717,7 +689,7 @@ class Actor:
|
||||||
)]
|
)]
|
||||||
log.debug(
|
log.debug(
|
||||||
f'Retreived cached IPC ctx for\n'
|
f'Retreived cached IPC ctx for\n'
|
||||||
f'peer: {chan.aid.uid}\n'
|
f'peer: {chan.uid}\n'
|
||||||
f'cid:{cid}\n'
|
f'cid:{cid}\n'
|
||||||
)
|
)
|
||||||
ctx._allow_overruns: bool = allow_overruns
|
ctx._allow_overruns: bool = allow_overruns
|
||||||
|
|
@ -734,7 +706,7 @@ class Actor:
|
||||||
except KeyError:
|
except KeyError:
|
||||||
log.debug(
|
log.debug(
|
||||||
f'Allocate new IPC ctx for\n'
|
f'Allocate new IPC ctx for\n'
|
||||||
f'peer: {chan.aid.uid}\n'
|
f'peer: {chan.uid}\n'
|
||||||
f'cid: {cid}\n'
|
f'cid: {cid}\n'
|
||||||
)
|
)
|
||||||
ctx = mk_context(
|
ctx = mk_context(
|
||||||
|
|
@ -780,7 +752,7 @@ class Actor:
|
||||||
|
|
||||||
'''
|
'''
|
||||||
cid: str = str(uuid.uuid4())
|
cid: str = str(uuid.uuid4())
|
||||||
assert chan.aid.uid
|
assert chan.uid
|
||||||
ctx = self.get_context(
|
ctx = self.get_context(
|
||||||
chan=chan,
|
chan=chan,
|
||||||
cid=cid,
|
cid=cid,
|
||||||
|
|
@ -807,12 +779,12 @@ class Actor:
|
||||||
ns=ns,
|
ns=ns,
|
||||||
func=func,
|
func=func,
|
||||||
kwargs=kwargs,
|
kwargs=kwargs,
|
||||||
uid=self.aid.uid, # <- !TODO use .aid!
|
uid=self.uid,
|
||||||
cid=cid,
|
cid=cid,
|
||||||
)
|
)
|
||||||
log.runtime(
|
log.runtime(
|
||||||
'Sending RPC `Start`\n\n'
|
'Sending RPC `Start`\n\n'
|
||||||
f'=> peer: {chan.aid.uid}\n'
|
f'=> peer: {chan.uid}\n'
|
||||||
f' |_ {ns}.{func}({kwargs})\n\n'
|
f' |_ {ns}.{func}({kwargs})\n\n'
|
||||||
|
|
||||||
f'{pretty_struct.pformat(msg)}'
|
f'{pretty_struct.pformat(msg)}'
|
||||||
|
|
@ -923,7 +895,7 @@ class Actor:
|
||||||
# TODO! -[ ] another `Struct` for rtvs..
|
# TODO! -[ ] another `Struct` for rtvs..
|
||||||
rvs: dict[str, Any] = spawnspec._runtime_vars
|
rvs: dict[str, Any] = spawnspec._runtime_vars
|
||||||
if rvs['_debug_mode']:
|
if rvs['_debug_mode']:
|
||||||
from ..devx import (
|
from .devx import (
|
||||||
enable_stack_on_sig,
|
enable_stack_on_sig,
|
||||||
maybe_init_greenback,
|
maybe_init_greenback,
|
||||||
)
|
)
|
||||||
|
|
@@ -987,21 +959,6 @@ class Actor:

 rvs['_is_root'] = False # obvi XD

-# TODO, remove! left in just while protoing init fix!
-# global _bp
-# if (
-# 'chart' in self.aid.name
-# and
-# isinstance(
-# rvs['_root_addrs'][0],
-# dict,
-# )
-# and
-# not _bp
-# ):
-# _bp = True
-# breakpoint()
-
 _state._runtime_vars.update(rvs)

 # `SpawnSpec.reg_addrs`
@@ -1260,7 +1217,7 @@ class Actor:
 'Cancel request for invalid RPC task.\n'
 'The task likely already completed or was never started!\n\n'
 f'<= canceller: {requesting_aid}\n'
-f'=> {cid}@{parent_chan.aid.uid}\n'
+f'=> {cid}@{parent_chan.uid}\n'
 f' |_{parent_chan}\n'
 )
 return True
@@ -1379,7 +1336,7 @@ class Actor:
 # - `Channel.maddr() -> str:` obvi!
 # - `Context.maddr() -> str:`
 tasks_str += (
-    f' |_@ /ip4/tcp/cid="{ctx.cid[-16:]} .."\n'
+    f' |_@ /ipv4/tcp/cid="{ctx.cid[-16:]} .."\n'
     f' |>> {ctx._nsf}() -> dict:\n'
 )

@@ -1397,7 +1354,7 @@ class Actor:
 f'Cancelling {descr} RPC tasks\n\n'
 f'<=c) {req_aid} [canceller]\n'
 f'{rent_chan_repr}'
-f'c)=> {self.aid.uid} [cancellee]\n'
+f'c)=> {self.uid} [cancellee]\n'
 f' |_{self} [with {len(tasks)} tasks]\n'
 # f' |_tasks: {len(tasks)}\n'
 # f'{tasks_str}'
@@ -1498,12 +1455,7 @@ async def async_main(
 # be False when running as root actor and True when as
 # a subactor.
 parent_addr: UnwrappedAddress|None = None,
-task_status: TaskStatus[
-    tuple[
-        list[UnwrappedAddress], # accept_addrs
-        list[UnwrappedAddress], # reg_addrs
-    ]
-] = trio.TASK_STATUS_IGNORED,
+task_status: TaskStatus[None] = trio.TASK_STATUS_IGNORED,

 ) -> None:
 '''
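Note: the hunk above narrows the `task_status` annotation since `async_main()` no longer reports its accept/registry addrs back through `task_status.started(...)` (see the `-1711,18` hunk further below). For reference, a minimal sketch of the `trio` startup hand-off this rests on; the task name and port value are illustrative only, not taken from the diff:

```python
import trio


async def serve(
    task_status: trio.TaskStatus[int] = trio.TASK_STATUS_IGNORED,
) -> None:
    # pretend-setup the caller must wait on (illustrative)
    port: int = 8080
    # unblocks the `nursery.start()` caller and hands it `port`
    task_status.started(port)
    await trio.sleep_forever()


async def main() -> None:
    async with trio.open_nursery() as tn:
        # `.start()` returns whatever was passed to `.started()`
        port = await tn.start(serve)
        print(f'server up on {port}')
        tn.cancel_scope.cancel()


trio.run(main)
```

Whatever value goes into `task_status.started()` is what `nursery.start()` returns, so dropping the addr tuple from `.started()` also changes the caller side of `async_main()`.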
@@ -1564,15 +1516,7 @@ async def async_main(
 addr: Address = transport_cls.get_random()
 accept_addrs.append(addr.unwrap())

-# XXX, either passed in by caller or delivered
-# in post spawn-spec handshake for subs.
-if not accept_addrs:
-    raise RuntimeError(
-        f'No tpt bind addresses provided to actor!?\n'
-        f'parent_addr={parent_addr!r}\n'
-        f'accept_addrs={accept_addrs!r}\n'
-        f'enable_transports={enable_transports!r}\n'
-    )
+assert accept_addrs

 ya_root_tn: bool = bool(actor._root_tn)
 ya_service_tn: bool = bool(actor._service_tn)
@@ -1680,7 +1624,7 @@ async def async_main(
 # TODO, just read direct from ipc_server?
 accept_addrs: list[UnwrappedAddress] = actor.accept_addrs

-# Register with the registrar if we're told its addr
+# Register with the arbiter if we're told its addr
 log.runtime(
     f'Registering `{actor.name}` => {pformat(accept_addrs)}\n'
     # ^-TODO-^ we should instead show the maddr here^^
@@ -1690,7 +1634,6 @@ async def async_main(
 # if addresses point to the same actor..
 # So we need a way to detect that? maybe iterate
 # only on unique actor uids?
-addr: UnwrappedAddress
 for addr in actor.reg_addrs:
     try:
         waddr = wrap_address(addr)
@@ -1699,9 +1642,7 @@ async def async_main(
 await debug.pause()

 # !TODO, get rid of the local-portal crap XD
-reg_portal: Portal
 async with get_registry(addr) as reg_portal:
-    accept_addr: UnwrappedAddress
     for accept_addr in accept_addrs:
         accept_addr = wrap_address(accept_addr)
@@ -1711,18 +1652,14 @@ async def async_main(
 await reg_portal.run_from_ns(
     'self',
     'register_actor',
-    uid=actor.aid.uid,
+    uid=actor.uid,
     addr=accept_addr.unwrap(),
 )

 is_registered: bool = True

-# init steps complete, deliver IPC-server and
-# registrar addrs back to caller.
-task_status.started((
-    accept_addrs,
-    actor.reg_addrs,
-))
+# init steps complete
+task_status.started()

 # Begin handling our new connection back to our
 # parent. This is done last since we don't want to
@@ -1782,11 +1719,9 @@ async def async_main(
 # always!
 match internal_err:
     case ContextCancelled():
-        reprol: str = actor.aid.reprol()
         log.cancel(
-            f'Actor {reprol!r} was task-ctx-cancelled with,\n'
-            f'\n'
-            f'{internal_err!r}'
+            f'Actor: {actor.uid} was task-context-cancelled with,\n'
+            f'str(internal_err)'
         )
     case _:
         log.exception(
@@ -1845,41 +1780,7 @@ async def async_main(
 and
 not actor.is_registrar
 ):
-    failed_unreg: bool = False
-    rent_chan: Channel|None = actor._parent_chan
-
-    # XXX, detect whether the parent IS the registrar
-    # so we can FALL BACK to `_parent_chan` when a new
-    # connection attempt fails (e.g. UDS transport
-    # `os.unlink()`s the socket file during teardown).
-    #
-    # IMPORTANT: we do NOT eagerly reuse `_parent_chan`
-    # because it may still be carrying context/stream
-    # teardown protocol traffic — sending an
-    # `unregister_actor` RPC over it concurrently
-    # causes protocol-level conflicts. Instead we try
-    # a fresh `get_registry()` connection first and
-    # only fall back to the parent channel on failure.
-    #
-    # See `ipc._uds.close_listener()` for details on
-    # the UDS socket-file lifecycle.
-    parent_is_reg: bool = False
-    if (
-        rent_chan is not None
-        and
-        rent_chan.connected()
-    ):
-        pchan_raddr: Address|None = rent_chan.raddr
-        if pchan_raddr is not None:
-            for reg_addr in actor.reg_addrs:
-                if (
-                    pchan_raddr.unwrap()
-                    ==
-                    tuple(reg_addr)
-                ):
-                    parent_is_reg = True
-                    break
-
+    failed: bool = False
     for addr in actor.reg_addrs:
         waddr = wrap_address(addr)
         assert waddr.is_valid
@@ -1892,41 +1793,17 @@ async def async_main(
 await reg_portal.run_from_ns(
     'self',
     'unregister_actor',
-    uid=actor.aid.uid,
+    uid=actor.uid
 )
 except OSError:
-    # Connection to registrar failed
-    # (listener socket likely already
-    # closed/unlinked). Fall back to
-    # parent channel if parent IS the
-    # registrar.
-    if (
-        parent_is_reg
-        and
-        rent_chan.connected()
-    ):
-        try:
-            reg_portal = Portal(rent_chan)
-            await reg_portal.run_from_ns(
-                'self',
-                'unregister_actor',
-                uid=actor.aid.uid,
-            )
-        except (
-            OSError,
-            trio.ClosedResourceError,
-        ):
-            failed_unreg = True
-    else:
-        failed_unreg = True
+    failed = True

 if cs.cancelled_caught:
-    failed_unreg = True
+    failed = True

-if failed_unreg:
+if failed:
     teardown_report += (
         f'-> Failed to unregister {actor.name} from '
-        f'registrar @ {addr}\n'
+        f'registar @ {addr}\n'
     )

 # Ensure all peers (actors connected to us as clients) are finished
@ -1962,8 +1839,153 @@ async def async_main(
|
||||||
log.runtime(teardown_report)
|
log.runtime(teardown_report)
|
||||||
|
|
||||||
|
|
||||||
# Backward compat: class moved to discovery._registry
|
# TODO: rename to `Registry` and move to `.discovery._registry`!
|
||||||
from ..discovery._registry import (
|
class Arbiter(Actor):
|
||||||
Registrar as Registrar,
|
'''
|
||||||
)
|
A special registrar (and for now..) `Actor` who can contact all
|
||||||
Arbiter = Registrar
|
other actors within its immediate process tree and possibly keeps
|
||||||
|
a registry of others meant to be discoverable in a distributed
|
||||||
|
application. Normally the registrar is also the "root actor" and
|
||||||
|
thus always has access to the top-most-level actor (process)
|
||||||
|
nursery.
|
||||||
|
|
||||||
|
By default, the registrar is always initialized when and if no
|
||||||
|
other registrar socket addrs have been specified to runtime
|
||||||
|
init entry-points (such as `open_root_actor()` or
|
||||||
|
`open_nursery()`). Any time a new main process is launched (and
|
||||||
|
thus thus a new root actor created) and, no existing registrar
|
||||||
|
can be contacted at the provided `registry_addr`, then a new
|
||||||
|
one is always created; however, if one can be reached it is
|
||||||
|
used.
|
||||||
|
|
||||||
|
Normally a distributed app requires at least registrar per
|
||||||
|
logical host where for that given "host space" (aka localhost
|
||||||
|
IPC domain of addresses) it is responsible for making all other
|
||||||
|
host (local address) bound actors *discoverable* to external
|
||||||
|
actor trees running on remote hosts.
|
||||||
|
|
||||||
|
'''
|
||||||
|
is_arbiter = True
|
||||||
|
|
||||||
|
# TODO, implement this as a read on there existing a `._state` of
|
||||||
|
# some sort setup by whenever we impl this all as
|
||||||
|
# a `.discovery._registry.open_registry()` API
|
||||||
|
def is_registry(self) -> bool:
|
||||||
|
return self.is_arbiter
|
||||||
|
|
||||||
|
def __init__(
|
||||||
|
self,
|
||||||
|
*args,
|
||||||
|
**kwargs,
|
||||||
|
) -> None:
|
||||||
|
|
||||||
|
self._registry: dict[
|
||||||
|
tuple[str, str],
|
||||||
|
UnwrappedAddress,
|
||||||
|
] = {}
|
||||||
|
self._waiters: dict[
|
||||||
|
str,
|
||||||
|
# either an event to sync to receiving an actor uid (which
|
||||||
|
# is filled in once the actor has sucessfully registered),
|
||||||
|
# or that uid after registry is complete.
|
||||||
|
list[trio.Event | tuple[str, str]]
|
||||||
|
] = {}
|
||||||
|
|
||||||
|
super().__init__(*args, **kwargs)
|
||||||
|
|
||||||
|
async def find_actor(
|
||||||
|
self,
|
||||||
|
name: str,
|
||||||
|
|
||||||
|
) -> UnwrappedAddress|None:
|
||||||
|
|
||||||
|
for uid, addr in self._registry.items():
|
||||||
|
if name in uid:
|
||||||
|
return addr
|
||||||
|
|
||||||
|
return None
|
||||||
|
|
||||||
|
async def get_registry(
|
||||||
|
self
|
||||||
|
|
||||||
|
) -> dict[str, UnwrappedAddress]:
|
||||||
|
'''
|
||||||
|
Return current name registry.
|
||||||
|
|
||||||
|
This method is async to allow for cross-actor invocation.
|
||||||
|
|
||||||
|
'''
|
||||||
|
# NOTE: requires ``strict_map_key=False`` to the msgpack
|
||||||
|
# unpacker since we have tuples as keys (not this makes the
|
||||||
|
# arbiter suscetible to hashdos):
|
||||||
|
# https://github.com/msgpack/msgpack-python#major-breaking-changes-in-msgpack-10
|
||||||
|
return {
|
||||||
|
'.'.join(key): val
|
||||||
|
for key, val in self._registry.items()
|
||||||
|
}
|
||||||
|
|
||||||
|
async def wait_for_actor(
|
||||||
|
self,
|
||||||
|
name: str,
|
||||||
|
|
||||||
|
) -> list[UnwrappedAddress]:
|
||||||
|
'''
|
||||||
|
Wait for a particular actor to register.
|
||||||
|
|
||||||
|
This is a blocking call if no actor by the provided name is currently
|
||||||
|
registered.
|
||||||
|
|
||||||
|
'''
|
||||||
|
addrs: list[UnwrappedAddress] = []
|
||||||
|
addr: UnwrappedAddress
|
||||||
|
|
||||||
|
mailbox_info: str = 'Actor registry contact infos:\n'
|
||||||
|
for uid, addr in self._registry.items():
|
||||||
|
mailbox_info += (
|
||||||
|
f'|_uid: {uid}\n'
|
||||||
|
f'|_addr: {addr}\n\n'
|
||||||
|
)
|
||||||
|
if name == uid[0]:
|
||||||
|
addrs.append(addr)
|
||||||
|
|
||||||
|
if not addrs:
|
||||||
|
waiter = trio.Event()
|
||||||
|
self._waiters.setdefault(name, []).append(waiter)
|
||||||
|
await waiter.wait()
|
||||||
|
|
||||||
|
for uid in self._waiters[name]:
|
||||||
|
if not isinstance(uid, trio.Event):
|
||||||
|
addrs.append(self._registry[uid])
|
||||||
|
|
||||||
|
log.runtime(mailbox_info)
|
||||||
|
return addrs
|
||||||
|
|
||||||
|
async def register_actor(
|
||||||
|
self,
|
||||||
|
uid: tuple[str, str],
|
||||||
|
addr: UnwrappedAddress
|
||||||
|
) -> None:
|
||||||
|
uid = name, hash = (str(uid[0]), str(uid[1]))
|
||||||
|
waddr: Address = wrap_address(addr)
|
||||||
|
if not waddr.is_valid:
|
||||||
|
# should never be 0-dynamic-os-alloc
|
||||||
|
await debug.pause()
|
||||||
|
|
||||||
|
self._registry[uid] = addr
|
||||||
|
|
||||||
|
# pop and signal all waiter events
|
||||||
|
events = self._waiters.pop(name, [])
|
||||||
|
self._waiters.setdefault(name, []).append(uid)
|
||||||
|
for event in events:
|
||||||
|
if isinstance(event, trio.Event):
|
||||||
|
event.set()
|
||||||
|
|
||||||
|
async def unregister_actor(
|
||||||
|
self,
|
||||||
|
uid: tuple[str, str]
|
||||||
|
|
||||||
|
) -> None:
|
||||||
|
uid = (str(uid[0]), str(uid[1]))
|
||||||
|
entry: tuple = self._registry.pop(uid, None)
|
||||||
|
if entry is None:
|
||||||
|
log.warning(f'Request to de-register {uid} failed?')
|
||||||
|
|
@ -0,0 +1,843 @@
|
||||||
|
# tractor: structured concurrent "actors".
|
||||||
|
# Copyright 2018-eternity Tyler Goodlet.
|
||||||
|
|
||||||
|
# This program is free software: you can redistribute it and/or modify
|
||||||
|
# it under the terms of the GNU Affero General Public License as published by
|
||||||
|
# the Free Software Foundation, either version 3 of the License, or
|
||||||
|
# (at your option) any later version.
|
||||||
|
|
||||||
|
# This program is distributed in the hope that it will be useful,
|
||||||
|
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||||
|
# GNU Affero General Public License for more details.
|
||||||
|
|
||||||
|
# You should have received a copy of the GNU Affero General Public License
|
||||||
|
# along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||||
|
|
||||||
|
"""
|
||||||
|
Machinery for actor process spawning using multiple backends.
|
||||||
|
|
||||||
|
"""
|
||||||
|
from __future__ import annotations
|
||||||
|
import multiprocessing as mp
|
||||||
|
import sys
|
||||||
|
import platform
|
||||||
|
from typing import (
|
||||||
|
Any,
|
||||||
|
Awaitable,
|
||||||
|
Literal,
|
||||||
|
Callable,
|
||||||
|
TypeVar,
|
||||||
|
TYPE_CHECKING,
|
||||||
|
)
|
||||||
|
|
||||||
|
import trio
|
||||||
|
from trio import TaskStatus
|
||||||
|
|
||||||
|
from .devx import (
|
||||||
|
debug,
|
||||||
|
pformat as _pformat
|
||||||
|
)
|
||||||
|
from tractor._state import (
|
||||||
|
current_actor,
|
||||||
|
is_main_process,
|
||||||
|
is_root_process,
|
||||||
|
debug_mode,
|
||||||
|
_runtime_vars,
|
||||||
|
)
|
||||||
|
from tractor.log import get_logger
|
||||||
|
from tractor._addr import UnwrappedAddress
|
||||||
|
from tractor._portal import Portal
|
||||||
|
from tractor._runtime import Actor
|
||||||
|
from tractor._entry import _mp_main
|
||||||
|
from tractor._exceptions import ActorFailure
|
||||||
|
from tractor.msg import (
|
||||||
|
types as msgtypes,
|
||||||
|
pretty_struct,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
if TYPE_CHECKING:
|
||||||
|
from ipc import (
|
||||||
|
_server,
|
||||||
|
Channel,
|
||||||
|
)
|
||||||
|
from ._supervise import ActorNursery
|
||||||
|
ProcessType = TypeVar('ProcessType', mp.Process, trio.Process)
|
||||||
|
|
||||||
|
|
||||||
|
log = get_logger('tractor')
|
||||||
|
|
||||||
|
# placeholder for an mp start context if so using that backend
|
||||||
|
_ctx: mp.context.BaseContext | None = None
|
||||||
|
SpawnMethodKey = Literal[
|
||||||
|
'trio', # supported on all platforms
|
||||||
|
'mp_spawn',
|
||||||
|
'mp_forkserver', # posix only
|
||||||
|
]
|
||||||
|
_spawn_method: SpawnMethodKey = 'trio'
|
||||||
|
|
||||||
|
|
||||||
|
if platform.system() == 'Windows':
|
||||||
|
|
||||||
|
_ctx = mp.get_context("spawn")
|
||||||
|
|
||||||
|
async def proc_waiter(proc: mp.Process) -> None:
|
||||||
|
await trio.lowlevel.WaitForSingleObject(proc.sentinel)
|
||||||
|
else:
|
||||||
|
# *NIX systems use ``trio`` primitives as our default as well
|
||||||
|
|
||||||
|
async def proc_waiter(proc: mp.Process) -> None:
|
||||||
|
await trio.lowlevel.wait_readable(proc.sentinel)
|
||||||
|
|
||||||
|
|
||||||
|
def try_set_start_method(
|
||||||
|
key: SpawnMethodKey
|
||||||
|
|
||||||
|
) -> mp.context.BaseContext | None:
|
||||||
|
'''
|
||||||
|
Attempt to set the method for process starting, aka the "actor
|
||||||
|
spawning backend".
|
||||||
|
|
||||||
|
If the desired method is not supported this function will error.
|
||||||
|
On Windows only the ``multiprocessing`` "spawn" method is offered
|
||||||
|
besides the default ``trio`` which uses async wrapping around
|
||||||
|
``subprocess.Popen``.
|
||||||
|
|
||||||
|
'''
|
||||||
|
import multiprocessing as mp
|
||||||
|
global _ctx
|
||||||
|
global _spawn_method
|
||||||
|
|
||||||
|
mp_methods = mp.get_all_start_methods()
|
||||||
|
if 'fork' in mp_methods:
|
||||||
|
# forking is incompatible with ``trio``s global task tree
|
||||||
|
mp_methods.remove('fork')
|
||||||
|
|
||||||
|
match key:
|
||||||
|
case 'mp_forkserver':
|
||||||
|
from . import _forkserver_override
|
||||||
|
_forkserver_override.override_stdlib()
|
||||||
|
_ctx = mp.get_context('forkserver')
|
||||||
|
|
||||||
|
case 'mp_spawn':
|
||||||
|
_ctx = mp.get_context('spawn')
|
||||||
|
|
||||||
|
case 'trio':
|
||||||
|
_ctx = None
|
||||||
|
|
||||||
|
case _:
|
||||||
|
raise ValueError(
|
||||||
|
f'Spawn method `{key}` is invalid!\n'
|
||||||
|
f'Please choose one of {SpawnMethodKey}'
|
||||||
|
)
|
||||||
|
|
||||||
|
_spawn_method = key
|
||||||
|
return _ctx
|
||||||
|
|
||||||
|
|
||||||
|
async def exhaust_portal(
|
||||||
|
|
||||||
|
portal: Portal,
|
||||||
|
actor: Actor
|
||||||
|
|
||||||
|
) -> Any:
|
||||||
|
'''
|
||||||
|
Pull final result from portal (assuming it has one).
|
||||||
|
|
||||||
|
If the main task is an async generator do our best to consume
|
||||||
|
what's left of it.
|
||||||
|
'''
|
||||||
|
__tracebackhide__ = True
|
||||||
|
try:
|
||||||
|
log.debug(
|
||||||
|
f'Waiting on final result from {actor.uid}'
|
||||||
|
)
|
||||||
|
|
||||||
|
# XXX: streams should never be reaped here since they should
|
||||||
|
# always be established and shutdown using a context manager api
|
||||||
|
final: Any = await portal.wait_for_result()
|
||||||
|
|
||||||
|
except (
|
||||||
|
Exception,
|
||||||
|
BaseExceptionGroup,
|
||||||
|
) as err:
|
||||||
|
# we reraise in the parent task via a ``BaseExceptionGroup``
|
||||||
|
return err
|
||||||
|
|
||||||
|
except trio.Cancelled as err:
|
||||||
|
# lol, of course we need this too ;P
|
||||||
|
# TODO: merge with above?
|
||||||
|
log.warning(
|
||||||
|
'Cancelled portal result waiter task:\n'
|
||||||
|
f'uid: {portal.channel.aid}\n'
|
||||||
|
f'error: {err}\n'
|
||||||
|
)
|
||||||
|
return err
|
||||||
|
|
||||||
|
else:
|
||||||
|
log.debug(
|
||||||
|
f'Returning final result from portal:\n'
|
||||||
|
f'uid: {portal.channel.aid}\n'
|
||||||
|
f'result: {final}\n'
|
||||||
|
)
|
||||||
|
return final
|
||||||
|
|
||||||
|
|
||||||
|
async def cancel_on_completion(
|
||||||
|
|
||||||
|
portal: Portal,
|
||||||
|
actor: Actor,
|
||||||
|
errors: dict[tuple[str, str], Exception],
|
||||||
|
|
||||||
|
) -> None:
|
||||||
|
'''
|
||||||
|
Cancel actor gracefully once its "main" portal's
|
||||||
|
result arrives.
|
||||||
|
|
||||||
|
Should only be called for actors spawned via the
|
||||||
|
`Portal.run_in_actor()` API.
|
||||||
|
|
||||||
|
=> and really this API will be deprecated and should be
|
||||||
|
re-implemented as a `.hilevel.one_shot_task_nursery()`..)
|
||||||
|
|
||||||
|
'''
|
||||||
|
# if this call errors we store the exception for later
|
||||||
|
# in ``errors`` which will be reraised inside
|
||||||
|
# an exception group and we still send out a cancel request
|
||||||
|
result: Any|Exception = await exhaust_portal(
|
||||||
|
portal,
|
||||||
|
actor,
|
||||||
|
)
|
||||||
|
if isinstance(result, Exception):
|
||||||
|
errors[actor.uid]: Exception = result
|
||||||
|
log.cancel(
|
||||||
|
'Cancelling subactor runtime due to error:\n\n'
|
||||||
|
f'Portal.cancel_actor() => {portal.channel.uid}\n\n'
|
||||||
|
f'error: {result}\n'
|
||||||
|
)
|
||||||
|
|
||||||
|
else:
|
||||||
|
log.runtime(
|
||||||
|
'Cancelling subactor gracefully:\n\n'
|
||||||
|
f'Portal.cancel_actor() => {portal.channel.uid}\n\n'
|
||||||
|
f'result: {result}\n'
|
||||||
|
)
|
||||||
|
|
||||||
|
# cancel the process now that we have a final result
|
||||||
|
await portal.cancel_actor()
|
||||||
|
|
||||||
|
|
||||||
|
async def hard_kill(
|
||||||
|
proc: trio.Process,
|
||||||
|
|
||||||
|
terminate_after: int = 1.6,
|
||||||
|
# NOTE: for mucking with `.pause()`-ing inside the runtime
|
||||||
|
# whilst also hacking on it XD
|
||||||
|
# terminate_after: int = 99999,
|
||||||
|
|
||||||
|
) -> None:
|
||||||
|
'''
|
||||||
|
Un-gracefully terminate an OS level `trio.Process` after timeout.
|
||||||
|
|
||||||
|
Used in 2 main cases:
|
||||||
|
|
||||||
|
- "unknown remote runtime state": a hanging/stalled actor that
|
||||||
|
isn't responding after sending a (graceful) runtime cancel
|
||||||
|
request via an IPC msg.
|
||||||
|
- "cancelled during spawn": a process who's actor runtime was
|
||||||
|
cancelled before full startup completed (such that
|
||||||
|
cancel-request-handling machinery was never fully
|
||||||
|
initialized) and thus a "cancel request msg" is never going
|
||||||
|
to be handled.
|
||||||
|
|
||||||
|
'''
|
||||||
|
log.cancel(
|
||||||
|
'Terminating sub-proc\n'
|
||||||
|
f'>x)\n'
|
||||||
|
f' |_{proc}\n'
|
||||||
|
)
|
||||||
|
# NOTE: this timeout used to do nothing since we were shielding
|
||||||
|
# the ``.wait()`` inside ``new_proc()`` which will pretty much
|
||||||
|
# never release until the process exits, now it acts as
|
||||||
|
# a hard-kill time ultimatum.
|
||||||
|
with trio.move_on_after(terminate_after) as cs:
|
||||||
|
|
||||||
|
# NOTE: code below was copied verbatim from the now deprecated
|
||||||
|
# (in 0.20.0) ``trio._subrocess.Process.aclose()``, orig doc
|
||||||
|
# string:
|
||||||
|
#
|
||||||
|
# Close any pipes we have to the process (both input and output)
|
||||||
|
# and wait for it to exit. If cancelled, kills the process and
|
||||||
|
# waits for it to finish exiting before propagating the
|
||||||
|
# cancellation.
|
||||||
|
#
|
||||||
|
# This code was originally triggred by ``proc.__aexit__()``
|
||||||
|
# but now must be called manually.
|
||||||
|
with trio.CancelScope(shield=True):
|
||||||
|
if proc.stdin is not None:
|
||||||
|
await proc.stdin.aclose()
|
||||||
|
if proc.stdout is not None:
|
||||||
|
await proc.stdout.aclose()
|
||||||
|
if proc.stderr is not None:
|
||||||
|
await proc.stderr.aclose()
|
||||||
|
try:
|
||||||
|
await proc.wait()
|
||||||
|
finally:
|
||||||
|
if proc.returncode is None:
|
||||||
|
proc.kill()
|
||||||
|
with trio.CancelScope(shield=True):
|
||||||
|
await proc.wait()
|
||||||
|
|
||||||
|
# XXX NOTE XXX: zombie squad dispatch:
|
||||||
|
# (should ideally never, but) If we do get here it means
|
||||||
|
# graceful termination of a process failed and we need to
|
||||||
|
# resort to OS level signalling to interrupt and cancel the
|
||||||
|
# (presumably stalled or hung) actor. Since we never allow
|
||||||
|
# zombies (as a feature) we ask the OS to do send in the
|
||||||
|
# removal swad as the last resort.
|
||||||
|
if cs.cancelled_caught:
|
||||||
|
|
||||||
|
# TODO? attempt at intermediary-rent-sub
|
||||||
|
# with child in debug lock?
|
||||||
|
# |_https://github.com/goodboy/tractor/issues/320
|
||||||
|
#
|
||||||
|
# if not is_root_process():
|
||||||
|
# log.warning(
|
||||||
|
# 'Attempting to acquire debug-REPL-lock before zombie reap!'
|
||||||
|
# )
|
||||||
|
# with trio.CancelScope(shield=True):
|
||||||
|
# async with debug.acquire_debug_lock(
|
||||||
|
# subactor_uid=current_actor().uid,
|
||||||
|
# ) as _ctx:
|
||||||
|
# log.warning(
|
||||||
|
# 'Acquired debug lock, child ready to be killed ??\n'
|
||||||
|
# )
|
||||||
|
|
||||||
|
# TODO: toss in the skynet-logo face as ascii art?
|
||||||
|
log.critical(
|
||||||
|
# 'Well, the #ZOMBIE_LORD_IS_HERE# to collect\n'
|
||||||
|
'#T-800 deployed to collect zombie B0\n'
|
||||||
|
f'>x)\n'
|
||||||
|
f' |_{proc}\n'
|
||||||
|
)
|
||||||
|
proc.kill()
|
||||||
|
|
||||||
|
|
||||||
|
async def soft_kill(
|
||||||
|
proc: ProcessType,
|
||||||
|
wait_func: Callable[
|
||||||
|
[ProcessType],
|
||||||
|
Awaitable,
|
||||||
|
],
|
||||||
|
portal: Portal,
|
||||||
|
|
||||||
|
) -> None:
|
||||||
|
'''
|
||||||
|
Wait for proc termination but **don't yet** teardown
|
||||||
|
std-streams since it will clobber any ongoing pdb REPL
|
||||||
|
session.
|
||||||
|
|
||||||
|
This is our "soft"/graceful, and thus itself also cancellable,
|
||||||
|
join/reap on an actor-runtime-in-process shutdown; it is
|
||||||
|
**not** the same as a "hard kill" via an OS signal (for that
|
||||||
|
see `.hard_kill()`).
|
||||||
|
|
||||||
|
'''
|
||||||
|
chan: Channel = portal.channel
|
||||||
|
peer_aid: msgtypes.Aid = chan.aid
|
||||||
|
try:
|
||||||
|
log.cancel(
|
||||||
|
f'Soft killing sub-actor via portal request\n'
|
||||||
|
f'\n'
|
||||||
|
f'c)=> {peer_aid.reprol()}@[{chan.maddr}]\n'
|
||||||
|
f' |_{proc}\n'
|
||||||
|
)
|
||||||
|
# wait on sub-proc to signal termination
|
||||||
|
await wait_func(proc)
|
||||||
|
|
||||||
|
except trio.Cancelled:
|
||||||
|
with trio.CancelScope(shield=True):
|
||||||
|
await debug.maybe_wait_for_debugger(
|
||||||
|
child_in_debug=_runtime_vars.get(
|
||||||
|
'_debug_mode', False
|
||||||
|
),
|
||||||
|
header_msg=(
|
||||||
|
'Delaying `soft_kill()` subproc reaper while debugger locked..\n'
|
||||||
|
),
|
||||||
|
# TODO: need a diff value then default?
|
||||||
|
# poll_steps=9999999,
|
||||||
|
)
|
||||||
|
|
||||||
|
# if cancelled during a soft wait, cancel the child
|
||||||
|
# actor before entering the hard reap sequence
|
||||||
|
# below. This means we try to do a graceful teardown
|
||||||
|
# via sending a cancel message before getting out
|
||||||
|
# zombie killing tools.
|
||||||
|
async with trio.open_nursery() as n:
|
||||||
|
n.cancel_scope.shield = True
|
||||||
|
|
||||||
|
async def cancel_on_proc_deth():
|
||||||
|
'''
|
||||||
|
"Cancel-the-cancel" request: if we detect that the
|
||||||
|
underlying sub-process exited prior to
|
||||||
|
a `Portal.cancel_actor()` call completing .
|
||||||
|
|
||||||
|
'''
|
||||||
|
await wait_func(proc)
|
||||||
|
n.cancel_scope.cancel()
|
||||||
|
|
||||||
|
# start a task to wait on the termination of the
|
||||||
|
# process by itself waiting on a (caller provided) wait
|
||||||
|
# function which should unblock when the target process
|
||||||
|
# has terminated.
|
||||||
|
n.start_soon(cancel_on_proc_deth)
|
||||||
|
|
||||||
|
# send the actor-runtime a cancel request.
|
||||||
|
await portal.cancel_actor()
|
||||||
|
|
||||||
|
if proc.poll() is None: # type: ignore
|
||||||
|
log.warning(
|
||||||
|
'Subactor still alive after cancel request?\n\n'
|
||||||
|
f'uid: {peer_aid}\n'
|
||||||
|
f'|_{proc}\n'
|
||||||
|
)
|
||||||
|
n.cancel_scope.cancel()
|
||||||
|
raise
|
||||||
|
|
||||||
|
|
||||||
|
async def new_proc(
|
||||||
|
name: str,
|
||||||
|
actor_nursery: ActorNursery,
|
||||||
|
subactor: Actor,
|
||||||
|
errors: dict[tuple[str, str], Exception],
|
||||||
|
|
||||||
|
# passed through to actor main
|
||||||
|
bind_addrs: list[UnwrappedAddress],
|
||||||
|
parent_addr: UnwrappedAddress,
|
||||||
|
_runtime_vars: dict[str, Any], # serialized and sent to _child
|
||||||
|
|
||||||
|
*,
|
||||||
|
|
||||||
|
infect_asyncio: bool = False,
|
||||||
|
task_status: TaskStatus[Portal] = trio.TASK_STATUS_IGNORED,
|
||||||
|
proc_kwargs: dict[str, any] = {}
|
||||||
|
|
||||||
|
) -> None:
|
||||||
|
|
||||||
|
# lookup backend spawning target
|
||||||
|
target: Callable = _methods[_spawn_method]
|
||||||
|
|
||||||
|
# mark the new actor with the global spawn method
|
||||||
|
subactor._spawn_method = _spawn_method
|
||||||
|
|
||||||
|
await target(
|
||||||
|
name,
|
||||||
|
actor_nursery,
|
||||||
|
subactor,
|
||||||
|
errors,
|
||||||
|
bind_addrs,
|
||||||
|
parent_addr,
|
||||||
|
_runtime_vars, # run time vars
|
||||||
|
infect_asyncio=infect_asyncio,
|
||||||
|
task_status=task_status,
|
||||||
|
proc_kwargs=proc_kwargs
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
async def trio_proc(
|
||||||
|
name: str,
|
||||||
|
actor_nursery: ActorNursery,
|
||||||
|
subactor: Actor,
|
||||||
|
errors: dict[tuple[str, str], Exception],
|
||||||
|
|
||||||
|
# passed through to actor main
|
||||||
|
bind_addrs: list[UnwrappedAddress],
|
||||||
|
parent_addr: UnwrappedAddress,
|
||||||
|
_runtime_vars: dict[str, Any], # serialized and sent to _child
|
||||||
|
*,
|
||||||
|
infect_asyncio: bool = False,
|
||||||
|
task_status: TaskStatus[Portal] = trio.TASK_STATUS_IGNORED,
|
||||||
|
proc_kwargs: dict[str, any] = {}
|
||||||
|
|
||||||
|
) -> None:
|
||||||
|
'''
|
||||||
|
Create a new ``Process`` using a "spawn method" as (configured using
|
||||||
|
``try_set_start_method()``).
|
||||||
|
|
||||||
|
This routine should be started in a actor runtime task and the logic
|
||||||
|
here is to be considered the core supervision strategy.
|
||||||
|
|
||||||
|
'''
|
||||||
|
spawn_cmd = [
|
||||||
|
sys.executable,
|
||||||
|
"-m",
|
||||||
|
# Hardcode this (instead of using ``_child.__name__`` to avoid a
|
||||||
|
# double import warning: https://stackoverflow.com/a/45070583
|
||||||
|
"tractor._child",
|
||||||
|
# We provide the child's unique identifier on this exec/spawn
|
||||||
|
# line for debugging purposes when viewing the process tree from
|
||||||
|
# the OS; it otherwise can be passed via the parent channel if
|
||||||
|
# we prefer in the future (for privacy).
|
||||||
|
"--uid",
|
||||||
|
# TODO, how to pass this over "wire" encodings like
|
||||||
|
# cmdline args?
|
||||||
|
# -[ ] maybe we can add an `msgtypes.Aid.min_tuple()` ?
|
||||||
|
str(subactor.uid),
|
||||||
|
# Address the child must connect to on startup
|
||||||
|
"--parent_addr",
|
||||||
|
str(parent_addr)
|
||||||
|
]
|
||||||
|
|
||||||
|
if subactor.loglevel:
|
||||||
|
spawn_cmd += [
|
||||||
|
"--loglevel",
|
||||||
|
subactor.loglevel
|
||||||
|
]
|
||||||
|
# Tell child to run in guest mode on top of ``asyncio`` loop
|
||||||
|
if infect_asyncio:
|
||||||
|
spawn_cmd.append("--asyncio")
|
||||||
|
|
||||||
|
cancelled_during_spawn: bool = False
|
||||||
|
proc: trio.Process|None = None
|
||||||
|
ipc_server: _server.Server = actor_nursery._actor.ipc_server
|
||||||
|
try:
|
||||||
|
try:
|
||||||
|
proc: trio.Process = await trio.lowlevel.open_process(spawn_cmd, **proc_kwargs)
|
||||||
|
log.runtime(
|
||||||
|
f'Started new child subproc\n'
|
||||||
|
f'(>\n'
|
||||||
|
f' |_{proc}\n'
|
||||||
|
)
|
||||||
|
|
||||||
|
# wait for actor to spawn and connect back to us
|
||||||
|
# channel should have handshake completed by the
|
||||||
|
# local actor by the time we get a ref to it
|
||||||
|
event, chan = await ipc_server.wait_for_peer(
|
||||||
|
subactor.uid
|
||||||
|
)
|
||||||
|
|
||||||
|
except trio.Cancelled:
|
||||||
|
cancelled_during_spawn = True
|
||||||
|
# we may cancel before the child connects back in which
|
||||||
|
# case avoid clobbering the pdb tty.
|
||||||
|
if debug_mode():
|
||||||
|
with trio.CancelScope(shield=True):
|
||||||
|
# don't clobber an ongoing pdb
|
||||||
|
if is_root_process():
|
||||||
|
await debug.maybe_wait_for_debugger()
|
||||||
|
|
||||||
|
elif proc is not None:
|
||||||
|
async with debug.acquire_debug_lock(subactor.uid):
|
||||||
|
# soft wait on the proc to terminate
|
||||||
|
with trio.move_on_after(0.5):
|
||||||
|
await proc.wait()
|
||||||
|
raise
|
||||||
|
|
||||||
|
# a sub-proc ref **must** exist now
|
||||||
|
assert proc
|
||||||
|
|
||||||
|
portal = Portal(chan)
|
||||||
|
actor_nursery._children[subactor.uid] = (
|
||||||
|
subactor,
|
||||||
|
proc,
|
||||||
|
portal,
|
||||||
|
)
|
||||||
|
|
||||||
|
# send a "spawning specification" which configures the
|
||||||
|
# initial runtime state of the child.
|
||||||
|
sspec = msgtypes.SpawnSpec(
|
||||||
|
_parent_main_data=subactor._parent_main_data,
|
||||||
|
enable_modules=subactor.enable_modules,
|
||||||
|
reg_addrs=subactor.reg_addrs,
|
||||||
|
bind_addrs=bind_addrs,
|
||||||
|
_runtime_vars=_runtime_vars,
|
||||||
|
)
|
||||||
|
log.runtime(
|
||||||
|
f'Sending spawn spec to child\n'
|
||||||
|
f'{{}}=> {chan.aid.reprol()!r}\n'
|
||||||
|
f'\n'
|
||||||
|
f'{pretty_struct.pformat(sspec)}\n'
|
||||||
|
)
|
||||||
|
await chan.send(sspec)
|
||||||
|
|
||||||
|
# track subactor in current nursery
|
||||||
|
curr_actor: Actor = current_actor()
|
||||||
|
curr_actor._actoruid2nursery[subactor.uid] = actor_nursery
|
||||||
|
|
||||||
|
# resume caller at next checkpoint now that child is up
|
||||||
|
task_status.started(portal)
|
||||||
|
|
||||||
|
# wait for ActorNursery.wait() to be called
|
||||||
|
with trio.CancelScope(shield=True):
|
||||||
|
await actor_nursery._join_procs.wait()
|
||||||
|
|
||||||
|
async with trio.open_nursery() as nursery:
|
||||||
|
if portal in actor_nursery._cancel_after_result_on_exit:
|
||||||
|
nursery.start_soon(
|
||||||
|
cancel_on_completion,
|
||||||
|
portal,
|
||||||
|
subactor,
|
||||||
|
errors
|
||||||
|
)
|
||||||
|
|
||||||
|
# This is a "soft" (cancellable) join/reap which
|
||||||
|
# will remote cancel the actor on a ``trio.Cancelled``
|
||||||
|
# condition.
|
||||||
|
await soft_kill(
|
||||||
|
proc,
|
||||||
|
trio.Process.wait, # XXX, uses `pidfd_open()` below.
|
||||||
|
portal
|
||||||
|
)
|
||||||
|
|
||||||
|
# cancel result waiter that may have been spawned in
|
||||||
|
# tandem if not done already
|
||||||
|
log.cancel(
|
||||||
|
'Cancelling portal result reaper task\n'
|
||||||
|
f'c)> {subactor.aid.reprol()!r}\n'
|
||||||
|
)
|
||||||
|
nursery.cancel_scope.cancel()
|
||||||
|
|
||||||
|
finally:
|
||||||
|
# XXX NOTE XXX: The "hard" reap since no actor zombies are
|
||||||
|
# allowed! Do this **after** cancellation/teardown to avoid
|
||||||
|
# killing the process too early.
|
||||||
|
if proc:
|
||||||
|
reap_repr: str = _pformat.nest_from_op(
|
||||||
|
input_op='>x)',
|
||||||
|
text=subactor.pformat(),
|
||||||
|
)
|
||||||
|
log.cancel(
|
||||||
|
f'Hard reap sequence starting for subactor\n'
|
||||||
|
f'{reap_repr}'
|
||||||
|
)
|
||||||
|
|
||||||
|
with trio.CancelScope(shield=True):
|
||||||
|
# don't clobber an ongoing pdb
|
||||||
|
if cancelled_during_spawn:
|
||||||
|
# Try again to avoid TTY clobbering.
|
||||||
|
async with debug.acquire_debug_lock(subactor.uid):
|
||||||
|
with trio.move_on_after(0.5):
|
||||||
|
await proc.wait()
|
||||||
|
|
||||||
|
await debug.maybe_wait_for_debugger(
|
||||||
|
child_in_debug=_runtime_vars.get(
|
||||||
|
'_debug_mode', False
|
||||||
|
),
|
||||||
|
header_msg=(
|
||||||
|
'Delaying subproc reaper while debugger locked..\n'
|
||||||
|
),
|
||||||
|
|
||||||
|
# TODO: need a diff value then default?
|
||||||
|
# poll_steps=9999999,
|
||||||
|
)
|
||||||
|
# TODO: solve the following issue where we need
|
||||||
|
# to do a similar wait like this but in an
|
||||||
|
# "intermediary" parent actor that itself isn't
|
||||||
|
# in debug but has a child that is, and we need
|
||||||
|
# to hold off on relaying SIGINT until that child
|
||||||
|
# is complete.
|
||||||
|
# https://github.com/goodboy/tractor/issues/320
|
||||||
|
# -[ ] we need to handle non-root parent-actors specially
|
||||||
|
# by somehow determining if a child is in debug and then
|
||||||
|
# avoiding cancel/kill of said child by this
|
||||||
|
# (intermediary) parent until such a time as the root says
|
||||||
|
# the pdb lock is released and we are good to tear down
|
||||||
|
# (our children)..
|
||||||
|
#
|
||||||
|
# -[ ] so maybe something like this where we try to
|
||||||
|
# acquire the lock and get notified of who has it,
|
||||||
|
# check that uid against our known children?
|
||||||
|
# this_uid: tuple[str, str] = current_actor().uid
|
||||||
|
# await debug.acquire_debug_lock(this_uid)
|
||||||
|
|
||||||
|
if proc.poll() is None:
|
||||||
|
log.cancel(f"Attempting to hard kill {proc}")
|
||||||
|
await hard_kill(proc)
|
||||||
|
|
||||||
|
log.debug(f"Joined {proc}")
|
||||||
|
else:
|
||||||
|
log.warning('Nursery cancelled before sub-proc started')
|
||||||
|
|
||||||
|
if not cancelled_during_spawn:
|
||||||
|
# pop child entry to indicate we no longer managing this
|
||||||
|
# subactor
|
||||||
|
actor_nursery._children.pop(subactor.uid)
|
||||||
|
|
||||||
|
|
||||||
|
async def mp_proc(
|
||||||
|
name: str,
|
||||||
|
actor_nursery: ActorNursery, # type: ignore # noqa
|
||||||
|
subactor: Actor,
|
||||||
|
errors: dict[tuple[str, str], Exception],
|
||||||
|
# passed through to actor main
|
||||||
|
bind_addrs: list[UnwrappedAddress],
|
||||||
|
parent_addr: UnwrappedAddress,
|
||||||
|
_runtime_vars: dict[str, Any], # serialized and sent to _child
|
||||||
|
*,
|
||||||
|
infect_asyncio: bool = False,
|
||||||
|
task_status: TaskStatus[Portal] = trio.TASK_STATUS_IGNORED,
|
||||||
|
proc_kwargs: dict[str, any] = {}
|
||||||
|
|
||||||
|
) -> None:
|
||||||
|
|
||||||
|
# uggh zone
|
||||||
|
try:
|
||||||
|
from multiprocessing import semaphore_tracker # type: ignore
|
||||||
|
resource_tracker = semaphore_tracker
|
||||||
|
resource_tracker._resource_tracker = resource_tracker._semaphore_tracker # noqa
|
||||||
|
except ImportError:
|
||||||
|
# 3.8 introduces a more general version that also tracks shared mems
|
||||||
|
from multiprocessing import resource_tracker # type: ignore
|
||||||
|
|
||||||
|
assert _ctx
|
||||||
|
start_method = _ctx.get_start_method()
|
||||||
|
if start_method == 'forkserver':
|
||||||
|
|
||||||
|
from multiprocessing import forkserver # type: ignore
|
||||||
|
# XXX do our hackery on the stdlib to avoid multiple
|
||||||
|
# forkservers (one at each subproc layer).
|
||||||
|
fs = forkserver._forkserver
|
||||||
|
curr_actor = current_actor()
|
||||||
|
if is_main_process() and not curr_actor._forkserver_info:
|
||||||
|
# if we're the "main" process start the forkserver
|
||||||
|
# only once and pass its ipc info to downstream
|
||||||
|
# children
|
||||||
|
# forkserver.set_forkserver_preload(enable_modules)
|
||||||
|
forkserver.ensure_running()
|
||||||
|
fs_info = (
|
||||||
|
fs._forkserver_address, # type: ignore # noqa
|
||||||
|
fs._forkserver_alive_fd, # type: ignore # noqa
|
||||||
|
getattr(fs, '_forkserver_pid', None),
|
||||||
|
getattr(
|
||||||
|
resource_tracker._resource_tracker, '_pid', None),
|
||||||
|
resource_tracker._resource_tracker._fd,
|
||||||
|
)
|
||||||
|
else: # request to forkerserver to fork a new child
|
||||||
|
assert curr_actor._forkserver_info
|
||||||
|
fs_info = (
|
||||||
|
fs._forkserver_address, # type: ignore # noqa
|
||||||
|
fs._forkserver_alive_fd, # type: ignore # noqa
|
||||||
|
fs._forkserver_pid, # type: ignore # noqa
|
||||||
|
resource_tracker._resource_tracker._pid,
|
||||||
|
resource_tracker._resource_tracker._fd,
|
||||||
|
) = curr_actor._forkserver_info
|
||||||
|
else:
|
||||||
|
# spawn method
|
||||||
|
fs_info = (None, None, None, None, None)
|
||||||
|
|
||||||
|
proc: mp.Process = _ctx.Process( # type: ignore
|
||||||
|
target=_mp_main,
|
||||||
|
args=(
|
||||||
|
subactor,
|
||||||
|
bind_addrs,
|
||||||
|
fs_info,
|
||||||
|
_spawn_method,
|
||||||
|
parent_addr,
|
||||||
|
infect_asyncio,
|
||||||
|
),
|
||||||
|
# daemon=True,
|
||||||
|
name=name,
|
||||||
|
)
|
||||||
|
|
||||||
|
# `multiprocessing` only (since no async interface):
|
||||||
|
# register the process before start in case we get a cancel
|
||||||
|
# request before the actor has fully spawned - then we can wait
|
||||||
|
# for it to fully come up before sending a cancel request
|
||||||
|
actor_nursery._children[subactor.uid] = (subactor, proc, None)
|
||||||
|
|
||||||
|
proc.start()
|
||||||
|
if not proc.is_alive():
|
||||||
|
raise ActorFailure("Couldn't start sub-actor?")
|
||||||
|
|
||||||
|
log.runtime(f"Started {proc}")
|
||||||
|
|
||||||
|
ipc_server: _server.Server = actor_nursery._actor.ipc_server
|
||||||
|
try:
|
||||||
|
# wait for actor to spawn and connect back to us
|
||||||
|
# channel should have handshake completed by the
|
||||||
|
# local actor by the time we get a ref to it
|
||||||
|
event, chan = await ipc_server.wait_for_peer(
|
||||||
|
subactor.uid,
|
||||||
|
)
|
||||||
|
|
||||||
|
# XXX: monkey patch poll API to match the ``subprocess`` API..
|
||||||
|
# not sure why they don't expose this but kk.
|
||||||
|
proc.poll = lambda: proc.exitcode # type: ignore
|
||||||
|
|
||||||
|
# except:
|
||||||
|
# TODO: in the case we were cancelled before the sub-proc
|
||||||
|
# registered itself back we must be sure to try and clean
|
||||||
|
# any process we may have started.
|
||||||
|
|
||||||
|
portal = Portal(chan)
|
||||||
|
actor_nursery._children[subactor.uid] = (subactor, proc, portal)
|
||||||
|
|
||||||
|
# unblock parent task
|
||||||
|
task_status.started(portal)
|
||||||
|
|
||||||
|
# wait for ``ActorNursery`` block to signal that
|
||||||
|
# subprocesses can be waited upon.
|
||||||
|
# This is required to ensure synchronization
|
||||||
|
# with user code that may want to manually await results
|
||||||
|
# from nursery spawned sub-actors. We don't want the
|
||||||
|
# containing nurseries here to collect results or error
|
||||||
|
# while user code is still doing it's thing. Only after the
|
||||||
|
# nursery block closes do we allow subactor results to be
|
||||||
|
# awaited and reported upwards to the supervisor.
|
||||||
|
with trio.CancelScope(shield=True):
|
||||||
|
await actor_nursery._join_procs.wait()
|
||||||
|
|
||||||
|
async with trio.open_nursery() as nursery:
|
||||||
|
if portal in actor_nursery._cancel_after_result_on_exit:
|
||||||
|
nursery.start_soon(
|
||||||
|
cancel_on_completion,
|
||||||
|
portal,
|
||||||
|
subactor,
|
||||||
|
errors
|
||||||
|
)
|
||||||
|
|
||||||
|
# This is a "soft" (cancellable) join/reap which
|
||||||
|
# will remote cancel the actor on a ``trio.Cancelled``
|
||||||
|
# condition.
|
||||||
|
await soft_kill(
|
||||||
|
proc,
|
||||||
|
proc_waiter,
|
||||||
|
portal
|
||||||
|
)
|
||||||
|
|
||||||
|
# cancel result waiter that may have been spawned in
|
||||||
|
# tandem if not done already
|
||||||
|
log.warning(
|
||||||
|
"Cancelling existing result waiter task for "
|
||||||
|
f"{subactor.uid}")
|
||||||
|
nursery.cancel_scope.cancel()
|
||||||
|
|
||||||
|
finally:
|
||||||
|
# hard reap sequence
|
||||||
|
if proc.is_alive():
|
||||||
|
log.cancel(f"Attempting to hard kill {proc}")
|
||||||
|
with trio.move_on_after(0.1) as cs:
|
||||||
|
cs.shield = True
|
||||||
|
await proc_waiter(proc)
|
||||||
|
|
||||||
|
if cs.cancelled_caught:
|
||||||
|
proc.terminate()
|
||||||
|
|
||||||
|
proc.join()
|
||||||
|
log.debug(f"Joined {proc}")
|
||||||
|
|
||||||
|
# pop child entry to indicate we are no longer managing subactor
|
||||||
|
actor_nursery._children.pop(subactor.uid)
|
||||||
|
|
||||||
|
# TODO: prolly report to ``mypy`` how this causes all sorts of
|
||||||
|
# false errors..
|
||||||
|
# subactor, proc, portal = actor_nursery._children.pop(subactor.uid)
|
||||||
|
|
||||||
|
|
||||||
|
# proc spawning backend target map
|
||||||
|
_methods: dict[SpawnMethodKey, Callable] = {
|
||||||
|
'trio': trio_proc,
|
||||||
|
'mp_spawn': mp_proc,
|
||||||
|
'mp_forkserver': mp_proc,
|
||||||
|
}
|
||||||
|
|
@@ -22,25 +22,19 @@ from __future__ import annotations
 from contextvars import (
     ContextVar,
 )
+import os
 from pathlib import Path
 from typing import (
     Any,
-    Callable,
     Literal,
     TYPE_CHECKING,
 )

-import platformdirs
 from trio.lowlevel import current_task

-from msgspec import (
-    field,
-    Struct,
-)
-
 if TYPE_CHECKING:
     from ._runtime import Actor
-    from .._context import Context
+    from ._context import Context


 # default IPC transport protocol settings
@@ -53,70 +47,9 @@ _def_tpt_proto: TransportProtocolKey = 'tcp'
 _current_actor: Actor|None = None # type: ignore # noqa
 _last_actor_terminated: Actor|None = None


 # TODO: mk this a `msgspec.Struct`!
-# -[x] type out all fields obvi!
+# -[ ] type out all fields obvi!
 # -[ ] (eventually) mk wire-ready for monitoring?
-class RuntimeVars(Struct):
-    '''
-    Actor-(and thus process)-global runtime state.
-
-    This struct is relayed from parent to child during sub-actor
-    spawning and is a singleton instance per process.
-
-    Generally contains,
-    - root-actor indicator.
-    - comms-info: addrs for both (public) process/service-discovery
-      and in-tree contact with other actors.
-    - transport-layer IPC protocol server(s) settings.
-    - debug-mode settings for enabling sync breakpointing and any
-      surrounding REPL-fixture hooking.
-    - infected-`asyncio` via guest-mode toggle(s)/cohfig.
-
-    '''
-    _is_root: bool = False # bool
-    _root_mailbox: tuple[str, str|int] = (None, None) # tuple[str|None, str|None]
-    _root_addrs: list[
-        tuple[str, str|int],
-    ] = [] # tuple[str|None, str|None]
-
-    # parent->chld ipc protocol caps
-    _enable_tpts: list[TransportProtocolKey] = field(
-        default_factory=lambda: [_def_tpt_proto],
-    )
-
-    # registrar info
-    _registry_addrs: list[tuple] = []
-
-    # `debug_mode: bool` settings
-    _debug_mode: bool = False # bool
-    repl_fixture: bool|Callable = False # |AbstractContextManager[bool]
-    # for `tractor.pause_from_sync()` & `breakpoint()` support
-    use_greenback: bool = False
-
-    # infected-`asyncio`-mode: `trio` running as guest.
-    _is_infected_aio: bool = False
-
-    def __setattr__(
-        self,
-        key,
-        val,
-    ) -> None:
-        breakpoint()
-        super().__setattr__(key, val)
-
-    def update(
-        self,
-        from_dict: dict|Struct,
-    ) -> None:
-        for attr, val in from_dict.items():
-            setattr(
-                self,
-                attr,
-                val,
-            )


 _runtime_vars: dict[str, Any] = {
     # root of actor-process tree info
     '_is_root': False, # bool
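Note: the removed `RuntimeVars` struct above leans on `msgspec.field(default_factory=...)` so each instance gets its own list default. A rough sketch of that pattern; the type and field names here are made up for illustration and assume a recent `msgspec`:

```python
from msgspec import Struct, field


class TptSettings(Struct):
    # a factory keeps instances from sharing one list object
    protos: list[str] = field(default_factory=lambda: ['tcp'])
    debug_mode: bool = False

    def update(self, from_dict: dict) -> None:
        # mirror the dict-style `.update()` the runtime-vars dict offers
        for attr, val in from_dict.items():
            setattr(self, attr, val)


settings = TptSettings()
settings.update({'debug_mode': True})
print(settings)
```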
@@ -140,23 +73,6 @@ _runtime_vars: dict[str, Any] = {
 }


-def get_runtime_vars(
-    as_dict: bool = True,
-) -> dict:
-    '''
-    Deliver a **copy** of the current `Actor`'s "runtime variables".
-
-    By default, for historical impl reasons, this delivers the `dict`
-    form, but the `RuntimeVars` struct should be utilized as possible
-    for future calls.
-
-    '''
-    if as_dict:
-        return dict(_runtime_vars)
-
-    return RuntimeVars(**_runtime_vars)
-
-
 def last_actor() -> Actor|None:
     '''
     Try to return last active `Actor` singleton
@@ -182,13 +98,13 @@ def current_actor(
 _current_actor is None
 ):
 msg: str = 'No local actor has been initialized yet?\n'
-from .._exceptions import NoRuntime
+from ._exceptions import NoRuntime

 if last := last_actor():
     msg += (
         f'Apparently the lact active actor was\n'
         f'|_{last}\n'
-        f'|_{last.aid.uid}\n'
+        f'|_{last.uid}\n'
     )
 # no actor runtime has (as of yet) ever been started for
 # this process.
@@ -248,7 +164,7 @@ def current_ipc_ctx(
 not ctx
 and error_on_not_set
 ):
-    from .._exceptions import InternalError
+    from ._exceptions import InternalError
     raise InternalError(
         'No IPC context has been allocated for this task yet?\n'
         f'|_{current_task()}\n'
@@ -256,56 +172,23 @@ def current_ipc_ctx(
 return ctx


+# std ODE (mutable) app state location
+_rtdir: Path = Path(os.environ['XDG_RUNTIME_DIR'])


 def get_rt_dir(
-    subdir: str|Path|None = None,
-    appname: str = 'tractor',
+    subdir: str = 'tractor'
 ) -> Path:
     '''
-    Return the user "runtime dir", the file-sys location where most
-    userspace apps stick their IPC and cache related system
-    util-files.
-
-    On linux we use a `${XDG_RUNTIME_DIR}/tractor/` subdir by
-    default, but equivalents are mapped for each platform using
-    the lovely `platformdirs` lib.
+    Return the user "runtime dir" where most userspace apps stick
+    their IPC and cache related system util-files; we take hold
+    of a `'XDG_RUNTIME_DIR'/tractor/` subdir by default.

     '''
-    rt_dir: Path = Path(
-        platformdirs.user_runtime_dir(
-            appname=appname,
-        ),
-    )
-
-    # Normalize and validate that `subdir` is a relative path
-    # without any parent-directory ("..") components, to prevent
-    # escaping the runtime directory.
-    if subdir:
-        subdir_path = (
-            subdir
-            if isinstance(subdir, Path)
-            else Path(subdir)
-        )
-        if subdir_path.is_absolute():
-            raise ValueError(
-                f'`subdir` must be a relative path!\n'
-                f'{subdir!r}\n'
-            )
-        if any(part == '..' for part in subdir_path.parts):
-            raise ValueError(
-                "`subdir` must not contain '..' components!\n"
-                f'{subdir!r}\n'
-            )
-
-        rt_dir: Path = rt_dir / subdir_path
-
-    if not rt_dir.is_dir():
-        rt_dir.mkdir(
-            parents=True,
-            exist_ok=True, # avoid `FileExistsError` from conc calls
-        )
-
-    return rt_dir
+    rtdir: Path = _rtdir / subdir
+    if not rtdir.is_dir():
+        rtdir.mkdir()
+    return rtdir


 def current_ipc_protos() -> list[str]:
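Note: both sides of the `get_rt_dir()` hunk above resolve a per-user runtime dir; the `-` side goes through `platformdirs` while the `+` side reads `XDG_RUNTIME_DIR` directly. A rough sketch of the two lookups, assuming the env var is set (as the kept code does); the resulting paths are typical, not guaranteed:

```python
import os
from pathlib import Path

import platformdirs

# portable: maps to an OS-appropriate per-user runtime dir
portable: Path = Path(platformdirs.user_runtime_dir(appname='tractor'))

# linux-only: relies on the session manager exporting XDG_RUNTIME_DIR
xdg: Path = Path(os.environ['XDG_RUNTIME_DIR']) / 'tractor'

for rt_dir in (portable, xdg):
    rt_dir.mkdir(parents=True, exist_ok=True)
    print(rt_dir)
```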
@@ -38,7 +38,6 @@ import trio
 from ._exceptions import (
     ContextCancelled,
     RemoteActorError,
-    TransportClosed,
 )
 from .log import get_logger
 from .trionics import (
@@ -55,12 +54,12 @@ from tractor.msg import (
 )

 if TYPE_CHECKING:
-    from .runtime._runtime import Actor
+    from ._runtime import Actor
     from ._context import Context
     from .ipc import Channel


-log = get_logger()
+log = get_logger(__name__)


 # TODO: the list
@@ -410,8 +409,10 @@ class MsgStream(trio.abc.Channel):
 # it).
 with trio.CancelScope(shield=True):
     await self._ctx.send_stop()

 except (
-    TransportClosed,
+    trio.BrokenResourceError,
+    trio.ClosedResourceError
 ) as re:
     # the underlying channel may already have been pulled
     # in which case our stop message is meaningless since
@@ -592,8 +593,9 @@ class MsgStream(trio.abc.Channel):
     ),
 )
 except (
+    trio.ClosedResourceError,
+    trio.BrokenResourceError,
     BrokenPipeError,
-    TransportClosed,
 ) as _trans_err:
     trans_err = _trans_err
     if (
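Note: the two `MsgStream` hunks above trade tractor's `TransportClosed` for `trio`'s built-in resource errors. As a refresher, `trio.ClosedResourceError` means the local end was closed while `BrokenResourceError` means the peer side went away; a tiny repro of the former (illustrative only, not from the diff):

```python
import trio


async def main() -> None:
    send, recv = trio.open_memory_channel(0)
    await send.aclose()
    try:
        # sending on a locally-closed endpoint raises ClosedResourceError
        await send.send('hi')
    except trio.ClosedResourceError as err:
        print(f'send failed: {err!r}')


trio.run(main)
```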
@@ -30,39 +30,39 @@ import warnings
 import trio


-from ..devx import (
+from .devx import (
     debug,
     pformat as _pformat,
 )
-from ..discovery._addr import (
+from ._addr import (
     UnwrappedAddress,
     mk_uuid,
 )
 from ._state import current_actor, is_main_process
-from ..log import get_logger, get_loglevel
+from .log import get_logger, get_loglevel
 from ._runtime import Actor
 from ._portal import Portal
-from ..trionics import (
+from .trionics import (
     is_multi_cancelled,
     collapse_eg,
 )
-from .._exceptions import (
+from ._exceptions import (
     ContextCancelled,
 )
-from .._root import (
+from ._root import (
     open_root_actor,
 )
 from . import _state
-from ..spawn import _spawn
+from . import _spawn


 if TYPE_CHECKING:
     import multiprocessing as mp
-    # from ..ipc._server import IPCServer
-    from ..ipc import IPCServer
+    # from .ipc._server import IPCServer
+    from .ipc import IPCServer


-log = get_logger()
+log = get_logger(__name__)


 class ActorNursery:
@@ -194,26 +194,18 @@ class ActorNursery:
         loglevel: str|None = None,  # set log level per subactor
         debug_mode: bool|None = None,
         infect_asyncio: bool = False,
-        inherit_parent_main: bool = True,

         # TODO: ideally we can rm this once we no longer have
         # a `._ria_nursery` since the dependent APIs have been
         # removed!
         nursery: trio.Nursery|None = None,
-        proc_kwargs: dict[str, typing.Any] | None = None,
+        proc_kwargs: dict[str, any] = {}

     ) -> Portal:
         '''
         Start a (daemon) actor: an process that has no designated
         "main task" besides the runtime.

-        Pass ``inherit_parent_main=False`` to keep this child on its
-        own bootstrap module for the trio spawn backend instead of
-        applying the parent ``__main__`` re-exec fixup during startup.
-        This does not affect ``multiprocessing`` ``spawn`` or
-        ``forkserver`` which reconstruct the parent's ``__main__`` as
-        part of their normal stdlib bootstrap.
-
         '''
         __runtimeframe__: int = 1  # noqa
         loglevel: str = (
@@ -232,8 +224,7 @@ class ActorNursery:
             _rtv['_debug_mode'] = debug_mode
             self._at_least_one_child_in_debug = True

-        enable_modules = list(enable_modules or [])
-        proc_kwargs = dict(proc_kwargs or {})
+        enable_modules = enable_modules or []

         if rpc_module_paths:
             warnings.warn(
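Aside: the two hunks above are the classic mutable-default-argument fix. The `main` side declares `proc_kwargs: dict[str, any] = {}` (note that lowercase `any` is the builtin function, not `typing.Any`), so every call shares one dict object; the `big_boi_docs` side defaults to `None` and copies. A small sketch of why that matters:

```python
from typing import Any


def shared_default(opts: dict[str, Any] = {}) -> dict[str, Any]:
    # the literal {} is created once at def-time and reused on every call
    opts.setdefault('count', 0)
    opts['count'] += 1
    return opts


def fresh_default(opts: dict[str, Any] | None = None) -> dict[str, Any]:
    opts = dict(opts or {})  # new dict per call, caller's dict untouched
    opts.setdefault('count', 0)
    opts['count'] += 1
    return opts


assert shared_default() is shared_default()    # same object, state leaks across calls
assert fresh_default() is not fresh_default()  # independent dicts each call
```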
@@ -251,7 +242,6 @@ class ActorNursery:
             # modules allowed to invoked funcs from
             enable_modules=enable_modules,
             loglevel=loglevel,
-            inherit_parent_main=inherit_parent_main,

             # verbatim relay this actor's registrar addresses
             registry_addrs=current_actor().registry_addrs,
@@ -299,8 +289,7 @@ class ActorNursery:
         enable_modules: list[str] | None = None,
         loglevel: str | None = None,  # set log level per subactor
         infect_asyncio: bool = False,
-        inherit_parent_main: bool = True,
-        proc_kwargs: dict[str, typing.Any] | None = None,
+        proc_kwargs: dict[str, any] = {},

         **kwargs,  # explicit args to ``fn``
@@ -321,7 +310,6 @@ class ActorNursery:
         # use the explicit function name if not provided
         name = fn.__name__

-        proc_kwargs = dict(proc_kwargs or {})
         portal: Portal = await self.start_actor(
             name,
             enable_modules=[mod_path] + (
@@ -332,7 +320,6 @@ class ActorNursery:
             # use the run_in_actor nursery
             nursery=self._ria_nursery,
             infect_asyncio=infect_asyncio,
-            inherit_parent_main=inherit_parent_main,
             proc_kwargs=proc_kwargs
         )
@@ -404,17 +391,15 @@ class ActorNursery:

         else:
             if portal is None:  # actor hasn't fully spawned yet
-                event: trio.Event = server._peer_connected[
-                    subactor.aid.uid
-                ]
+                event: trio.Event = server._peer_connected[subactor.uid]
                 log.warning(
-                    f"{subactor.aid.uid} never 't finished spawning?"
+                    f"{subactor.uid} never 't finished spawning?"
                 )

                 await event.wait()

                 # channel/portal should now be up
-                _, _, portal = children[subactor.aid.uid]
+                _, _, portal = children[subactor.uid]

                 # XXX should be impossible to get here
                 # unless method was called from within
@@ -422,7 +407,7 @@ class ActorNursery:
                 if portal is None:
                     # cancelled while waiting on the event
                     # to arrive
-                    chan = server._peers[subactor.aid.uid][-1]
+                    chan = server._peers[subactor.uid][-1]
                     if chan:
                         portal = Portal(chan)
                     else:  # there's no other choice left
@@ -521,7 +506,7 @@ async def _open_and_supervise_one_cancels_all_nursery(

         except BaseException as _inner_err:
             inner_err = _inner_err
-            errors[actor.aid.uid] = inner_err
+            errors[actor.uid] = inner_err

             # If we error in the root but the debugger is
             # engaged we don't want to prematurely kill (and
@@ -554,7 +539,7 @@ async def _open_and_supervise_one_cancels_all_nursery(
             log.cancel(
                 f'Actor-nursery cancelled by {etype}\n\n'

-                f'{current_actor().aid.uid}\n'
+                f'{current_actor().uid}\n'
                 f' |_{an}\n\n'

                 # TODO: show tb str?
@@ -645,7 +630,7 @@ async def _open_and_supervise_one_cancels_all_nursery(

         # show frame on any (likely) internal error
         if (
-            not an.cancel_called
+            not an.cancelled
             and an._scope_error
         ):
             __tracebackhide__: bool = False
@@ -741,7 +726,7 @@ async def open_nursery(
         if (
             an
             and
-            not an.cancel_called
+            not an.cancelled
             and
             an._scope_error
         ):
@@ -26,7 +26,9 @@ import random
 from typing import (
     Type,
 )
-from tractor.discovery import _addr
+from tractor import (
+    _addr,
+)


 def get_rando_addr(
@@ -59,11 +61,7 @@ def get_rando_addr(
         # NOTE, file-name uniqueness (no-collisions) will be based on
         # the runtime-directory and root (pytest-proc's) pid.
         case 'uds':
-            from tractor.ipc._uds import UDSAddress
-            addr: UDSAddress = addr_type.get_random()
-            assert addr.is_valid
-            assert addr.sockpath.resolve()
-            testrun_reg_addr = addr.unwrap()
+            testrun_reg_addr = addr_type.get_random().unwrap()

     # XXX, as sanity it should never the same as the default for the
     # host-singleton registry actor.
@@ -25,23 +25,13 @@ from functools import (
 )
 import inspect
 import platform
-from typing import (
-    Callable,
-    get_args,
-)

 import pytest
 import tractor
 import trio


-def tractor_test(
-    wrapped: Callable|None = None,
-    *,
-    # @tractor_test(<deco-params>)
-    timeout: float = 30,
-    hide_tb: bool = True,
-):
+def tractor_test(fn):
     '''
     Decorator for async test fns to decorator-wrap them as "native"
     looking sync funcs runnable by `pytest` and auto invoked with
@@ -55,18 +45,8 @@ def tractor_test(
     Basic deco use:
     ---------------

-    @tractor_test(
-        timeout=10,
-    )
-    async def test_whatever(
-        # fixture param declarations
-        loglevel: str,
-        start_method: str,
-        reg_addr: tuple,
-        tpt_proto: str,
-        debug_mode: bool,
-    ):
-        # already inside a root-actor runtime `trio.Task`
+    @tractor_test
+    async def test_whatever():
         await ...
@@ -75,7 +55,7 @@ def tractor_test(
    If any of the following fixture are requested by the wrapped test
    fn (via normal func-args declaration),

-    - `reg_addr` (a socket addr tuple where registrar is listening)
+    - `reg_addr` (a socket addr tuple where arbiter is listening)
     - `loglevel` (logging level passed to tractor internals)
     - `start_method` (subprocess spawning backend)
@@ -87,96 +67,65 @@ def tractor_test(
     `tractor.open_root_actor()` funcargs.

     '''
-    __tracebackhide__: bool = hide_tb
+    @wraps(fn)
+    def wrapper(
+        *args,
+        loglevel=None,
+        reg_addr=None,
+        start_method: str|None = None,
+        debug_mode: bool = False,
+        **kwargs
+    ):
+        # __tracebackhide__ = True

-    # handle @tractor_test (no parens) vs @tractor_test(timeout=10)
-    if wrapped is None:
-        return partial(
-            tractor_test,
-            timeout=timeout,
-            hide_tb=hide_tb,
-        )
+        # NOTE: inject ant test func declared fixture
+        # names by manually checking!
+        if 'reg_addr' in inspect.signature(fn).parameters:
+            # injects test suite fixture value to test as well
+            # as `run()`
+            kwargs['reg_addr'] = reg_addr

-    funcname: str = wrapped.__name__
-    if not inspect.iscoroutinefunction(wrapped):
-        raise TypeError(
-            f'Test-fn {funcname!r} must be an async-function !!'
-        )
+        if 'loglevel' in inspect.signature(fn).parameters:
+            # allows test suites to define a 'loglevel' fixture
+            # that activates the internal logging
+            kwargs['loglevel'] = loglevel

-    # NOTE: we intentionally use `functools.wraps` instead of
-    # `@wrapt.decorator` here bc wrapt's transparent proxy makes
-    # `inspect.iscoroutinefunction(wrapper)` return `True` (it
-    # proxies `__code__` from the wrapped async fn), which causes
-    # pytest to skip the test as an "unhandled coroutine".
-    # `functools.wraps` preserves the signature for fixture
-    # injection (via `__wrapped__`) without leaking the async
-    # nature.
-    @wraps(wrapped)
-    def wrapper(**kwargs):
-        __tracebackhide__: bool = hide_tb
-
-        # NOTE, ensure we inject any test-fn declared fixture
-        # names.
-        for kw in [
-            'reg_addr',
-            'loglevel',
-            'start_method',
-            'debug_mode',
-            'tpt_proto',
-            'timeout',
-        ]:
-            if kw in inspect.signature(wrapped).parameters:
-                assert kw in kwargs
-
-        # Extract runtime settings as locals for
-        # `open_root_actor()`; these must NOT leak into
-        # `kwargs` when the test fn doesn't declare them
-        # (the original pre-wrapt code had the same guard).
-        reg_addr = kwargs.get('reg_addr')
-        loglevel = kwargs.get('loglevel')
-        debug_mode = kwargs.get('debug_mode', False)
-        start_method = kwargs.get('start_method')
-        if platform.system() == 'Windows':
-            if start_method is None:
-                start_method = 'trio'
-            elif start_method != 'trio':
-                raise ValueError(
-                    'ONLY the `start_method="trio"` is supported on Windows.'
-                )
+        if start_method is None:
+            if platform.system() == "Windows":
+                start_method = 'trio'

-        # Open a root-actor, passing runtime-settings
-        # extracted above as closure locals, then invoke
-        # the test-fn body as the root-most task.
-        #
-        # NOTE: `kwargs` is forwarded as-is to
-        # `wrapped()` - it only contains what pytest
-        # injected based on the test fn's signature.
-        async def _main(**kwargs):
-            __tracebackhide__: bool = hide_tb
+        if 'start_method' in inspect.signature(fn).parameters:
+            # set of subprocess spawning backends
+            kwargs['start_method'] = start_method

-            with trio.fail_after(timeout):
+        if 'debug_mode' in inspect.signature(fn).parameters:
+            # set of subprocess spawning backends
+            kwargs['debug_mode'] = debug_mode
+
+
+        if kwargs:
+
+            # use explicit root actor start
+            async def _main():
                 async with tractor.open_root_actor(
-                    registry_addrs=(
-                        [reg_addr] if reg_addr else None
-                    ),
+                    # **kwargs,
+                    registry_addrs=[reg_addr] if reg_addr else None,
                     loglevel=loglevel,
                     start_method=start_method,

-                    # TODO: only enable when pytest is passed
-                    # --pdb
+                    # TODO: only enable when pytest is passed --pdb
                     debug_mode=debug_mode,

                 ):
-                    # invoke test-fn body IN THIS task
-                    await wrapped(**kwargs)
+                    await fn(*args, **kwargs)

-        # invoke runtime via a root task.
-        return trio.run(
-            partial(
-                _main,
-                **kwargs,
-            )
-        )
+            main = _main
+        else:
+            # use implicit root actor start
+            main = partial(fn, *args, **kwargs)
+        return trio.run(main)

     return wrapper
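Aside: the left-hand `tractor_test` above uses the "optional parentheses" decorator idiom; called bare it receives the function, called with keyword params it returns a `partial` of itself. A minimal generic sketch of that idiom follows, using a hypothetical `retry` decorator rather than anything from this repo:

```python
from functools import partial, wraps


def retry(wrapped=None, *, times: int = 3):
    if wrapped is None:
        # @retry(times=5) -> re-enter with the params bound
        return partial(retry, times=times)

    @wraps(wrapped)
    def wrapper(*args, **kwargs):
        last_exc = None
        for _ in range(times):
            try:
                return wrapped(*args, **kwargs)
            except Exception as exc:  # demo only; real code should narrow this
                last_exc = exc
        raise last_exc

    return wrapper


@retry                 # bare form
def sometimes_flaky() -> int:
    return 42


@retry(times=5)        # parameterized form
def very_flaky() -> int:
    return 42
```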
@@ -226,15 +175,7 @@ def pytest_addoption(

 def pytest_configure(config):
     backend = config.option.spawn_backend
-    from tractor.spawn._spawn import try_set_start_method
-    try_set_start_method(backend)
-
-    # register custom marks to avoid warnings see,
-    # https://docs.pytest.org/en/stable/how-to/writing_plugins.html#registering-custom-markers
-    config.addinivalue_line(
-        'markers',
-        'no_tpt(proto_key): test will (likely) not behave with tpt backend'
-    )
+    tractor._spawn.try_set_start_method(backend)


 @pytest.fixture(scope='session')
@@ -273,8 +214,7 @@ def tpt_protos(request) -> list[str]:

     # XXX ensure we support the protocol by name via lookup!
     for proto_key in proto_keys:
-        from tractor.discovery import _addr
-        addr_type = _addr._address_types[proto_key]
+        addr_type = tractor._addr._address_types[proto_key]
         assert addr_type.proto_key == proto_key

     yield proto_keys
@@ -285,32 +225,13 @@ def tpt_protos(request) -> list[str]:
     autouse=True,
 )
 def tpt_proto(
-    request,
     tpt_protos: list[str],
 ) -> str:
     proto_key: str = tpt_protos[0]

-    # ?TODO, but needs a fn-scoped tpt_proto fixture..
-    # @pytest.mark.no_tpt('uds')
-    # node = request.node
-    # markers = node.own_markers
-    # for mark in markers:
-    #     if (
-    #         mark.name == 'no_tpt'
-    #         and
-    #         proto_key in mark.args
-    #     ):
-    #         pytest.skip(
-    #             f'Test {node} normally fails with '
-    #             f'tpt-proto={proto_key!r}\n'
-    #         )
-
-    from tractor.runtime import _state
+    from tractor import _state
     if _state._def_tpt_proto != proto_key:
         _state._def_tpt_proto = proto_key
-        _state._runtime_vars['_enable_tpts'] = [
-            proto_key,
-        ]

     yield proto_key
@@ -342,11 +263,12 @@ def pytest_generate_tests(
         # XXX some weird windows bug with `pytest`?
         spawn_backend = 'trio'

-    # drive the valid-backend set from the canonical `Literal` so
-    # adding a new spawn backend (e.g. `'subint'`) doesn't require
-    # touching the harness.
-    from tractor.spawn._spawn import SpawnMethodKey
-    assert spawn_backend in get_args(SpawnMethodKey)
+    # TODO: maybe just use the literal `._spawn.SpawnMethodKey`?
+    assert spawn_backend in (
+        'mp_spawn',
+        'mp_forkserver',
+        'trio',
+    )

     # NOTE: used-to-be-used-to dyanmically parametrize tests for when
     # you just passed --spawn-backend=`mp` on the cli, but now we expect
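Aside: the left-hand harness validates the CLI backend against a `typing.Literal` alias via `get_args()`, so the allowed set lives in one place. A hedged sketch with a local stand-in for `SpawnMethodKey` (the real alias lives in tractor's spawn module):

```python
from typing import Literal, get_args

# local stand-in for illustration only
SpawnMethodKey = Literal[
    'mp_spawn',
    'mp_forkserver',
    'trio',
]


def check_backend(spawn_backend: str) -> None:
    # membership test driven by the Literal's args, not a hand-written tuple
    assert spawn_backend in get_args(SpawnMethodKey), (
        f'unknown spawn backend: {spawn_backend!r}'
    )


check_backend('trio')  # ok; anything else raises AssertionError
```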
Some files were not shown because too many files have changed in this diff.