Compare commits


99 Commits

Author SHA1 Message Date
Guillermo Rodriguez 053078ce8f
Fix rb non ipc case and tests in general 2025-04-22 06:25:45 -03:00
Guillermo Rodriguez 7766caf623
Detect OSError errno EBADF and re-raise as trio.BrokenResourceError on EventFD reads 2025-04-22 06:25:45 -03:00
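The EBADF-to-broken-resource translation described above can be sketched roughly like so; a stand-in exception class replaces `trio.BrokenResourceError` so the snippet runs without trio, and `read_eventfd_like()` is an illustrative name, not tractor's API:

```python
import errno
import os


class BrokenResourceError(Exception):
    '''Stand-in for `trio.BrokenResourceError` so this sketch needs no trio.'''


def read_eventfd_like(fd: int, nbytes: int = 8) -> bytes:
    '''
    Read from an fd, re-raising EBADF (the fd was closed out from
    under us) as a resource error, mirroring the commit above.

    '''
    try:
        return os.read(fd, nbytes)
    except OSError as e:
        if e.errno == errno.EBADF:
            raise BrokenResourceError(f'fd {fd} was closed') from e
        raise
```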
Guillermo Rodriguez a553446619
Pubsub topics, enc & decoders
Implicit aclose on all channels on ChannelManager aclose
Implicit nursery cancel on pubsub acms
Use long running actor portal for open_{pub,sub}_channel_at fns
Add optional encoder/decoder on pubsub
Add topic system for multiple pub or sub on same actor
Add wait fn for sub and pub channel register
2025-04-22 06:25:45 -03:00
Guillermo Rodriguez 8799cf3b78
Add optional msgpack encoder & decoder to ringbuf apis 2025-04-22 06:25:45 -03:00
Guillermo Rodriguez 86e09a80f4
Log warning instead of exception on pubsub cancelled 2025-04-22 06:25:45 -03:00
Guillermo Rodriguez 59521cd4db
Add fix for cases where sockname len > 100 2025-04-22 06:25:45 -03:00
Guillermo Rodriguez 06103d1f44
Disable parent channel append on get_peer_by_name to_scan 2025-04-22 06:25:45 -03:00
Guillermo Rodriguez 4ca1aaeaeb
Only set shield flag when trio nursery mode is used 2025-04-22 06:25:44 -03:00
Guillermo Rodriguez 9b16eeed2f
Fix chan manager close remove_channel call 2025-04-22 06:25:44 -03:00
Guillermo Rodriguez d60a49a853
Check if fdshare module is enabled on share_fds function 2025-04-22 06:25:44 -03:00
Guillermo Rodriguez f5513ba005
Adapt ringbuf pubsub to new RBToken owner system 2025-04-22 06:25:44 -03:00
Guillermo Rodriguez 39dccbdde7
Add owner semantics to RBToken
Stop exporting `_ringbuf` on `tractor.ipc`
Use absolute imports on `_ringbuf` module
Add more comments and acm helpers for ringbuf allocation functions
Create generic FD sharing actor module in `tractor.linux._fdshare`
Include original allocator actor name as `owner` in RBToken
Auto share FDs of allocated ringbufs
On `attach_ringbuf_*` functions request fds from owner
Adapt all ringbuf tests to new system
2025-04-22 06:25:44 -03:00
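An fd-sharing module like the `tractor.linux._fdshare` mentioned above presumably rides on the kernel's `SCM_RIGHTS` mechanism, which the stdlib exposes as `socket.send_fds()`/`recv_fds()` (Python 3.9+, Unix only). A minimal single-process demo of passing a pipe fd across a socketpair:

```python
import os
import socket


def share_fd_demo() -> bytes:
    '''
    Pass a pipe's write-end fd across a unix socketpair via
    SCM_RIGHTS, the same kernel mechanism an fd-sharing actor
    would use to hand out ringbuf fds to requesting peers.

    '''
    left, right = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    r, w = os.pipe()
    try:
        # ship the write-end fd (kernel dups it into the receiver)
        socket.send_fds(left, [b'fd'], [w])
        _msg, fds, _flags, _addr = socket.recv_fds(right, 1024, 1)

        # write through the *received* copy of the fd
        os.write(fds[0], b'hello')
        os.close(fds[0])
        os.close(w)
        return os.read(r, 5)
    finally:
        os.close(r)
        left.close()
        right.close()
```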
Guillermo Rodriguez 5d6fa643ba
Better APIs for ringd and pubsub
Pubsub:
Remove unnecessary ChannelManager locking mechanism
Make ChannelManager.close wait for all channel removals
Make publisher turn switch configurable with `msgs_per_turn` variable
Fix batch_size setter on publisher
Add broadcast to publisher
Add endpoints on pubsub for remote actors to dynamically add and remove channels

Ringd:
Add fifo lock and use it on methods that modify _rings state
Add comments
Break up ringd.open_ringbuf apis into attach_, open_ & maybe_open_
When attaching it's no longer a long-running context, only on opens
Adapt ringd test to new apis
2025-04-22 06:25:44 -03:00
Guillermo Rodriguez e4868ded54
Tweaks to make cancellation happen correctly on ringbuf receiver & fix test log msg 2025-04-22 06:25:44 -03:00
Guillermo Rodriguez b2f6c298f5
Refactor generate_sample_messages to be a generator and use numpy 2025-04-22 06:25:44 -03:00
Guillermo Rodriguez 171545e4fb
Add trio resource semantics to ring pubsub 2025-04-22 06:25:44 -03:00
Guillermo Rodriguez 853aa740aa
RingBufferReceiveChannel fixes for the non clean eof case, add comments 2025-04-22 06:25:44 -03:00
Guillermo Rodriguez 8e1f95881c
Add trio resource semantics to eventfd 2025-04-22 06:25:44 -03:00
Guillermo Rodriguez 1451feb159
Adhere to trio semantics on channels for closed and busy resource cases 2025-04-22 06:25:44 -03:00
Guillermo Rodriguez 3a1eda9d6d
Fix test docstring 2025-04-22 06:25:44 -03:00
Guillermo Rodriguez d942f073e0
Enable ordering assertion & simplify some parts of test 2025-04-22 06:25:43 -03:00
Guillermo Rodriguez d8d01e8b3c
Add header to generic chan orderers 2025-04-22 06:25:43 -03:00
Guillermo Rodriguez 1dfc639e54
Fully test and fix bugs on _ringbuf._pubsub
Add generic channel orderer
2025-04-22 06:25:43 -03:00
Guillermo Rodriguez bebd327023
Improve ringd ringbuf lifecycle
Unlink sock after use in fdshare
2025-04-22 06:25:43 -03:00
Guillermo Rodriguez 3568ba5d5d
Rename RingBuff -> RingBuffer
Combine RingBuffer stream and channel apis
Implement RingBufferReceiveChannel.receive_nowait
Make msg generator calculate hash
2025-04-22 06:25:43 -03:00
Guillermo Rodriguez 95ea4647cc
Woops fix old typing Self stuff 2025-04-22 06:25:43 -03:00
Guillermo Rodriguez 4385d38bc4
Add header and fix white lines 2025-04-22 06:25:43 -03:00
Guillermo Rodriguez b1e1187a19
Switch to using typing.Protocol instead of abc.ABC on ChannelManager, improve abstraction and add comments 2025-04-22 06:25:43 -03:00
Guillermo Rodriguez 4b9d6b9276
Improve error handling in fdshare functions, add comments 2025-04-22 06:25:43 -03:00
Guillermo Rodriguez 28b86cb880
Don't use relative import on ringd 2025-04-22 06:25:43 -03:00
Guillermo Rodriguez e34b6519c7
recv_fds doesn't need to be an acm 2025-04-22 06:25:43 -03:00
Guillermo Rodriguez 6646deb7f4
Add LICENSE headers and module docstring on new files 2025-04-22 06:25:43 -03:00
Guillermo Rodriguez 1bb9918e2d
Add ringd test, which also tests fd share 2025-04-22 06:25:42 -03:00
Guillermo Rodriguez 9238c6b245
Linux top-level submodule and ipc._ringbuf submodule
Added ringd actor to broker ring buf resources dynamically
Added ring pubsub based on ringd
Created tractor.linux submodule and moved eventfd stuff there
Implemented linux file descriptor ipc share async helpers
2025-04-22 06:25:42 -03:00
Guillermo Rodriguez f0af419ab2
Important RingBuffBytesSender fix on non batched mode! & downgrade nix-shell python to lowest supported 2025-04-22 06:25:42 -03:00
Guillermo Rodriguez 3b5ade7118
Catch trio cancellation on RingBuffReceiver bg eof listener task, add batched mode to RingBuffBytesSender 2025-04-22 06:25:42 -03:00
Guillermo Rodriguez ce09c70a74
Add direct read method on EventFD
Type hint all ctx managers in _ringbuf.py
Remove unnecessary send lock on ring chan sender
Handle EOF on ring chan receiver
Rename ringbuf tests to make it less redundant
2025-04-22 06:25:42 -03:00
Guillermo Rodriguez 9f788e07d4
Add direct ctx managers for RB channels 2025-04-22 06:25:42 -03:00
Guillermo Rodriguez 69ceee09f2
Improve test_ringbuf test, drop MsgTransport ring buf impl for now in favour of a trio.abc.Channel[bytes] impl, add docstrings 2025-04-22 06:25:42 -03:00
Guillermo Rodriguez a7df2132fa
Switch `tractor.ipc.MsgTransport.stream` type to `trio.abc.Stream`
Add EOF signaling mechanism
Support proper `receive_some` end of stream semantics
Add StapledStream non-ipc test
Create MsgpackRBStream similar to MsgpackTCPStream for buffered whole-msg reads
Add EventFD.read cancellation on EventFD.close mechanism using cancel scope
Add test for eventfd cancellation
Improve and add docstrings
2025-04-22 06:25:42 -03:00
Guillermo Rodriguez dd1c0fa51d
Better encapsulate RingBuff ctx management methods and support non ipc usage
Add trio.StrictFIFOLock on sender.send_all
Support max_bytes argument on receive_some, keep track of write_ptr on receiver
Add max_bytes receive test test_ringbuf_max_bytes
Add docstrings to all ringbuf tests
Remove EFD_NONBLOCK support, not necessary anymore since we can use abandon_on_cancel=True on trio.to_thread.run_sync
Close eventfd's after usage on open_ringbuf
2025-04-22 06:25:42 -03:00
Guillermo Rodriguez 6ee5e3e077
Refinements, fix dec_hook builtins and same type bug 2025-04-22 06:24:01 -03:00
Guillermo Rodriguez 470acd98cc
Fix typing on mk_boxed_ext_structs 2025-04-22 05:04:14 -03:00
Guillermo Rodriguez bb37c31a70
Update CI to use uv 2025-04-22 05:00:13 -03:00
Guillermo Rodriguez 99c383d3c1
Change test structs name to not get conflicts with pytest 2025-04-22 04:37:08 -03:00
Guillermo Rodriguez 51746a71ac
Re-add boxed struct type system on _codec & create enc/dec hook auto factory 2025-04-22 04:30:30 -03:00
Tyler Goodlet 112ed27cda Move peer-tracking attrs from `Actor` -> `IPCServer`
Namely transferring the `Actor` peer-`Channel` tracking attrs,
- `._peers` which maps the uids to client channels (with duplicates
  apparently..)
- the `._peer_connected: dict[tuple[str, str], trio.Event]` child-peer
  syncing table mostly used by parent actors to wait on sub's to connect
  back during spawn.
- the `._no_more_peers = trio.Event()` level triggered state signal.

Further we move over with some minor reworks,
- `.wait_for_peer()` verbatim (adjusting all dependants).
- factor the no-more-peers shielded wait branch-block out of
  the end of `async_main()` into 2 new server meths,
  * `.has_peers()` with optional chan-connected checking flag.
  * `.wait_for_no_more_peers()` which *just* does the
    maybe-shielded `._no_more_peers.wait()`
2025-04-11 18:11:35 -04:00
Tyler Goodlet 42cf9e11a4 Mv `Actor._stream_handler()` to `.ipc._server` func
Call it `handle_stream_from_peer()` and bind in the `actor: Actor` via
a `handler=partial()` to `trio.serve_listeners()`.

With this (minus the `Actor._peers/._peer_connected/._no_more_peers`
attrs ofc) we get nearly full separation of IPC-connection-processing
(concerns) from `Actor` state. Thus it's a first look at modularizing
the low-level runtime into isolated subsystems which will hopefully
improve the entire code base's grok-ability and ease any new feature
design discussions especially pertaining to introducing and/or
composing-together any new transport protocols.
2025-04-11 14:51:52 -04:00
Tyler Goodlet 1ccb14455d Passthrough `_pause()` kwargs from `_maybe_enter_pm()` 2025-04-11 01:16:46 -04:00
Tyler Goodlet d534f1491b Fix assert on `.devx.maybe_open_crash_handler()` delivered `bxerr` 2025-04-11 01:16:12 -04:00
Tyler Goodlet 0f8b299b4f Improve bit of tooling for `test_resource_cache.py`
Namely while what I was actually trying to solve was why
`TransportClosed` was getting raised from `Portal.cancel_actor()` but
still useful edge case auditing either way. Also opts into the
`debug_mode` fixture with appropriate timeout adjustment B)
2025-04-11 01:12:34 -04:00
Tyler Goodlet 9807318e3d Never hide non-[msgtype/tpt-closed] error tbs in `Channel.send()` 2025-04-11 00:00:12 -04:00
Tyler Goodlet b700d90e09 Set `_state._def_tpt_proto` in `tpt_proto` fixture
Such that the global test-session always (and only) runs against the CLI
specified `--tpt-proto=` transport protocol.
2025-04-10 23:56:47 -04:00
Tyler Goodlet 6ff3b6c757 Use `current_ipc_protos()` as the `enable_transports`-default-when-`None`
Also ensure we assertion-error whenever the list is > 1 entry for now!
2025-04-10 23:55:47 -04:00
Tyler Goodlet 8bda59c23d Add `_state.current_ipc_protos()`
For now just wrapping wtv the `._def_tpt_proto` per-actor setting is.
2025-04-10 23:53:44 -04:00
Tyler Goodlet 1628fd1d7b Another `tn` eg-loosify inside `ActorNursery.cancel()`.. 2025-04-10 23:53:35 -04:00
Tyler Goodlet 5f74ce9a95 Absorb `TransportClosed` in `Portal.cancel_actor()`
Just like we *were* for the `trio`-resource-errors it normally wraps
since we now also do the same wrapping in `MsgpackTransport.send()`
and we don't normally care to raise tpt-closure-errors on graceful actor
cancel requests.

Also, warn-report any non-tpt-closed low-level `trio` errors we haven't
yet re-wrapped (likely bc they haven't shown up).
2025-04-10 23:49:36 -04:00
Tyler Goodlet 477343af53 Add `TransportClosed.from_src_exc()`
Such that re-wrapping/raising from a low-level `trio` resource error is
simpler and includes the `.src_exc` in the `__repr__()` and
`.message/.args` rendered at higher layers (like from `Channel` and
`._rpc` machinery).

Impl deats,
- mainly leverages packing in a new cls-method `.repr_src_exc() -> str:`
  repr of the underlying error before an optional `body: str` all as
  handled by the previously augmented `.pformat()`'s delegation to
  `pformat_exc()`.
- change `.src_exc` to be a property around a renamed `._src_exc`.

But wait, why?
- use it inside `MsgpackTransport.send()` to rewrap any
  `trio.BrokenResourceError`s so we always see the underlying
  `trio`-src-exc just like in the `.recv()._iter_packets()` handlers.
2025-04-10 23:37:16 -04:00
Tyler Goodlet c208bcbb1b Factor actor-embedded IPC-tpt-server to `ipc` subsys
Primarily moving the `Actor._serve_forever()`-task-as-method and
supporting actor-instance attributes to a new `.ipc._server` sub-mod
which now encapsulates,
- the coupling of various `trio.Nursery`s (and their independent lifetime mgmt)
  to different `trio.serve_listeners()` tasks and `SocketStream`
  handler scopes.
- `Address` and `SocketListener` mgmt and tracking through the idea of
  an "IPC endpoint": each "bound-and-active instance" of a served-listener
  for some (varied transport protocol's socket) address.
- start and shutdown of the entire server's lifetime via an `@acm`.
- delegation of starting/stopping tpt-protocol-specific `trio.abc.Listener`s
  to the corresponding `.ipc._<proto_key>` sub-module (newly defined
  mod-top-level instead of `Address` method) `start/close_listener()`
  funcs.

Impl details of the `.ipc._server` sub-sys,
- add new `IPCServer`, allocated with `open_ipc_server()`, and which
  encapsulates starting multiple-transport-proto-`trio.abc.Listener`s
  from an input set of `._addr.Address`s using,
  |_`IPCServer.listen_on()` which internally spawns tasks that delegate to a new
    `_serve_ipc_eps()`, a rework of what was (effectively)
    `Actor._serve_forever()` and which now,
    * allocates a new `IPCEndpoint`-struct (see below) for each
      address-listener pair alongside the specified
      listener-serving/stream-handling `trio.Nursery`s provided by the
      caller.
    * starts and stops each transport (socket's) listener by calling
      `IPCEndpoint.start/close_listener()` which in turn delegates to
      the underlying `inspect.getmodule(IPCEndpoint.addr)` backend tpt
      module's equivalent impl.
    * tracks all created endpoints in a `._endpoints: list[IPCEndpoint]`
      which is further exposed through public properties for
      introspection of served transport-protocols and their addresses.
  |_`IPCServer._[parent/stream_handler]_tn: Nursery`s which are either
     allocated (in which case, as the same instance) or provided by the
     caller of `open_ipc_server()` such that the same nursery-cancel-scope
     controls offered by `trio.serve_listeners(handler_nursery=)` are
     offered where the `._parent_tn` is used to spawn `_serve_ipc_eps()`
     tasks, and `._stream_handler_tn` is passed verbatim as `handler_nursery`.
- a new `IPCEndpoint`-struct (as mentioned) which wraps each
  transport-proto's address + listener + allocated-supervising-nursery
  to encapsulate the "lifetime of a server IPC endpoint" such that
  eventually we can track and manage per-protocol/address/`.listen_on()`-call
  scoped starts/stops/restarts for the purposes of filtering/banning
  peer traffic.
  |_ also included is an unused `.peer_tpts` table which we can
    hopefully use to replace `Actor._peers` in a `Channel`-tracking
    transport-proto-aware way!

Surrounding changes to `.ipc.*` primitives to match,
- make `[TCP|UDS]Address` types `msgspec.Struct(frozen=True)` and thus
  drop any-and-all `addr._host =` style mutation throughout.
  |_ as such also drop their `.__init__()` and `.__eq__()` meths.
  |_ UDS tweaks to field names and thus `.__repr__()`.
- move `[TCP|UDS]Address.[start/close]_listener()` meths to be mod-level
  equiv `start|close_listener()` funcs.
- just hard code the `.ipc._types._key_to_transport/._addr_to_transport`
  table entries instead of all the prior fancy dynamic class property
  reading stuff (remember, "explicit is better than implicit").

Modified in `._runtime.Actor` internals,
- drop the `._serve_forever()` and `.cancel_server()`, methods and
  `._server_down` waiting logic from `.cancel_soon()`
- add `.[_]ipc_server` which is opened just after the `._service_n` and
  delegate to it for any equivalent publicly exposed instance
  attributes/properties.
2025-04-10 23:18:32 -04:00
Tyler Goodlet c9e9a3949f Move concrete `Address`es to each tpt module
That is moving from `._addr`,
- `TCPAddress` to `.ipc._tcp`
- `UDSAddress` to `.ipc._uds`

Obviously this requires adjusting a buncha stuff in `._addr` to avoid
import cycles (the original reason the module was not also included in
the new `.ipc` subpkg) including,

- avoiding "unnecessary" imports of `[Unwrapped]Address` in various modules.
  * since `Address` is a protocol and the main point is that it **does
    not need to be inherited** per
    (https://typing.python.org/en/latest/spec/protocol.html#terminology)
    thus I removed the need for it in both transport submods.
  * and `UnwrappedAddress` is a type alias for tuples.. so we don't
    really always need to be importing it since it also kinda obfuscates
    what the underlying pairs are.
- not exporting everything in submods at the `.ipc` top level and
  importing from specific submods by default.
- only importing various types under a `if typing.TYPE_CHECKING:` guard
  as needed.
2025-04-08 10:09:52 -04:00
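The "does not need to be inherited" point is the crux of `typing.Protocol` structural typing; in miniature (illustrative member names, not tractor's real `Address` fields):

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class Address(Protocol):
    '''
    Structural interface: any class exposing these members "is" an
    Address, no inheritance required (sketch members only, not the
    real protocol's).

    '''
    proto_key: str

    def unwrap(self) -> tuple: ...


class TCPAddress:  # note: does NOT inherit from `Address`
    proto_key = 'tcp'

    def __init__(self, host: str, port: int) -> None:
        self.host, self.port = host, port

    def unwrap(self) -> tuple:
        # the "unwrapped address" is just a plain tuple
        return (self.host, self.port)
```

Since `Address` is `@runtime_checkable`, `isinstance()` checks pass purely on member presence.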
Tyler Goodlet 8fd7d1cec4 Add API-modernize-todo on `experimental._pubsub.fan_out_to_ctxs` 2025-04-06 22:06:42 -04:00
Tyler Goodlet 0cb011e883 Skip the ringbuf test mod for now since data-gen is a bit "heavy/laggy" atm 2025-04-06 22:06:42 -04:00
Tyler Goodlet 74df5034c0 Improve `TransportClosed.__repr__()`, add `src_exc`
By borrowing from the implementation of `RemoteActorError.pformat()`
which is now factored into a new `.devx.pformat_exc()` and re-used for
both error types while maintaining the same func-sig. Obviously delegate
`RemoteActorError.pformat()` to the new helper accordingly and keeping
the prior `body` generation from `.devx.pformat_boxed_tb()` as before.

The new helper allows for,
- passing any of a `header|message|body: str` which are all combined in
  that order in the final output.
- getting the `exc.message` as the default `message` part.
- generating an objecty-looking "type-name" header to be rendered by
  default when `header` is not overridden.
- "first-line-of `message`" processing which we split-off and then
  re-inject as a `f'<{type(exc).__name__}( {first} )>'` top line header.
- an optional `tail: str = '>'` to "close the object"-look only added
  when `with_type_header: bool = True`.

Adjustments to `TransportClosed` around this include,
- replacing the init `cause` arg for a `src_exc` which is now always
  assigned to a same named instance var.
- displaying that new `.src_exc` in the `body: str` arg to the
  `.devx.pformat.pformat_exc()` call so you can always see the
  underlying (normally `trio`) source error.
- just make it inherit from `Exception` not `trio.BrokenResourceError`
  to avoid handlers catching `TransportClosed` as the former
  particularly in testing when we sometimes want to distinguish them.
2025-04-06 22:06:42 -04:00
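A loose approximation of the `header|message|body` combiner described above (same combining order, default-`message`-from-exc, first-line split-off and `<Type( first )` header idea; not the exact `.devx.pformat_exc()` signature):

```python
def pformat_exc(
    exc: Exception,
    header: str = '',
    message: str = '',
    body: str = '',
    with_type_header: bool = True,
) -> str:
    '''
    Sketch of the helper shape from the commit above: combine
    `header|message|body` in that order, defaulting `message` to the
    exc's first arg (a rough stand-in, not tractor's real helper).

    '''
    message = message or (str(exc.args[0]) if exc.args else '')
    if with_type_header and not header:
        # split off the first line and re-inject it as an
        # objecty-looking "type-name" header.
        first, _, message = message.partition('\n')
        header = f'<{type(exc).__name__}( {first} )'

    # optional tail to "close the object"-look
    tail = '>' if with_type_header else ''
    return '\n'.join(
        part for part in (header, message, body, tail) if part
    )
```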
Tyler Goodlet 692bd0edf6 Handle unconsidered fault-edge cases for UDS
In `tests/test_advanced_faults.py` that is.
Since instead of zero-responses like we'd expect from a network-socket
we actually can get a few differences from the OS when "everything IPC
is known"

XD

Namely it's about underlying `trio` exceptions versus how we wrap them
and how we expect to box them. A `TransportClosed` boxing improvement
is coming in follow up btw to make this all work!

B)
2025-04-06 22:06:42 -04:00
Tyler Goodlet c21b9cdf57 Woops, ensure we use `global` before setting `daemon()` fixture spawn delay.. 2025-04-06 22:06:42 -04:00
Tyler Goodlet 0e25c16572 Support multiple IPC transports in test harness!
Via a new accumulative `--tpt-proto` arg you can select which
`tpt_protos: list[str]`-fixture protocol keys will be delivered to
opting in tests!

B)

Also includes,
- CLI quote handling/stripping.
- default of 'tcp'.
- only support one selection per session at the moment (until we figure
  out how we want to support multiples, either simultaneously or
  sequentially).
- draft a (masked) dynamic-`metafunc` parametrization in the
  `pytest_generate_tests()` hook.
- first proven and working use in the `test_advanced_faults`-suite (and
  thus its underlying
  `examples/advanced_faults/ipc_failure_during_stream.py` script)!
 |_ actually needed this to prove that the suite only has 2 failures on
    'uds' seemingly due to low-level `trio` error semantics translation
    differences to do with calling `socket.close()`..

On a very nearly related topic,
- draft an (also commented out) `set_script_runtime_args()` fixture idea
  for a std way of `partial`-ling in runtime args to `examples/`
  scripts-as-modules defining a `main()` which would proxy to
  `tractor.open_nursery()`.
2025-04-06 22:06:42 -04:00
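The accumulative flag, quote-stripping, 'tcp' default and one-selection-per-session assertion can be mimicked with stdlib `argparse` (the real thing is a pytest `addoption`; this only shows the pattern):

```python
import argparse


def parse_tpt_protos(argv: list[str]) -> list[str]:
    '''
    Illustrative stand-in for the accumulative `--tpt-proto` CLI
    option described above (not the actual pytest-hook impl).

    '''
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--tpt-proto',
        action='append',  # accumulative: may be passed multiple times
        dest='tpt_protos',
        default=[],
    )
    opts = parser.parse_args(argv)

    # strip shell-quote residue and default to 'tcp'
    protos = [p.strip('"\'') for p in opts.tpt_protos] or ['tcp']

    # only one selection per session is supported at the moment
    assert len(protos) == 1, (
        'only one --tpt-proto per test session (for now)'
    )
    return protos
```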
Tyler Goodlet 1d4513eb5d Unwrap `UDSAddress` as `tuple[str, str]`, i.e. sin pid
Since in hindsight the real analog of a net-proto's "bindspace"
(normally its routing layer's addresses-port-set) is more akin to the
"location in the file-system" for a UDS socket file (aka the file's
parent directory), which determines whether or not the "port" (aka its
file-name) collides with any other.

So the `._filedir: Path` is like the allocated "address" and,
the `._filename: Path|str` is basically the "port",

at least in my mind.. Bp

Thinking about fs dirs like a "host address" means you can get
essentially the same benefits/behaviour of say an (ip)
addresses-port-space but using the (current process-namespace's)
filesys-tree. Note that for UDS sockets in particular the
network-namespace is what would normally isolate so called "abstract
sockets" (i.e. UDS sockets that do NOT use file-paths, by setting
`struct sockaddr_un.sun_path = 'abstract'`, see `man unix`); using directories is
even easier and definitely more explicit/readable/immediately-obvious as
a human-user.

As such this reworks all the necessary `UDSAddress` meths,
- `.unwrap()` now returns a `tuple(str(._filedir), str(._filename))`,
- `wrap_address()` now matches UDS on a 2nd tuple `str()` element,
- `.get_root()` no longer passes `maybe_pid`.

AND adjusts `MsgpackUDSStream` to,
- use the new `unwrap_sockpath()` on the `socket.get[sock/peer]name()`
  output before passing directly as `UDSAddress.__init__(filedir, filename)`
  instead of via `.from_addr()`.
- also pass `maybe_pid`s to init since no longer included in the
  unwrapped-type form.
2025-04-06 22:06:42 -04:00
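The described `unwrap_sockpath()` split is small enough to sketch directly; assuming (per the commit text) it simply splits an absolute socket-file path into its dir and file-name parts, the "bindspace" and "port" analogs:

```python
from pathlib import Path


def unwrap_sockpath(sockpath: Path) -> tuple[Path, Path]:
    '''
    Split an abs socket-file path into `(filedir, filename)`: the
    2-tuple "host-like dir / port-like file-name" form described
    above (a sketch of the helper's presumed behaviour).

    '''
    return sockpath.parent, Path(sockpath.name)
```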
Tyler Goodlet 3d3a1959ed s/`._addr.preferred_transport`/`_state._def_tpt_proto`
Such that the "global-ish" setting (actor-local) is managed with the
others per actor-process and type it as a `Literal['tcp', 'uds']` of the
currently support protocol keys.

Here obvi `_tpt` is some kinda shorthand for "transport" and `_proto` is
for "protocol" Bp

Change imports and refs in all dependent modules.

Oh right, and disable UDS in `wrap_address()` for the moment while
i figure out how to avoid the unwrapped type collision..
2025-04-06 22:06:42 -04:00
Tyler Goodlet 9e812d7793 Add `Arbiter.is_registry()` in prep for proper `.discovery._registry` 2025-04-06 22:06:42 -04:00
Tyler Goodlet 789bb7145b Repair weird spawn test, start `test_root_runtime`
There was a very strange legacy test
`test_spawning.test_local_arbiter_subactor_global_state` which was
causing unforeseen hangs/errors on the UDS tpt and looking deeper this
test was already doing root-actor things that should never have been
valid XD

So rework that test to properly demonstrate something of value
(i guess..) and add a new suite which start more rigorously auditing our
`open_root_actor()` permitted usage.

For the old test,
- since the main point of this test seemed to be the ability to invoke
  the same function in both the parent and child actor (using the very
  legacy `ActorNursery.run_in_actor()`.. due to be deprecated) rename it
  to `test_run_in_actor_same_func_in_child`,
- don't re-enter `.open_root_actor()` since that's invalid usage (tested
  in new suite see below),
- adjust some `spawn()` arg/var naming and ensure we only return in the
  child.

For the new suite add tests for,
- ensuring the implicit `open_root_actor()` call under `open_nursery()`.
- double open of `open_root_actor()` from within the same process tree
  both from a root and sub.

Intro some new `_exceptions` used in the new suite,
- a top level `RuntimeFailure` for generically expressing faults not of
  our own doing that prevent successful operation; this is what we now
  (changed in this commit) raise on attempts to open a 2nd root.
- mk `ActorFailure` derive from the former; it's already used from
  `._spawn` when subprocs fail to boot.
2025-04-06 22:06:42 -04:00
Tyler Goodlet b05c5b6c50 Some more log message tweaks
- aggregate the `MsgStream.aclose()` "reader tasks" stats content into a
  common `message: str` before emit.
- tweak an `_rpc.process_messages()` emit per new `Channel.__repr__()`.
2025-04-06 22:06:42 -04:00
Tyler Goodlet f6a4a0818f Change some low-hanging `.uid`s to `.aid`
Throughout `_context` and `_spawn` where it causes no big disruption.
Still lots to work out for things like how to pass `--uid
<tuple-as-str>` to spawned subactors and whether we want a diff name for
the minimum `tuple` required to distinguish a subactor pre-process-ID
allocation by the OS.
2025-04-06 22:06:42 -04:00
Tyler Goodlet a045c78e4d Mv to `Channel._do_handshake()` in `open_portal()`
As per the method migration in the last commit. Also adjust all `.uid`
usage to the new `.aid`.
2025-04-06 22:06:42 -04:00
Tyler Goodlet c85606075d Mv `Actor._do_handshake()` to `Channel`, add `.aid`
Finally.. i've been meaning todo this for ages since the
actor-id-swap-as-handshake is better layered as part of the IPC msg-ing
machinery and then let's us encapsulate the connection-time-assignment
of a remote peer's `Aid` as a new `Channel.aid: Aid`. For now we
continue to offer the `.uid: tuple[str, str]` attr (by delegating to the
`.uid` field) since there's still a few things relying on it in the
runtime and ctx layers

Nice bonuses from this,
- it's very easy to get the peer's `Aid.pid: int` from anywhere in an
  IPC ctx by just reading it from the chan.
- we aren't saving more than the wire struct-msg received.

Also add deprecation warnings around usage to get us moving on porting
the rest of consuming runtime code to the new attr!
2025-04-06 22:06:42 -04:00
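The deprecate-but-keep-working `.uid` pattern can be sketched with a stand-in class (purely illustrative; not tractor's real `Channel`/`Aid` types):

```python
import warnings


class Channelish:
    '''
    Sketch of the deprecation shim from the commit above: `.aid` is
    the new actor-id record while `.uid` keeps working but warns.

    '''
    def __init__(self, name: str, uuid: str, pid: int) -> None:
        # a dict standing in for the real `Aid` wire-struct
        self.aid = {'name': name, 'uuid': uuid, 'pid': pid}

    @property
    def uid(self) -> tuple[str, str]:
        warnings.warn(
            '`.uid` is deprecated, use `.aid` instead!',
            DeprecationWarning,
            stacklevel=2,
        )
        return (self.aid['name'], self.aid['uuid'])
```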
Tyler Goodlet 7d200223fa UDS: translate file dne to connection-error
For the case where there's clearly no socket file created/bound
obviously the `trio.socket.connect()` call will raise
`FileNotFoundError`, so just translate this to
a builtin-`ConnectionError` at the transport layer so we can report the
guilty `UDSAddress`.
2025-04-06 22:06:42 -04:00
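A sketch of that translation at the transport layer (hypothetical helper name; tractor's real code wraps `trio.socket`, this uses the blocking stdlib socket to stay self-contained):

```python
import socket


def connect_uds(sockpath: str) -> socket.socket:
    '''
    Connect to a UDS socket file, translating the raw
    `FileNotFoundError` from a missing/unbound path into a builtin
    `ConnectionError` that names the guilty path, per the commit
    above.

    '''
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        sock.connect(sockpath)
    except FileNotFoundError as fnfe:
        sock.close()
        raise ConnectionError(
            f'no socket file bound at {sockpath!r}?'
        ) from fnfe
    return sock
```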
Tyler Goodlet 4244db2f08 More `._addr` boxing refinements
The more I think about it, it seems @guille's orig approach of
unwrapping UDS socket-file addresses to strings (or `Path`) is making
the most sense. I had originally thought that pairing it with the
listening side's pid would add clarity (and it definitely does for
introspection/debug/logging) but since we don't end up passing that pid
to the eventual `.connect()` call on the client side, it doesn't make
much sense to wrap it for the wire just to discard.. Further, the
`tuple[str, int]` makes `wrap_address()` break for TCP since it will
always match on uds first.

So, on that note this patch refines a few things in prep for going back
to that original `UnwrappedAddress` as `str` type though longer run
i think the more "builtin approach" would be to add `msgspec` codec
hooks for these types to avoid all the `.wrap()`/`.unwrap()` calls
throughout the runtime.

Down-low deats,
- add `wrap_address()` doc string, detailed (todo) comments and handle
  the `[None, None]` case that can come directly from
  `._state._runtime_vars['_root_mailbox']`.
- buncha adjustments to `UDSAddress`,
  - add a `filedir`, chng `filepath` -> `filename` and mk `maybe_pid` optional.
  - the intent is for `filedir` to act as the equivalent of the host part in a network proto's
    socket address and when it's null use the `.def_bindspace = get_rt_dir()`.
  - always ensure the `filedir / filename` is an absolute path and
    expose it as a new `.sockpath: Path` property.
  - mk `.is_valid` actually verify the `.sockpath` is in the valid
    `.bindspace`: namely just checking it's in the expected dir.
  - add pedantic `match:`ing to `.from_addr()` such that we error on
    unexpected `type(addr)` inputs and otherwise parse any `sockpath:
    Path` inputs using a new `unwrap_sockpath()` which simply splits an
    abs file path to dir, file-name parts.
  - `.unwrap()` now just `str`-ifies the `.sockpath: Path`
  - adjust `.open/close_listener()` to use `.sockpath`.
2025-04-06 22:06:42 -04:00
Tyler Goodlet 52901a8e7d Move `DebugRequestError` to `._exceptions` 2025-04-06 22:06:42 -04:00
Tyler Goodlet eb11235ec8 Start protoyping multi-transport testing
Such that we can run (opting-in) tests on both TCP and UDS backends and
ensure the `reg_addr` fixture and various timeouts are adjusted
accordingly.

Impl deats,
- add a new `tpt_proto` CLI option and fixture to allow choosing which
  "transport protocol" will be used in the test suites (either globally
  or contextually).
- rm `_reg_addr` instead opting for a `_rando_port` which will only be
  used for `reg_addr`s which are net-tpt-protos.
- rejig `reg_addr` fixture to set an ideally session-unique `testrun_reg_addr`
  based on the `tpt_proto` setting making appropriate calls to `._addr`
  APIs as needed.
- refine `daemon` fixture a bit with typing, `tpt_proto` timings, and
  stderr capture.
- in `test_discovery` do a ton of type-annots, add `debug_mode` fixture
  opt ins, augment `spawn_and_check_registry()` with `psutil.Process`
  passing for introspection (when things go wrong..).
2025-04-06 22:06:42 -04:00
Tyler Goodlet c8d164b211 Add `psutil` to `--dev` / testing deps 2025-04-06 22:06:42 -04:00
Tyler Goodlet 00b5bb777d Factor `breakpoint()` blocking into `@acm`
Call it `maybe_block_bp()` and wrap the `open_root_actor()` body with
it. Main reason is to guarantee we can bp inside actor runtime bootup as
needed when debugging internals! Prolly should factor this to another
module tho?

ALSO, ensure we RTE on recurrent entries to `open_root_actor()` from
within an existing tree! There was actually a `test_spawning` test somehow
getting away with this!? Should never be possible or allowed!
2025-04-06 22:06:42 -04:00
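One plausible shape for such a guard, here as a sync `@contextmanager` that no-ops `builtins.breakpoint` when blocking is requested and restores it after (purely illustrative; tractor's actual impl differs):

```python
import builtins
from contextlib import contextmanager


@contextmanager
def maybe_block_bp(allow_bp: bool):
    '''
    Hypothetical sketch of the cm named above: when breakpoints are
    not allowed, swap `builtins.breakpoint` for a no-op for the
    duration of the block, then restore the original hook.

    '''
    orig_bp = builtins.breakpoint
    if not allow_bp:
        # block any `breakpoint()` call from dropping to a debugger
        builtins.breakpoint = lambda *a, **kw: None
    try:
        yield
    finally:
        builtins.breakpoint = orig_bp
```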
Tyler Goodlet 674a33e3b1 Add an `Actor.pformat()`
And map `.__repr__/__str__` to it and add various new fields to fill it
out,
- drop `self.uid` as var and instead add `Actor._aid: Aid` and proxy to
  it for the various `.name/.uid/.pid` properties as well as a new
  `.aid` field.
 |_ the `Aid.pid` addition is also included.

Other improvements,
- flip to a sync call to `Address.close_listener()`.
- track the `async_main()` parent task as `Actor._task`.
- add exception logging around failure to bind due to already-in-use
  when calling `addr.open_listener()` in `._serve_forever()`; sometimes
  the error might be overridden by something else during the
  runtime-failure unwind..
2025-04-06 22:06:42 -04:00
Tyler Goodlet a49bfddf32 Add a `MsgpackTransport.pformat()`
And map `.__repr__/__str__` to it. Also adjust to new
`Address.proto_key` and add a #TODO for a `.get_peers()`.
2025-04-06 22:06:42 -04:00
Tyler Goodlet e025959d60 Even more `tractor._addr.Address` simplifying
Namely reducing the duplication of class-fields and `TypeVar`s used
for parametrizing the `Address` protocol type,
- drop all of the `TypeVar` types and just stick with all concrete addrs
  types inheriting from `Address` only.
- rename `Address.name_key` -> `.proto_key`.
- rename `Address.address_type` -> `.unwrapped_type`
- rename `.namespace` -> `.bindspace` to better reflect that this "part"
  of the address represents the possible "space for binding endpoints".
 |_ also linux already uses "namespace" to mean the `netns` and i'd
   prefer to stick with their semantics for that.
- add `TCPAddress/UDSAddress.def_bindspace` values.
- drop commented `.open_stream()` method; never used.
- simplify `UnwrappedAddress` to just a `tuple` of union types.
- add logging to `UDSAddress.open_listener()` for now.
- adjust `tractor.ipc/_uds/tcp` transport to use new addr field names.
2025-04-06 22:06:42 -04:00
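The renamed class-field scheme can be pictured with a toy `Address` hierarchy; this is only a sketch inferred from the rename bullets above (the real protocol type has more surface area):

```python
from abc import ABC, abstractmethod


class Address(ABC):
    # the transport-protocol key, e.g. 'tcp' or 'uds'
    # (formerly `.name_key`)
    proto_key: str
    # per-protocol default "space for binding endpoints"
    # (formerly `.namespace`)
    def_bindspace: str

    @property
    @abstractmethod
    def bindspace(self) -> str:
        ...


class TCPAddress(Address):
    proto_key = 'tcp'
    def_bindspace = '127.0.0.1'

    def __init__(self, host: str, port: int):
        self._host, self._port = host, port

    @property
    def bindspace(self) -> str:
        # for TCP the bindspace is just the host/interface
        return self._host


addr = TCPAddress(TCPAddress.def_bindspace, 1616)
assert addr.proto_key == 'tcp'
```

Concrete addr types inherit from `Address` directly, so no `TypeVar` parametrization is needed to look them up by `proto_key`.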
Tyler Goodlet d0414709f2 Handle broken-pipes from `MsgpackTransport.send()`
Much like we already do in the `._iter_packets()` async-generator which
delivers to `.recv()` and `async for`, handle the `'[Errno 32] Broken
pipe'` case that can show up with unix-domain-socket usage.

Seems like the cause is due to how fast the socket can be torn down
during a registry addr channel ping where,
- the sending side can break the connection faster than the pong side
  can prep its handshake msg,
- the pong side tries to send its handshake pkt via
  `.SocketStream.send_all()` after the breakage and then raises
  `trio.BrokenResourceError`.
2025-04-06 22:06:41 -04:00
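The error-translation pattern being described (catch the raw `[Errno 32]` OS error at send time and re-raise a higher-level transport error) can be sketched with plain pipes; `TransportClosed` and `send_all` here are hypothetical stand-ins, not tractor's actual API:

```python
import errno
import os


class TransportClosed(Exception):
    '''Hypothetical stand-in for a transport-closed error type.'''


def send_all(fd: int, data: bytes) -> None:
    # translate the raw `[Errno 32] Broken pipe` OSError into
    # a higher-level transport error at the send() boundary
    try:
        os.write(fd, data)
    except BrokenPipeError as ose:
        assert ose.errno == errno.EPIPE  # "[Errno 32] Broken pipe"
        raise TransportClosed(
            'peer hung up before we could send'
        ) from ose


# demo: the read-side is torn down before we get to send,
# mimicking the fast socket-teardown race described above
r, w = os.pipe()
os.close(r)
try:
    send_all(w, b'handshake-pkt')
except TransportClosed as err:
    # the low-level cause is chained for debugging
    assert isinstance(err.__cause__, BrokenPipeError)
os.close(w)
```

Chaining via `raise ... from ose` keeps the OS-level cause inspectable while callers only need to handle one transport-layer exception type.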
Tyler Goodlet b958590212 Emphasize internal error block header-comment a bit 2025-04-06 22:06:41 -04:00
Tyler Goodlet 8884ed05f0 Bit of multi-line styling for `LocalPortal` 2025-04-06 22:06:41 -04:00
Tyler Goodlet a403958c2c Adjust `._child` instantiation of `Actor` to use newly named `uuid` arg 2025-04-06 22:06:41 -04:00
Tyler Goodlet 009cadf28e Add `bidict` pkg as dep since used in `._addr` for now 2025-04-06 22:06:41 -04:00
Tyler Goodlet 3cb8f9242d Adjust lowlevel-tb hiding logic for `MsgStream`
Such that whenever the `self._ctx.chan._exc is trans_err` we suppress.
I.e. when the `Channel._exc: Exception|None` error **is the same as**
set by the `._rpc.process_messages()` loop (that is, set to the
underlying transport layer error), we suppress the lowlevel tb,
otherwise we deliver the full tb since likely something at the lowlevel
that we aren't detecting changed/signalled/is-relevant!
2025-04-06 22:06:41 -04:00
Tyler Goodlet 544b5bdd9c Slight typing and multi-line styling tweaks in `.ipc` sugpkg 2025-04-06 22:06:38 -04:00
Tyler Goodlet 47d66e6c0b Add a big boi `Channel.pformat()/__repr__()`
Much like how `Context` has been implemented, try to give tons of high
level details on all the lower level encapsulated primitives, namely the
`.msgstream/.transport` and any useful runtime state.

B)

Impl deats,
- adjust `.from_addr()` to only call `._addr.wrap_address()` when we
  detect `addr` is unwrapped.
- add another `log.runtime()` using the new `.__repr__()` in
  `Channel.from_addr()`.
- change to `UnwrappedAddress` as in prior commits.
2025-04-06 22:03:07 -04:00
Tyler Goodlet ddeab1355a Allocate bind-addrs in subactors
Previously whenever an `ActorNursery.start_actor()` call did not receive
a `bind_addrs` arg we would allocate the default `(localhost, 0)` pairs
in the parent; for UDS this obviously won't work, nor is it ideal bc it's
nicer to have the actor be a socket server (who calls
`Address.open_listener()`) define the socket-file-name containing their
unique ID info such as pid, actor-uuid etc.

As such this moves "random" generation of server addresses to the
child-side of a subactor's spawn-sequence when it's sans-`bind_addrs`;
i.e. we do the allocation of the `Address.get_random()` addrs inside
`._runtime.async_main()` instead of `Portal.start_actor()` and **only
when** `accept_addrs`/`bind_addrs` was **not provided by the spawning
parent**.

Further this patch gets way more rigorous about the `SpawnSpec`
processing in the child inside `Actor._from_parent()` such that we
handle any invalid msgs **very loudly and pedantically!**

Impl deats,
- do the "random addr generation" in an explicit `for` loop (instead of
  prior comprehension) to allow for more detailed typing of the layered
  calls to the new `._addr` mod.
- use a `match:/case:` to process any invalid `SpawnSpec` payload case
  where we can instead receive a `MsgTypeError` from the `chan.recv()`
  call in `Actor._from_parent()` to raise it immediately instead of
  triggering downstream type-errors XD
  |_ as per the big `#TODO` we prolly want to take from other callers
     of `Channel.recv()` (like in the `._rpc.process_messages()` loop).
  |_ always raise `InternalError` on non-match/fall-through case!
  |_ add a note about not being able to use `breakpoint()` in this
     section due to causality of `SpawnSpec._runtime_vars` not having
     been processed yet..
  |_ always return a third element from `._from_rent()` eventually to be
     the `preferred_transports: list[str]` from the spawning rent.
- use new `._addr.mk_uuid()` and pass to new `Actor.__init__(uuid: str)`
  for all actor creation (including in all the mods tweaked here).
- Move to new type-alias-name `UnwrappedAddress` throughout.
2025-04-06 22:03:07 -04:00
Tyler Goodlet cb6c10bbe9 Adjust imports to use new `UnwrappedAddress`
For those mods where it's just a type-alias (name) import change.
2025-04-06 22:03:07 -04:00
Tyler Goodlet bf9d7ba074 Implement peer-info tracking for UDS streams
Such that any UDS socket pair is represented (and with the recent
updates to) a `UDSAddress` via a similar pair-`tuple[str, int]` as TCP
sockets, a pair of the `.filepath: Path` & the peer proc's `.pid: int`
which we read from the underlying `socket.socket` using
`.set/getsockopt()` calls.

Impl deats,
- using the Linux specific APIs, we add a `get_peer_info()` which reads
  the `(pid, uid, gid)` using the `SOL_SOCKET` and `SO_PEERCRED` opts to
  `sock.getsockopt()`.
  |_ this presumes the client has been correspondingly configured to
     deliver the creds via a `sock.setsockopt(SOL_SOCKET, SO_PASSCRED,
     1)` call - this required us to override `trio.open_unix_socket()`.
- override `trio.open_unix_socket()` as per the above bullet to ensure
  connecting peers always transmit "credentials" options info to the
  listener.
- update `.get_stream_addrs()` to always call `get_peer_info()` and
  extract the peer's pid for the `raddr` and use `os.getpid()` for
  `laddr` (obvi).
  |_ as part of the new impl also `log.info()` the creds-info deats and
    socket-file path.
  |_ handle the oddity where it depends which of `.getpeername()` or
    `.getsockname()` will return the file-path; i think it's to do with
    who is client vs. server?

Related refinements,
- set `.layer_key: int = 4` for the "transport layer" ;)
- tweak some typing and multi-line unpacking in `.ipc/_tcp`.
2025-04-06 22:03:07 -04:00
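A Linux-only sketch of the `get_peer_info()` idea: read the connected peer's creds (`struct ucred`, i.e. three C ints) off the socket via `SO_PEERCRED`. This is a minimal version assuming a connected `AF_UNIX` stream socket; it doesn't cover the `SO_PASSCRED` configuration or the `trio.open_unix_socket()` override mentioned above:

```python
import os
import socket
import struct


def get_peer_info(sock: socket.socket) -> tuple[int, int, int]:
    '''
    Read the connected peer's `(pid, uid, gid)` via the Linux
    `SO_PEERCRED` socket option: a `struct ucred` of 3 C ints.

    '''
    creds: bytes = sock.getsockopt(
        socket.SOL_SOCKET,
        socket.SO_PEERCRED,
        struct.calcsize('3i'),
    )
    return struct.unpack('3i', creds)


# demo on a connected AF_UNIX pair: both ends are this process,
# so the peer's pid/uid must match our own
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
pid, uid, gid = get_peer_info(a)
assert pid == os.getpid()
assert uid == os.getuid()
a.close()
b.close()
```

The pid read this way is what lets a UDS addr carry a `(filepath, pid)` pair analogous to TCP's `(host, port)`.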
Tyler Goodlet 4a8a555bdf Rework/simplify transport addressing
A few things that can fundamentally change,

- UDS addresses now always encapsulate the local and remote pid such
  that it denotes each side's process much like a TCP *port*.
  |_ `.__init__()` takes a new `maybe_pid: int`.
  |_ this required changes to the `.ipc._uds` backend which will come in
     a subsequent commit!
  |_ `UDSAddress.address_type` becomes a `tuple[str, int]` just like the
      TCP case.
  |_ adjust `wrap_address()` to match.
- use a new `_state.get_rt_dir() -> Path` as the default location for
  UDS socket file: now under `XDG_RUNTIME_DIR'/tractor/` subdir by
  default.
- re-implement `UDSAddress.get_random()` to use both the local
  `Actor.uid` (if available) and at least the pid for its socket file
  name.

Removals,
- drop the loop generated `_default_addrs`, simplify to just
  `_default_lo_addrs` for per-transport default registry addresses.
  |_ change to `_address_types: dict[str, Type[Address]]` instead of
     separate types `list`.
  |_ adjust `is_wrapped_addr()` to just check `in _addr_types.values()`.
- comment out `Address.open_stream()` it's unused and i think the wrong
  place for this API.

Renames,
- from `AddressTypes` -> `UnwrappedAddress`, since it's a simple type
  union; all this type-set is, is the simple python data-structures
  we encode to for the wire.
  |_ see note about possibly implementing the `.[un]wrap()` stuff as
     `msgspec` codec `enc/dec_hook()`s instead!

Additions,
- add a `mk_uuid()` to be used throughout the runtime including for
  generating the `Aid.uuid` part.
- tons of notes around follow up refinements!
2025-04-06 22:03:07 -04:00
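The `get_rt_dir()` / `mk_uuid()` additions can be approximated as below; the exact signatures and the `/tmp` fallback are assumptions from the commit text, not the actual `_state` module API:

```python
import os
import uuid
from pathlib import Path


def mk_uuid() -> str:
    # one uuid helper used runtime-wide, e.g. for the `Aid.uuid` part
    return str(uuid.uuid4())


def get_rt_dir(subdir: str = 'tractor') -> Path:
    '''
    Default runtime dir for UDS socket files: an app subdir under
    `$XDG_RUNTIME_DIR`, falling back to `/tmp` when unset.

    '''
    base: str = os.environ.get('XDG_RUNTIME_DIR', '/tmp')
    rt_dir = Path(base) / subdir
    rt_dir.mkdir(parents=True, exist_ok=True)
    return rt_dir


# a pid+uuid qualified socket-file name, in the spirit of
# `UDSAddress.get_random()`: unique per actor-process
sock_path: Path = get_rt_dir() / f'{os.getpid()}-{mk_uuid()}.sock'
assert sock_path.parent.is_dir()
```

Keying the file name on both pid and a fresh uuid avoids collisions across concurrent actor trees sharing the same runtime dir.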
Guillermo Rodriguez 1762b3eb64 Trying to make full suite pass with uds 2025-04-06 22:02:24 -04:00
Guillermo Rodriguez 486f4a3843 Finally switch to using address protocol in all runtime 2025-04-06 22:02:18 -04:00
Guillermo Rodriguez d5e0b08787 Add root and random addr getters on MsgTransport type 2025-04-06 21:59:29 -04:00
Guillermo Rodriguez f80a47571a Starting to make `.ipc.Channel` work with multiple MsgTransports 2025-04-06 21:58:45 -04:00
59 changed files with 8205 additions and 2512 deletions

View File

@ -15,18 +15,19 @@ jobs:
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v2 uses: actions/checkout@v4
- name: Setup python - name: Install the latest version of uv
uses: actions/setup-python@v2 uses: astral-sh/setup-uv@v5
with:
python-version: '3.11'
- name: Install dependencies - name: Setup env
run: pip install -U . --upgrade-strategy eager -r requirements-test.txt run: uv venv .venv --python=3.11
- name: Install
run: uv sync --group=dev
- name: Run MyPy check - name: Run MyPy check
run: mypy tractor/ --ignore-missing-imports --show-traceback run: uv run mypy tractor/ --ignore-missing-imports --show-traceback
# test that we can generate a software distribution and install it # test that we can generate a software distribution and install it
# thus avoid missing file issues after packaging. # thus avoid missing file issues after packaging.
@ -36,18 +37,19 @@ jobs:
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v2 uses: actions/checkout@v4
- name: Setup python - name: Install the latest version of uv
uses: actions/setup-python@v2 uses: astral-sh/setup-uv@v5
with:
python-version: '3.11' - name: Setup env
run: uv venv .venv --python=3.11
- name: Build sdist - name: Build sdist
run: python setup.py sdist --formats=zip run: uv build --sdist
- name: Install sdist from .zips - name: Install sdist from .zips
run: python -m pip install dist/*.zip run: uv run pip install dist/*.tar.gz
testing-linux: testing-linux:
@ -67,23 +69,23 @@ jobs:
] ]
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v2 uses: actions/checkout@v4
- name: Setup python - name: Install the latest version of uv
uses: actions/setup-python@v2 uses: astral-sh/setup-uv@v5
with:
python-version: '${{ matrix.python }}' - name: Setup env
run: uv venv .venv --python=3.11
- name: Install dependencies - name: Install dependencies
run: pip install -U . -r requirements-test.txt -r requirements-docs.txt --upgrade-strategy eager run: uv sync --all-groups
- name: List dependencies - name: List dependencies
run: pip list run: uv pip list
- name: Run tests - name: Run tests
run: pytest tests/ --spawn-backend=${{ matrix.spawn_backend }} -rsx run: uv run pytest tests/ --ignore=tests/devx --spawn-backend=${{ matrix.spawn_backend }} -rsx
# We skip 3.10 on windows for now due to not having any collabs to # We skip 3.10 on windows for now due to not having any collabs to
# debug the CI failures. Anyone wanting to hack and solve them is very # debug the CI failures. Anyone wanting to hack and solve them is very

View File

@ -10,9 +10,5 @@ pkgs.mkShell {
inherit nativeBuildInputs; inherit nativeBuildInputs;
LD_LIBRARY_PATH = pkgs.lib.makeLibraryPath nativeBuildInputs; LD_LIBRARY_PATH = pkgs.lib.makeLibraryPath nativeBuildInputs;
TMPDIR = "/tmp";
shellHook = ''
set -e
uv venv .venv --python=3.12
'';
} }

View File

@ -120,6 +120,7 @@ async def main(
break_parent_ipc_after: int|bool = False, break_parent_ipc_after: int|bool = False,
break_child_ipc_after: int|bool = False, break_child_ipc_after: int|bool = False,
pre_close: bool = False, pre_close: bool = False,
tpt_proto: str = 'tcp',
) -> None: ) -> None:
@ -131,6 +132,7 @@ async def main(
# a hang since it never engages due to broken IPC # a hang since it never engages due to broken IPC
debug_mode=debug_mode, debug_mode=debug_mode,
loglevel=loglevel, loglevel=loglevel,
enable_transports=[tpt_proto],
) as an, ) as an,
): ):
@ -145,7 +147,8 @@ async def main(
_testing.expect_ctxc( _testing.expect_ctxc(
yay=( yay=(
break_parent_ipc_after break_parent_ipc_after
or break_child_ipc_after or
break_child_ipc_after
), ),
# TODO: we CAN'T remove this right? # TODO: we CAN'T remove this right?
# since we need the ctxc to bubble up from either # since we need the ctxc to bubble up from either

View File

@ -9,7 +9,7 @@ async def main(service_name):
async with tractor.open_nursery() as an: async with tractor.open_nursery() as an:
await an.start_actor(service_name) await an.start_actor(service_name)
async with tractor.get_registry('127.0.0.1', 1616) as portal: async with tractor.get_registry() as portal:
print(f"Arbiter is listening on {portal.channel}") print(f"Arbiter is listening on {portal.channel}")
async with tractor.wait_for_actor(service_name) as sockaddr: async with tractor.wait_for_actor(service_name) as sockaddr:

View File

@ -46,6 +46,7 @@ dependencies = [
# typed IPC msging # typed IPC msging
"msgspec>=0.19.0", "msgspec>=0.19.0",
"cffi>=1.17.1", "cffi>=1.17.1",
"bidict>=0.23.1",
] ]
# ------ project ------ # ------ project ------
@ -63,6 +64,10 @@ dev = [
"pyperclip>=1.9.0", "pyperclip>=1.9.0",
"prompt-toolkit>=3.0.50", "prompt-toolkit>=3.0.50",
"xonsh>=0.19.2", "xonsh>=0.19.2",
"numpy>=2.2.4", # used for fast test sample gen
"mypy>=1.15.0",
"psutil>=7.0.0",
"trio-typing>=0.10.0",
] ]
# TODO, add these with sane versions; were originally in # TODO, add these with sane versions; were originally in
# `requirements-docs.txt`.. # `requirements-docs.txt`..

View File

@ -1,6 +1,8 @@
""" """
``tractor`` testing!! Top level of the testing suites!
""" """
from __future__ import annotations
import sys import sys
import subprocess import subprocess
import os import os
@ -30,7 +32,11 @@ else:
_KILL_SIGNAL = signal.SIGKILL _KILL_SIGNAL = signal.SIGKILL
_INT_SIGNAL = signal.SIGINT _INT_SIGNAL = signal.SIGINT
_INT_RETURN_CODE = 1 if sys.version_info < (3, 8) else -signal.SIGINT.value _INT_RETURN_CODE = 1 if sys.version_info < (3, 8) else -signal.SIGINT.value
_PROC_SPAWN_WAIT = 0.6 if sys.version_info < (3, 7) else 0.4 _PROC_SPAWN_WAIT = (
0.6
if sys.version_info < (3, 7)
else 0.4
)
no_windows = pytest.mark.skipif( no_windows = pytest.mark.skipif(
@ -39,7 +45,9 @@ no_windows = pytest.mark.skipif(
) )
def pytest_addoption(parser): def pytest_addoption(
parser: pytest.Parser,
):
parser.addoption( parser.addoption(
"--ll", "--ll",
action="store", action="store",
@ -56,7 +64,8 @@ def pytest_addoption(parser):
) )
parser.addoption( parser.addoption(
"--tpdb", "--debug-mode", "--tpdb",
"--debug-mode",
action="store_true", action="store_true",
dest='tractor_debug_mode', dest='tractor_debug_mode',
# default=False, # default=False,
@ -67,6 +76,17 @@ def pytest_addoption(parser):
), ),
) )
# provide which IPC transport protocols opting-in test suites
# should accumulatively run against.
parser.addoption(
"--tpt-proto",
nargs='+', # accumulate-multiple-args
action="store",
dest='tpt_protos',
default=['tcp'],
help="Transport protocol to use under the `tractor.ipc.Channel`",
)
def pytest_configure(config): def pytest_configure(config):
backend = config.option.spawn_backend backend = config.option.spawn_backend
@ -74,7 +94,7 @@ def pytest_configure(config):
@pytest.fixture(scope='session') @pytest.fixture(scope='session')
def debug_mode(request): def debug_mode(request) -> bool:
debug_mode: bool = request.config.option.tractor_debug_mode debug_mode: bool = request.config.option.tractor_debug_mode
# if debug_mode: # if debug_mode:
# breakpoint() # breakpoint()
@ -95,11 +115,43 @@ def spawn_backend(request) -> str:
return request.config.option.spawn_backend return request.config.option.spawn_backend
# @pytest.fixture(scope='function', autouse=True) @pytest.fixture(scope='session')
# def debug_enabled(request) -> str: def tpt_protos(request) -> list[str]:
# from tractor import _state
# if _state._runtime_vars['_debug_mode']: # allow quoting on CLI
# breakpoint() proto_keys: list[str] = [
proto_key.replace('"', '').replace("'", "")
for proto_key in request.config.option.tpt_protos
]
# ?TODO, eventually support multiple protos per test-sesh?
if len(proto_keys) > 1:
pytest.fail(
'We only support one `--tpt-proto <key>` atm!\n'
)
# XXX ensure we support the protocol by name via lookup!
for proto_key in proto_keys:
addr_type = tractor._addr._address_types[proto_key]
assert addr_type.proto_key == proto_key
yield proto_keys
@pytest.fixture(
scope='session',
autouse=True,
)
def tpt_proto(
tpt_protos: list[str],
) -> str:
proto_key: str = tpt_protos[0]
from tractor import _state
if _state._def_tpt_proto != proto_key:
_state._def_tpt_proto = proto_key
# breakpoint()
yield proto_key
_ci_env: bool = os.environ.get('CI', False) _ci_env: bool = os.environ.get('CI', False)
@ -107,7 +159,7 @@ _ci_env: bool = os.environ.get('CI', False)
@pytest.fixture(scope='session') @pytest.fixture(scope='session')
def ci_env() -> bool: def ci_env() -> bool:
''' '''
Detect CI envoirment. Detect CI environment.
''' '''
return _ci_env return _ci_env
@ -115,30 +167,45 @@ def ci_env() -> bool:
# TODO: also move this to `._testing` for now? # TODO: also move this to `._testing` for now?
# -[ ] possibly generalize and re-use for multi-tree spawning # -[ ] possibly generalize and re-use for multi-tree spawning
# along with the new stuff for multi-addrs in distribute_dis # along with the new stuff for multi-addrs?
# branch?
# #
# choose randomly at import time # choose random port at import time
_reg_addr: tuple[str, int] = ( _rando_port: str = random.randint(1000, 9999)
'127.0.0.1',
random.randint(1000, 9999),
)
@pytest.fixture(scope='session') @pytest.fixture(scope='session')
def reg_addr() -> tuple[str, int]: def reg_addr(
tpt_proto: str,
) -> tuple[str, int|str]:
# globally override the runtime to the per-test-session-dynamic # globally override the runtime to the per-test-session-dynamic
# addr so that all tests never conflict with any other actor # addr so that all tests never conflict with any other actor
# tree using the default. # tree using the default.
from tractor import _root from tractor import (
_root._default_lo_addrs = [_reg_addr] _addr,
)
addr_type = _addr._address_types[tpt_proto]
def_reg_addr: tuple[str, int] = _addr._default_lo_addrs[tpt_proto]
return _reg_addr testrun_reg_addr: tuple[str, int]
match tpt_proto:
case 'tcp':
testrun_reg_addr = (
addr_type.def_bindspace,
_rando_port,
)
# NOTE, file-name uniqueness (no-collisions) will be based on
# the runtime-directory and root (pytest-proc's) pid.
case 'uds':
testrun_reg_addr = addr_type.get_random().unwrap()
assert def_reg_addr != testrun_reg_addr
return testrun_reg_addr
def pytest_generate_tests(metafunc): def pytest_generate_tests(metafunc):
spawn_backend = metafunc.config.option.spawn_backend spawn_backend: str = metafunc.config.option.spawn_backend
if not spawn_backend: if not spawn_backend:
# XXX some weird windows bug with `pytest`? # XXX some weird windows bug with `pytest`?
@ -151,45 +218,53 @@ def pytest_generate_tests(metafunc):
'trio', 'trio',
) )
# NOTE: used to be used to dyanmically parametrize tests for when # NOTE: used-to-be-used-to dyanmically parametrize tests for when
# you just passed --spawn-backend=`mp` on the cli, but now we expect # you just passed --spawn-backend=`mp` on the cli, but now we expect
# that cli input to be manually specified, BUT, maybe we'll do # that cli input to be manually specified, BUT, maybe we'll do
# something like this again in the future? # something like this again in the future?
if 'start_method' in metafunc.fixturenames: if 'start_method' in metafunc.fixturenames:
metafunc.parametrize("start_method", [spawn_backend], scope='module') metafunc.parametrize(
"start_method",
[spawn_backend],
scope='module',
)
# TODO, parametrize any `tpt_proto: str` declaring tests!
# proto_tpts: list[str] = metafunc.config.option.proto_tpts
# if 'tpt_proto' in metafunc.fixturenames:
# metafunc.parametrize(
# 'tpt_proto',
# proto_tpts, # TODO, double check this list usage!
# scope='module',
# )
# TODO: a way to let test scripts (like from `examples/`) def sig_prog(
# guarantee they won't registry addr collide! proc: subprocess.Popen,
# @pytest.fixture sig: int,
# def open_test_runtime( canc_timeout: float = 0.1,
# reg_addr: tuple, ) -> int:
# ) -> AsyncContextManager:
# return partial(
# tractor.open_nursery,
# registry_addrs=[reg_addr],
# )
def sig_prog(proc, sig):
"Kill the actor-process with ``sig``." "Kill the actor-process with ``sig``."
proc.send_signal(sig) proc.send_signal(sig)
time.sleep(0.1) time.sleep(canc_timeout)
if not proc.poll(): if not proc.poll():
# TODO: why sometimes does SIGINT not work on teardown? # TODO: why sometimes does SIGINT not work on teardown?
# seems to happen only when trace logging enabled? # seems to happen only when trace logging enabled?
proc.send_signal(_KILL_SIGNAL) proc.send_signal(_KILL_SIGNAL)
ret = proc.wait() ret: int = proc.wait()
assert ret assert ret
# TODO: factor into @cm and move to `._testing`? # TODO: factor into @cm and move to `._testing`?
@pytest.fixture @pytest.fixture
def daemon( def daemon(
debug_mode: bool,
loglevel: str, loglevel: str,
testdir, testdir,
reg_addr: tuple[str, int], reg_addr: tuple[str, int],
): tpt_proto: str,
) -> subprocess.Popen:
''' '''
Run a daemon root actor as a separate actor-process tree and Run a daemon root actor as a separate actor-process tree and
"remote registrar" for discovery-protocol related tests. "remote registrar" for discovery-protocol related tests.
@ -201,27 +276,99 @@ def daemon(
code: str = ( code: str = (
"import tractor; " "import tractor; "
"tractor.run_daemon([], registry_addrs={reg_addrs}, loglevel={ll})" "tractor.run_daemon([], "
"registry_addrs={reg_addrs}, "
"debug_mode={debug_mode}, "
"loglevel={ll})"
).format( ).format(
reg_addrs=str([reg_addr]), reg_addrs=str([reg_addr]),
ll="'{}'".format(loglevel) if loglevel else None, ll="'{}'".format(loglevel) if loglevel else None,
debug_mode=debug_mode,
) )
cmd: list[str] = [ cmd: list[str] = [
sys.executable, sys.executable,
'-c', code, '-c', code,
] ]
# breakpoint()
kwargs = {} kwargs = {}
if platform.system() == 'Windows': if platform.system() == 'Windows':
# without this, tests hang on windows forever # without this, tests hang on windows forever
kwargs['creationflags'] = subprocess.CREATE_NEW_PROCESS_GROUP kwargs['creationflags'] = subprocess.CREATE_NEW_PROCESS_GROUP
proc = testdir.popen( proc: subprocess.Popen = testdir.popen(
cmd, cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
**kwargs, **kwargs,
) )
assert not proc.returncode
# UDS sockets are **really** fast to bind()/listen()/connect()
# so it's often required that we delay a bit more starting
# the first actor-tree..
if tpt_proto == 'uds':
global _PROC_SPAWN_WAIT
_PROC_SPAWN_WAIT = 0.6
time.sleep(_PROC_SPAWN_WAIT) time.sleep(_PROC_SPAWN_WAIT)
assert not proc.returncode
yield proc yield proc
sig_prog(proc, _INT_SIGNAL) sig_prog(proc, _INT_SIGNAL)
# XXX! yeah.. just be reaaal careful with this bc sometimes it
# can lock up on the `_io.BufferedReader` and hang..
stderr: str = proc.stderr.read().decode()
if stderr:
print(
f'Daemon actor tree produced STDERR:\n'
f'{proc.args}\n'
f'\n'
f'{stderr}\n'
)
if proc.returncode != -2:
raise RuntimeError(
'Daemon actor tree failed !?\n'
f'{proc.args}\n'
)
# @pytest.fixture(autouse=True)
# def shared_last_failed(pytestconfig):
# val = pytestconfig.cache.get("example/value", None)
# breakpoint()
# if val is None:
# pytestconfig.cache.set("example/value", val)
# return val
# TODO: a way to let test scripts (like from `examples/`)
# guarantee they won't `registry_addrs` collide!
# -[ ] maybe use some kinda standard `def main()` arg-spec that
# we can introspect from a fixture that is called from the test
# body?
# -[ ] test and figure out typing for below prototype! Bp
#
# @pytest.fixture
# def set_script_runtime_args(
# reg_addr: tuple,
# ) -> Callable[[...], None]:
# def import_n_partial_in_args_n_triorun(
# script: Path, # under examples?
# **runtime_args,
# ) -> Callable[[], Any]: # a `partial`-ed equiv of `trio.run()`
# # NOTE, below is taken from
# # `.test_advanced_faults.test_ipc_channel_break_during_stream`
# mod: ModuleType = import_path(
# examples_dir() / 'advanced_faults'
# / 'ipc_failure_during_stream.py',
# root=examples_dir(),
# consider_namespace_packages=False,
# )
# return partial(
# trio.run,
# partial(
# mod.main,
# **runtime_args,
# )
# )
# return import_n_partial_in_args_n_triorun

View File

@ -10,6 +10,9 @@ import pytest
from _pytest.pathlib import import_path from _pytest.pathlib import import_path
import trio import trio
import tractor import tractor
from tractor import (
TransportClosed,
)
from tractor._testing import ( from tractor._testing import (
examples_dir, examples_dir,
break_ipc, break_ipc,
@ -74,6 +77,7 @@ def test_ipc_channel_break_during_stream(
spawn_backend: str, spawn_backend: str,
ipc_break: dict|None, ipc_break: dict|None,
pre_aclose_msgstream: bool, pre_aclose_msgstream: bool,
tpt_proto: str,
): ):
''' '''
Ensure we can have an IPC channel break its connection during Ensure we can have an IPC channel break its connection during
@ -91,7 +95,7 @@ def test_ipc_channel_break_during_stream(
# non-`trio` spawners should never hit the hang condition that # non-`trio` spawners should never hit the hang condition that
# requires the user to do ctl-c to cancel the actor tree. # requires the user to do ctl-c to cancel the actor tree.
# expect_final_exc = trio.ClosedResourceError # expect_final_exc = trio.ClosedResourceError
expect_final_exc = tractor.TransportClosed expect_final_exc = TransportClosed
mod: ModuleType = import_path( mod: ModuleType = import_path(
examples_dir() / 'advanced_faults' examples_dir() / 'advanced_faults'
@ -104,6 +108,8 @@ def test_ipc_channel_break_during_stream(
# period" wherein the user eventually hits ctl-c to kill the # period" wherein the user eventually hits ctl-c to kill the
# root-actor tree. # root-actor tree.
expect_final_exc: BaseException = KeyboardInterrupt expect_final_exc: BaseException = KeyboardInterrupt
expect_final_cause: BaseException|None = None
if ( if (
# only expect EoC if trans is broken on the child side, # only expect EoC if trans is broken on the child side,
ipc_break['break_child_ipc_after'] is not False ipc_break['break_child_ipc_after'] is not False
@ -138,6 +144,9 @@ def test_ipc_channel_break_during_stream(
# a user sending ctl-c by raising a KBI. # a user sending ctl-c by raising a KBI.
if pre_aclose_msgstream: if pre_aclose_msgstream:
expect_final_exc = KeyboardInterrupt expect_final_exc = KeyboardInterrupt
if tpt_proto == 'uds':
expect_final_exc = TransportClosed
expect_final_cause = trio.BrokenResourceError
# XXX OLD XXX # XXX OLD XXX
# if child calls `MsgStream.aclose()` then expect EoC. # if child calls `MsgStream.aclose()` then expect EoC.
@ -157,6 +166,10 @@ def test_ipc_channel_break_during_stream(
if pre_aclose_msgstream: if pre_aclose_msgstream:
expect_final_exc = KeyboardInterrupt expect_final_exc = KeyboardInterrupt
if tpt_proto == 'uds':
expect_final_exc = TransportClosed
expect_final_cause = trio.BrokenResourceError
# NOTE when the parent IPC side dies (even if the child does as well # NOTE when the parent IPC side dies (even if the child does as well
# but the child fails BEFORE the parent) we always expect the # but the child fails BEFORE the parent) we always expect the
# IPC layer to raise a closed-resource, NEVER do we expect # IPC layer to raise a closed-resource, NEVER do we expect
@ -169,8 +182,8 @@ def test_ipc_channel_break_during_stream(
and and
ipc_break['break_child_ipc_after'] is False ipc_break['break_child_ipc_after'] is False
): ):
# expect_final_exc = trio.ClosedResourceError
expect_final_exc = tractor.TransportClosed expect_final_exc = tractor.TransportClosed
expect_final_cause = trio.ClosedResourceError
# BOTH but, PARENT breaks FIRST # BOTH but, PARENT breaks FIRST
elif ( elif (
@ -181,8 +194,8 @@ def test_ipc_channel_break_during_stream(
ipc_break['break_parent_ipc_after'] ipc_break['break_parent_ipc_after']
) )
): ):
# expect_final_exc = trio.ClosedResourceError
expect_final_exc = tractor.TransportClosed expect_final_exc = tractor.TransportClosed
expect_final_cause = trio.ClosedResourceError
with pytest.raises( with pytest.raises(
expected_exception=( expected_exception=(
@ -198,6 +211,7 @@ def test_ipc_channel_break_during_stream(
start_method=spawn_backend, start_method=spawn_backend,
loglevel=loglevel, loglevel=loglevel,
pre_close=pre_aclose_msgstream, pre_close=pre_aclose_msgstream,
tpt_proto=tpt_proto,
**ipc_break, **ipc_break,
) )
) )
@ -220,10 +234,15 @@ def test_ipc_channel_break_during_stream(
) )
cause: Exception = tc.__cause__ cause: Exception = tc.__cause__
assert ( assert (
type(cause) is trio.ClosedResourceError # type(cause) is trio.ClosedResourceError
and type(cause) is expect_final_cause
cause.args[0] == 'another task closed this fd'
# TODO, should we expect a certain exc-message (per
# tpt) as well??
# and
# cause.args[0] == 'another task closed this fd'
) )
raise raise
# get raw instance from pytest wrapper # get raw instance from pytest wrapper

View File

@ -7,7 +7,9 @@ import platform
from functools import partial from functools import partial
import itertools import itertools
import psutil
import pytest import pytest
import subprocess
import tractor import tractor
from tractor._testing import tractor_test from tractor._testing import tractor_test
import trio import trio
@ -26,7 +28,7 @@ async def test_reg_then_unreg(reg_addr):
portal = await n.start_actor('actor', enable_modules=[__name__]) portal = await n.start_actor('actor', enable_modules=[__name__])
uid = portal.channel.uid uid = portal.channel.uid
async with tractor.get_registry(*reg_addr) as aportal: async with tractor.get_registry(reg_addr) as aportal:
# this local actor should be the arbiter # this local actor should be the arbiter
assert actor is aportal.actor assert actor is aportal.actor
@ -152,15 +154,25 @@ async def unpack_reg(actor_or_portal):
async def spawn_and_check_registry( async def spawn_and_check_registry(
reg_addr: tuple, reg_addr: tuple,
use_signal: bool, use_signal: bool,
debug_mode: bool = False,
     remote_arbiter: bool = False,
     with_streaming: bool = False,
+    maybe_daemon: tuple[
+        subprocess.Popen,
+        psutil.Process,
+    ]|None = None,

 ) -> None:
+    if maybe_daemon:
+        popen, proc = maybe_daemon
+        # breakpoint()

     async with tractor.open_root_actor(
         registry_addrs=[reg_addr],
+        debug_mode=debug_mode,
     ):
-        async with tractor.get_registry(*reg_addr) as portal:
+        async with tractor.get_registry(reg_addr) as portal:
             # runtime needs to be up to call this
             actor = tractor.current_actor()

@@ -176,11 +188,11 @@ async def spawn_and_check_registry(
             extra = 2  # local root actor + remote arbiter

             # ensure current actor is registered
-            registry = await get_reg()
+            registry: dict = await get_reg()
             assert actor.uid in registry

             try:
-                async with tractor.open_nursery() as n:
+                async with tractor.open_nursery() as an:
                     async with trio.open_nursery(
                         strict_exception_groups=False,
                     ) as trion:

@@ -189,17 +201,17 @@ async def spawn_and_check_registry(
                         for i in range(3):
                             name = f'a{i}'
                             if with_streaming:
-                                portals[name] = await n.start_actor(
+                                portals[name] = await an.start_actor(
                                     name=name, enable_modules=[__name__])

                             else:  # no streaming
-                                portals[name] = await n.run_in_actor(
+                                portals[name] = await an.run_in_actor(
                                     trio.sleep_forever, name=name)

                         # wait on last actor to come up
                         async with tractor.wait_for_actor(name):
                             registry = await get_reg()
-                            for uid in n._children:
+                            for uid in an._children:
                                 assert uid in registry

                         assert len(portals) + extra == len(registry)

@@ -232,6 +244,7 @@ async def spawn_and_check_registry(
 @pytest.mark.parametrize('use_signal', [False, True])
 @pytest.mark.parametrize('with_streaming', [False, True])
 def test_subactors_unregister_on_cancel(
+    debug_mode: bool,
     start_method,
     use_signal,
     reg_addr,

@@ -248,6 +261,7 @@ def test_subactors_unregister_on_cancel(
             spawn_and_check_registry,
             reg_addr,
             use_signal,
+            debug_mode=debug_mode,
             remote_arbiter=False,
             with_streaming=with_streaming,
         ),

@@ -257,7 +271,8 @@ def test_subactors_unregister_on_cancel(
 @pytest.mark.parametrize('use_signal', [False, True])
 @pytest.mark.parametrize('with_streaming', [False, True])
 def test_subactors_unregister_on_cancel_remote_daemon(
-    daemon,
+    daemon: subprocess.Popen,
+    debug_mode: bool,
     start_method,
     use_signal,
     reg_addr,

@@ -273,8 +288,13 @@ def test_subactors_unregister_on_cancel_remote_daemon(
             spawn_and_check_registry,
             reg_addr,
             use_signal,
+            debug_mode=debug_mode,
             remote_arbiter=True,
             with_streaming=with_streaming,
+            maybe_daemon=(
+                daemon,
+                psutil.Process(daemon.pid)
+            ),
         ),
     )

@@ -300,7 +320,7 @@ async def close_chans_before_nursery(
     async with tractor.open_root_actor(
         registry_addrs=[reg_addr],
     ):
-        async with tractor.get_registry(*reg_addr) as aportal:
+        async with tractor.get_registry(reg_addr) as aportal:
             try:
                 get_reg = partial(unpack_reg, aportal)

@@ -373,7 +393,7 @@ def test_close_channel_explicit(
 @pytest.mark.parametrize('use_signal', [False, True])
 def test_close_channel_explicit_remote_arbiter(
-    daemon,
+    daemon: subprocess.Popen,
     start_method,
     use_signal,
     reg_addr,


@@ -66,6 +66,9 @@ def run_example_in_subproc(
     # due to backpressure!!!
     proc = testdir.popen(
         cmdargs,
+        stdin=subprocess.PIPE,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.PIPE,
         **kwargs,
     )
     assert not proc.returncode

@@ -119,10 +122,14 @@ def test_example(
         code = ex.read()

         with run_example_in_subproc(code) as proc:
-            proc.wait()
-            err, _ = proc.stderr.read(), proc.stdout.read()
-            # print(f'STDERR: {err}')
-            # print(f'STDOUT: {out}')
+            err = None
+            try:
+                if not proc.poll():
+                    _, err = proc.communicate(timeout=15)
+            except subprocess.TimeoutExpired as e:
+                proc.kill()
+                err = e.stderr

             # if we get some gnarly output let's aggregate and raise
             if err:
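The reap-with-deadline pattern adopted above (try `communicate(timeout=...)`, kill the child on expiry) can be sketched standalone; the child command below is illustrative, not one of tractor's example scripts:

```python
import subprocess
import sys

# spawn a child, reap it with a deadline, and kill it on expiry --
# the same shape the test uses to avoid hanging on a wedged example.
proc = subprocess.Popen(
    [sys.executable, '-c', 'import sys; sys.stderr.write("boom")'],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
try:
    out, err = proc.communicate(timeout=15)
except subprocess.TimeoutExpired:
    proc.kill()
    out, err = proc.communicate()
```

Note that `TimeoutExpired` carries the partial `stderr` captured so far, which is what the test forwards into its aggregate-and-raise check.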


@@ -0,0 +1,66 @@
import trio
import pytest

from tractor.linux.eventfd import (
    open_eventfd,
    EFDReadCancelled,
    EventFD,
)


def test_read_cancellation():
    '''
    Ensure `EventFD.read()` raises `EFDReadCancelled` if
    `EventFD.close()` is called.

    '''
    fd = open_eventfd()

    async def bg_read(event: EventFD):
        with pytest.raises(EFDReadCancelled):
            await event.read()

    async def main():
        async with trio.open_nursery() as n:
            with (
                EventFD(fd, 'w') as event,
                trio.fail_after(3),
            ):
                n.start_soon(bg_read, event)
                await trio.sleep(0.2)
                event.close()

    trio.run(main)


def test_read_trio_semantics():
    '''
    Ensure `EventFD.read()` raises `trio.ClosedResourceError` and
    `trio.BusyResourceError`.

    '''
    fd = open_eventfd()

    async def bg_read(event: EventFD):
        try:
            await event.read()
        except EFDReadCancelled:
            ...

    async def main():
        async with trio.open_nursery() as n:
            # start background read and attempt
            # foreground read, should be busy
            with EventFD(fd, 'w') as event:
                n.start_soon(bg_read, event)
                await trio.sleep(0.2)
                with pytest.raises(trio.BusyResourceError):
                    await event.read()

            # attempt read after close
            with pytest.raises(trio.ClosedResourceError):
                await event.read()

    trio.run(main)


@@ -5,6 +5,7 @@ Low-level functional audits for our
 B~)

 '''
+from __future__ import annotations
 from contextlib import (
     contextmanager as cm,
     # nullcontext,

@@ -20,7 +21,7 @@ from msgspec import (
     # structs,
     # msgpack,
     Raw,
-    # Struct,
+    Struct,
     ValidationError,
 )
 import pytest

@@ -46,6 +47,11 @@ from tractor.msg import (
     apply_codec,
     current_codec,
 )
+from tractor.msg._codec import (
+    default_builtins,
+    mk_dec_hook,
+    mk_codec_from_spec,
+)
 from tractor.msg.types import (
     log,
     Started,

@@ -743,6 +749,143 @@ def test_ext_types_over_ipc(
     assert exc.boxed_type is TypeError
+
+
+'''
+Test the auto enc & dec hooks.
+
+Create a codec which will work for:
+ - builtins
+ - custom types
+ - lists of custom types
+
+'''
+
+
+class BytesTestClass(Struct, tag=True):
+    raw: bytes
+
+    def encode(self) -> bytes:
+        return self.raw
+
+    @classmethod
+    def from_bytes(cls, raw: bytes) -> BytesTestClass:
+        return cls(raw=raw)
+
+
+class StrTestClass(Struct, tag=True):
+    s: str
+
+    def encode(self) -> str:
+        return self.s
+
+    @classmethod
+    def from_str(cls, s: str) -> StrTestClass:
+        return cls(s=s)
+
+
+class IntTestClass(Struct, tag=True):
+    num: int
+
+    def encode(self) -> int:
+        return self.num
+
+    @classmethod
+    def from_int(cls, num: int) -> IntTestClass:
+        return cls(num=num)
+
+
+builtins = tuple(
+    builtin
+    for builtin in default_builtins
+    if builtin is not list
+)
+
+TestClasses = (BytesTestClass, StrTestClass, IntTestClass)
+
+TestSpec = (
+    *TestClasses,
+    list[Union[*TestClasses]],
+)
+
+test_codec = mk_codec_from_spec(
+    spec=TestSpec,
+)
+
+
+@tractor.context
+async def child_custom_codec(
+    ctx: tractor.Context,
+    msgs: list[Union[*TestSpec]],
+):
+    '''
+    Apply codec and send all msgs passed through stream.
+
+    '''
+    with (
+        apply_codec(test_codec),
+        limit_plds(
+            test_codec.pld_spec,
+            dec_hook=mk_dec_hook(TestSpec),
+            ext_types=TestSpec + builtins,
+        ),
+    ):
+        await ctx.started(None)
+        async with ctx.open_stream() as stream:
+            for msg in msgs:
+                await stream.send(msg)
+
+
+def test_multi_custom_codec():
+    '''
+    Open a subactor, set up its codec and pld_rx, then wait to
+    receive & assert msgs from the stream.
+
+    '''
+    msgs = [
+        None,
+        True, False,
+        0xdeadbeef,
+        .42069,
+        b'deadbeef',
+        BytesTestClass(raw=b'deadbeef'),
+        StrTestClass(s='deadbeef'),
+        IntTestClass(num=0xdeadbeef),
+        [
+            BytesTestClass(raw=b'deadbeef'),
+            StrTestClass(s='deadbeef'),
+            IntTestClass(num=0xdeadbeef),
+        ],
+    ]
+
+    async def main():
+        async with tractor.open_nursery() as an:
+            p: tractor.Portal = await an.start_actor(
+                'child',
+                enable_modules=[__name__],
+            )
+            async with (
+                p.open_context(
+                    child_custom_codec,
+                    msgs=msgs,
+                ) as (ctx, _),
+                ctx.open_stream() as ipc,
+            ):
+                with (
+                    apply_codec(test_codec),
+                    limit_plds(
+                        test_codec.pld_spec,
+                        dec_hook=mk_dec_hook(TestSpec),
+                        ext_types=TestSpec + builtins,
+                    )
+                ):
+                    msg_iter = iter(msgs)
+                    async for recv_msg in ipc:
+                        assert recv_msg == next(msg_iter)
+
+            await p.cancel_actor()
+
+    trio.run(main)
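The hook machinery exercised above can be illustrated without msgspec: on encode a custom type is lowered to a builtin wire form, and on decode a hook keyed by the expected type rebuilds it. The names below are hypothetical stand-ins, not tractor's actual API:

```python
# hypothetical stand-in type showing the enc/dec-hook round-trip idea.
class IntWrap:
    def __init__(self, num: int):
        self.num = num

    def encode(self) -> int:
        return self.num

    @classmethod
    def from_int(cls, num: int) -> 'IntWrap':
        return cls(num)


def enc_hook(obj):
    # lower any known custom type to its builtin wire form
    if isinstance(obj, IntWrap):
        return obj.encode()
    raise TypeError(f'unsupported type: {type(obj)}')


def dec_hook(expected_type, raw):
    # rebuild the custom type from its builtin wire form
    if expected_type is IntWrap:
        return IntWrap.from_int(raw)
    return raw


wire = enc_hook(IntWrap(0xdeadbeef))  # plain int, safe to serialize
back = dec_hook(IntWrap, wire)
```

The `mk_dec_hook(TestSpec)` call above presumably generates exactly this kind of type-dispatched decoder for every member of the spec.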
 # def chk_pld_type(
 #     payload_spec: Type[Struct]|Any,
 #     pld: Any,


@@ -871,7 +871,7 @@ async def serve_subactors(
             )
             await ipc.send((
                 peer.chan.uid,
-                peer.chan.raddr,
+                peer.chan.raddr.unwrap(),
             ))

     print('Spawner exiting spawn serve loop!')


@@ -38,7 +38,7 @@ async def test_self_is_registered_localportal(reg_addr):
     "Verify waiting on the arbiter to register itself using a local portal."
     actor = tractor.current_actor()
     assert actor.is_arbiter
-    async with tractor.get_registry(*reg_addr) as portal:
+    async with tractor.get_registry(reg_addr) as portal:
         assert isinstance(portal, tractor._portal.LocalPortal)

         with trio.fail_after(0.2):


@@ -32,7 +32,7 @@ def test_abort_on_sigint(daemon):
 @tractor_test
 async def test_cancel_remote_arbiter(daemon, reg_addr):
     assert not tractor.current_actor().is_arbiter
-    async with tractor.get_registry(*reg_addr) as portal:
+    async with tractor.get_registry(reg_addr) as portal:
         await portal.cancel_actor()

     time.sleep(0.1)

@@ -41,7 +41,7 @@ async def test_cancel_remote_arbiter(daemon, reg_addr):
     # no arbiter socket should exist
     with pytest.raises(OSError):
-        async with tractor.get_registry(*reg_addr) as portal:
+        async with tractor.get_registry(reg_addr) as portal:
             pass


@@ -100,16 +100,29 @@ async def streamer(

 @acm
 async def open_stream() -> Awaitable[tractor.MsgStream]:
-    async with tractor.open_nursery() as tn:
-        portal = await tn.start_actor('streamer', enable_modules=[__name__])
-        async with (
-            portal.open_context(streamer) as (ctx, first),
-            ctx.open_stream() as stream,
-        ):
-            yield stream
-
-        await portal.cancel_actor()
-        print('CANCELLED STREAMER')
+    try:
+        async with tractor.open_nursery() as an:
+            portal = await an.start_actor(
+                'streamer',
+                enable_modules=[__name__],
+            )
+            async with (
+                portal.open_context(streamer) as (ctx, first),
+                ctx.open_stream() as stream,
+            ):
+                yield stream
+
+            print('Cancelling streamer')
+            await portal.cancel_actor()
+            print('Cancelled streamer')
+
+    except Exception as err:
+        print(
+            f'`open_stream()` errored?\n'
+            f'{err!r}\n'
+        )
+        await tractor.pause(shield=True)
+        raise err


 @acm

@@ -132,19 +145,28 @@ async def maybe_open_stream(taskname: str):
         yield stream


-def test_open_local_sub_to_stream():
+def test_open_local_sub_to_stream(
+    debug_mode: bool,
+):
     '''
     Verify a single inter-actor stream can be fanned-out and shared to
     N local tasks using `trionics.maybe_open_context()`.

     '''
-    timeout: float = 3.6 if platform.system() != "Windows" else 10
+    timeout: float = 3.6
+    if platform.system() == "Windows":
+        timeout: float = 10
+
+    if debug_mode:
+        timeout = 999

     async def main():

         full = list(range(1000))

         async def get_sub_and_pull(taskname: str):

+            stream: tractor.MsgStream
             async with (
                 maybe_open_stream(taskname) as stream,
             ):

@@ -165,17 +187,27 @@ def test_open_local_sub_to_stream():
             assert set(seq).issubset(set(full))
             print(f'{taskname} finished')

-        with trio.fail_after(timeout):
+        with trio.fail_after(timeout) as cs:
             # TODO: turns out this isn't multi-task entrant XD
             # We probably need an idempotent entry semantic?
-            async with tractor.open_root_actor():
+            async with tractor.open_root_actor(
+                debug_mode=debug_mode,
+            ):
                 async with (
-                    trio.open_nursery() as nurse,
+                    trio.open_nursery() as tn,
                 ):
                     for i in range(10):
-                        nurse.start_soon(get_sub_and_pull, f'task_{i}')
+                        tn.start_soon(
+                            get_sub_and_pull,
+                            f'task_{i}',
+                        )
                         await trio.sleep(0.001)

                 print('all consumer tasks finished')

+        if cs.cancelled_caught:
+            pytest.fail(
+                'Should NOT time out in `open_root_actor()` ?'
+            )
+
     trio.run(main)


@ -0,0 +1,185 @@
from typing import AsyncContextManager
from contextlib import asynccontextmanager as acm
import trio
import pytest
import tractor
from tractor.trionics import gather_contexts
from tractor.ipc._ringbuf import open_ringbufs
from tractor.ipc._ringbuf._pubsub import (
open_ringbuf_publisher,
open_ringbuf_subscriber,
get_publisher,
get_subscriber,
open_pub_channel_at,
open_sub_channel_at
)
log = tractor.log.get_console_log(level='info')
@tractor.context
async def publish_range(
ctx: tractor.Context,
size: int
):
pub = get_publisher()
await ctx.started()
for i in range(size):
await pub.send(i.to_bytes(4))
log.info(f'sent {i}')
await pub.flush()
log.info('range done')
@tractor.context
async def subscribe_range(
ctx: tractor.Context,
size: int
):
sub = get_subscriber()
await ctx.started()
for i in range(size):
recv = int.from_bytes(await sub.receive())
if recv != i:
raise AssertionError(
f'received: {recv} expected: {i}'
)
log.info(f'received: {recv}')
log.info('range done')
@tractor.context
async def subscriber_child(ctx: tractor.Context):
try:
async with open_ringbuf_subscriber(guarantee_order=True):
await ctx.started()
await trio.sleep_forever()
finally:
log.info('subscriber exit')
@tractor.context
async def publisher_child(
ctx: tractor.Context,
batch_size: int
):
try:
async with open_ringbuf_publisher(
guarantee_order=True,
batch_size=batch_size
):
await ctx.started()
await trio.sleep_forever()
finally:
log.info('publisher exit')
@acm
async def open_pubsub_test_actors(
ring_names: list[str],
size: int,
batch_size: int
) -> AsyncContextManager[tuple[tractor.Portal, tractor.Portal]]:
with trio.fail_after(5):
async with tractor.open_nursery(
enable_modules=[
'tractor.linux._fdshare'
]
) as an:
modules = [
__name__,
'tractor.linux._fdshare',
'tractor.ipc._ringbuf._pubsub'
]
sub_portal = await an.start_actor(
'sub',
enable_modules=modules
)
pub_portal = await an.start_actor(
'pub',
enable_modules=modules
)
async with (
sub_portal.open_context(subscriber_child) as (long_rctx, _),
pub_portal.open_context(
publisher_child,
batch_size=batch_size
) as (long_sctx, _),
open_ringbufs(ring_names) as tokens,
gather_contexts([
open_sub_channel_at('sub', ring)
for ring in tokens
]),
gather_contexts([
open_pub_channel_at('pub', ring)
for ring in tokens
]),
sub_portal.open_context(subscribe_range, size=size) as (rctx, _),
pub_portal.open_context(publish_range, size=size) as (sctx, _)
):
yield
await rctx.wait_for_result()
await sctx.wait_for_result()
await long_sctx.cancel()
await long_rctx.cancel()
await an.cancel()
@pytest.mark.parametrize(
('ring_names', 'size', 'batch_size'),
[
(
['ring-first'],
100,
1
),
(
['ring-first'],
69,
1
),
(
[f'multi-ring-{i}' for i in range(3)],
1000,
100
),
],
ids=[
'simple',
'redo-simple',
'multi-ring',
]
)
def test_pubsub(
request,
ring_names: list[str],
size: int,
batch_size: int
):
async def main():
async with open_pubsub_test_actors(
ring_names, size, batch_size
):
...
trio.run(main)
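The `gather_contexts([...])` calls above enter a dynamic list of async context managers at once (one per ring). With only the stdlib, the same shape can be sketched via `contextlib.AsyncExitStack`; the channel acm below is an illustrative stand-in, not tractor's API:

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

@asynccontextmanager
async def open_channel(name: str):
    # hypothetical per-ring channel acm
    yield f'chan-{name}'

async def main():
    # enter a runtime-sized list of acms; all are exited (in reverse
    # order) when the stack unwinds, like gather_contexts().
    async with AsyncExitStack() as stack:
        chans = [
            await stack.enter_async_context(open_channel(f'ring-{i}'))
            for i in range(3)
        ]
        return chans

chans = asyncio.run(main())
```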


@@ -1,35 +1,51 @@
 import time
+import hashlib

 import trio
 import pytest
 import tractor
-from tractor.ipc import (
+from tractor.ipc._ringbuf import (
     open_ringbuf,
+    open_ringbuf_pair,
+    attach_to_ringbuf_receiver,
+    attach_to_ringbuf_sender,
+    attach_to_ringbuf_channel,
     RBToken,
-    RingBuffSender,
-    RingBuffReceiver
 )
-from tractor._testing.samples import generate_sample_messages
+from tractor._testing.samples import (
+    generate_single_byte_msgs,
+    RandomBytesGenerator,
+)


 @tractor.context
 async def child_read_shm(
     ctx: tractor.Context,
-    msg_amount: int,
     token: RBToken,
-    total_bytes: int,
-) -> None:
-    recvd_bytes = 0
-    await ctx.started()
-    start_ts = time.time()
-    async with RingBuffReceiver(token) as receiver:
-        while recvd_bytes < total_bytes:
-            msg = await receiver.receive_some()
-            recvd_bytes += len(msg)
-
-            # make sure we dont hold any memoryviews
-            # before the ctx manager aclose()
-            msg = None
+) -> str:
+    '''
+    Sub-actor used in `test_ringbuf`.
+
+    Attach to a ringbuf and receive all messages until end of stream.
+    Keep track of how many bytes received and also calculate
+    sha256 of the whole byte stream.
+
+    Calculate and print performance stats, finally return calculated
+    hash.
+
+    '''
+    await ctx.started()
+    print('reader started')
+    msg_amount = 0
+    recvd_bytes = 0
+    recvd_hash = hashlib.sha256()
+    start_ts = time.time()
+    async with attach_to_ringbuf_receiver(token) as receiver:
+        async for msg in receiver:
+            msg_amount += 1
+            recvd_hash.update(msg)
+            recvd_bytes += len(msg)

     end_ts = time.time()
     elapsed = end_ts - start_ts

@@ -38,6 +54,10 @@ async def child_read_shm(
     print(f'\n\telapsed ms: {elapsed_ms}')
     print(f'\tmsg/sec: {int(msg_amount / elapsed):,}')
     print(f'\tbytes/sec: {int(recvd_bytes / elapsed):,}')
+    print(f'\treceived msgs: {msg_amount:,}')
+    print(f'\treceived bytes: {recvd_bytes:,}')
+
+    return recvd_hash.hexdigest()

 @tractor.context

@@ -46,17 +66,37 @@ async def child_write_shm(
     msg_amount: int,
     rand_min: int,
     rand_max: int,
-    token: RBToken,
+    buf_size: int
 ) -> None:
-    msgs, total_bytes = generate_sample_messages(
+    '''
+    Sub-actor used in `test_ringbuf`.
+
+    Generate `msg_amount` payloads with
+    `random.randint(rand_min, rand_max)` random bytes at the end,
+    calculate the sha256 hash and send it to the parent on `ctx.started`.
+
+    Attach to ringbuf and send all generated messages.
+
+    '''
+    rng = RandomBytesGenerator(
         msg_amount,
         rand_min=rand_min,
         rand_max=rand_max,
     )
-    await ctx.started(total_bytes)
-    async with RingBuffSender(token) as sender:
-        for msg in msgs:
-            await sender.send_all(msg)
+    async with (
+        open_ringbuf('test_ringbuf', buf_size=buf_size) as token,
+        attach_to_ringbuf_sender(token) as sender,
+    ):
+        await ctx.started(token)
+        print('writer started')
+        for msg in rng:
+            await sender.send(msg)
+            if rng.msgs_generated % rng.recommended_log_interval == 0:
+                print(f'wrote {rng.msgs_generated} msgs')
+
+    print('writer exit')
+    return rng.hexdigest


 @pytest.mark.parametrize(
@@ -83,83 +123,90 @@ def test_ringbuf(
     rand_max: int,
     buf_size: int
 ):
-    async def main():
-        with open_ringbuf(
-            'test_ringbuf',
-            buf_size=buf_size
-        ) as token:
-            proc_kwargs = {
-                'pass_fds': (token.write_eventfd, token.wrap_eventfd)
-            }
-            common_kwargs = {
-                'msg_amount': msg_amount,
-                'token': token,
-            }
+    '''
+    - Open a new ring buf on root actor
+    - Open `child_write_shm` ctx in sub-actor which will generate a
+      random payload and send its hash on `ctx.started`, finally sending
+      the payload through the stream.
+    - Open `child_read_shm` ctx in sub-actor which will receive the
+      payload, calculate perf stats and return the hash.
+    - Compare both hashes

+    '''
+    async def main():
         async with tractor.open_nursery() as an:
             send_p = await an.start_actor(
                 'ring_sender',
-                enable_modules=[__name__],
-                proc_kwargs=proc_kwargs
+                enable_modules=[
+                    __name__,
+                    'tractor.linux._fdshare',
+                ],
             )
             recv_p = await an.start_actor(
                 'ring_receiver',
-                enable_modules=[__name__],
-                proc_kwargs=proc_kwargs
+                enable_modules=[
+                    __name__,
+                    'tractor.linux._fdshare',
+                ],
             )
             async with (
                 send_p.open_context(
                     child_write_shm,
+                    msg_amount=msg_amount,
                     rand_min=rand_min,
                     rand_max=rand_max,
-                    **common_kwargs
-                ) as (sctx, total_bytes),
+                    buf_size=buf_size,
+                ) as (sctx, token),

                 recv_p.open_context(
                     child_read_shm,
-                    **common_kwargs,
-                    total_bytes=total_bytes,
-                ) as (sctx, _sent),
+                    token=token,
+                ) as (rctx, _),
             ):
-                await recv_p.result()
-
-                await send_p.cancel_actor()
-                await recv_p.cancel_actor()
+                sent_hash = await sctx.result()
+                recvd_hash = await rctx.result()
+
+                assert sent_hash == recvd_hash
+
+            await an.cancel()

     trio.run(main)


 @tractor.context
-async def child_blocked_receiver(
-    ctx: tractor.Context,
-    token: RBToken
-):
-    async with RingBuffReceiver(token) as receiver:
-        await ctx.started()
+async def child_blocked_receiver(ctx: tractor.Context):
+    async with (
+        open_ringbuf('test_ring_cancel_reader') as token,
+        attach_to_ringbuf_receiver(token) as receiver,
+    ):
+        await ctx.started(token)
         await receiver.receive_some()


-def test_ring_reader_cancel():
+def test_reader_cancel():
+    '''
+    Test that a receiver blocked on eventfd(2) read responds to
+    cancellation.
+
+    '''
     async def main():
-        with open_ringbuf('test_ring_cancel_reader') as token:
-            async with (
-                tractor.open_nursery() as an,
-                RingBuffSender(token) as _sender,
-            ):
-                recv_p = await an.start_actor(
-                    'ring_blocked_receiver',
-                    enable_modules=[__name__],
-                    proc_kwargs={
-                        'pass_fds': (token.write_eventfd, token.wrap_eventfd)
-                    }
-                )
-                async with (
-                    recv_p.open_context(
-                        child_blocked_receiver,
-                        token=token
-                    ) as (sctx, _sent),
-                ):
-                    await trio.sleep(1)
-                    await an.cancel()
+        async with tractor.open_nursery() as an:
+            recv_p = await an.start_actor(
+                'ring_blocked_receiver',
+                enable_modules=[
+                    __name__,
+                    'tractor.linux._fdshare',
+                ],
+            )
+            async with (
+                recv_p.open_context(
+                    child_blocked_receiver,
+                ) as (sctx, token),
+                attach_to_ringbuf_sender(token),
+            ):
+                await trio.sleep(.1)
+                await an.cancel()
@@ -168,38 +215,166 @@ def test_ring_reader_cancel():

 @tractor.context
-async def child_blocked_sender(
-    ctx: tractor.Context,
-    token: RBToken
-):
-    async with RingBuffSender(token) as sender:
-        await ctx.started()
+async def child_blocked_sender(ctx: tractor.Context):
+    async with (
+        open_ringbuf(
+            'test_ring_cancel_sender',
+            buf_size=1
+        ) as token,
+        attach_to_ringbuf_sender(token) as sender,
+    ):
+        await ctx.started(token)
         await sender.send_all(b'this will wrap')


-def test_ring_sender_cancel():
+def test_sender_cancel():
+    '''
+    Test that a sender blocked on eventfd(2) read responds to
+    cancellation.
+
+    '''
     async def main():
-        with open_ringbuf(
-            'test_ring_cancel_sender',
-            buf_size=1
-        ) as token:
-            async with tractor.open_nursery() as an:
-                recv_p = await an.start_actor(
-                    'ring_blocked_sender',
-                    enable_modules=[__name__],
-                    proc_kwargs={
-                        'pass_fds': (token.write_eventfd, token.wrap_eventfd)
-                    }
-                )
-                async with (
-                    recv_p.open_context(
-                        child_blocked_sender,
-                        token=token
-                    ) as (sctx, _sent),
-                ):
-                    await trio.sleep(1)
-                    await an.cancel()
+        async with tractor.open_nursery() as an:
+            recv_p = await an.start_actor(
+                'ring_blocked_sender',
+                enable_modules=[
+                    __name__,
+                    'tractor.linux._fdshare',
+                ],
+            )
+            async with (
+                recv_p.open_context(
+                    child_blocked_sender,
+                ) as (sctx, token),
+                attach_to_ringbuf_receiver(token),
+            ):
+                await trio.sleep(.1)
+                await an.cancel()

     with pytest.raises(tractor._exceptions.ContextCancelled):
         trio.run(main)
+
+
+def test_receiver_max_bytes():
+    '''
+    Test that `RingBuffReceiver.receive_some()`'s optional `max_bytes`
+    argument works correctly: send a msg of size 100, then force
+    receive of messages with `max_bytes == 1`, wait until 100 of these
+    messages are received, then compare the join of the parts with the
+    original message.
+
+    '''
+    msg = generate_single_byte_msgs(100)
+    msgs = []
+
+    rb_common = {
+        'cleanup': False,
+        'is_ipc': False,
+    }
+
+    async def main():
+        async with (
+            open_ringbuf(
+                'test_ringbuf_max_bytes',
+                buf_size=10,
+                is_ipc=False,
+            ) as token,
+
+            trio.open_nursery() as n,
+
+            attach_to_ringbuf_sender(token, **rb_common) as sender,
+            attach_to_ringbuf_receiver(token, **rb_common) as receiver,
+        ):
+            async def _send_and_close():
+                await sender.send_all(msg)
+                await sender.aclose()
+
+            n.start_soon(_send_and_close)
+            while len(msgs) < len(msg):
+                msg_part = await receiver.receive_some(max_bytes=1)
+                assert len(msg_part) == 1
+                msgs.append(msg_part)
+
+    trio.run(main)
+    assert msg == b''.join(msgs)
+
+
+@tractor.context
+async def child_channel_sender(
+    ctx: tractor.Context,
+    msg_amount_min: int,
+    msg_amount_max: int,
+    token_in: RBToken,
+    token_out: RBToken,
+):
+    import random
+    rng = RandomBytesGenerator(
+        random.randint(msg_amount_min, msg_amount_max),
+        rand_min=256,
+        rand_max=1024,
+    )
+    async with attach_to_ringbuf_channel(
+        token_in,
+        token_out,
+    ) as chan:
+        await ctx.started()
+        for msg in rng:
+            await chan.send(msg)
+
+        await chan.send(b'bye')
+        await chan.receive()
+        return rng.hexdigest
+
+
+def test_channel():
+    msg_amount_min = 100
+    msg_amount_max = 1000
+
+    mods = [
+        __name__,
+        'tractor.linux._fdshare',
+    ]
+
+    async def main():
+        async with (
+            tractor.open_nursery(enable_modules=mods) as an,
+
+            open_ringbuf_pair(
+                'test_ringbuf_transport'
+            ) as (send_token, recv_token),
+
+            attach_to_ringbuf_channel(send_token, recv_token) as chan,
+        ):
+            sender = await an.start_actor(
+                'test_ringbuf_transport_sender',
+                enable_modules=mods,
+            )
+            async with (
+                sender.open_context(
+                    child_channel_sender,
+                    msg_amount_min=msg_amount_min,
+                    msg_amount_max=msg_amount_max,
+                    token_in=recv_token,
+                    token_out=send_token,
+                ) as (ctx, _),
+            ):
+                recvd_hash = hashlib.sha256()
+                async for msg in chan:
+                    if msg == b'bye':
+                        await chan.send(b'bye')
+                        break
+
+                    recvd_hash.update(msg)
+
+                sent_hash = await ctx.result()
+
+                assert recvd_hash.hexdigest() == sent_hash
+
+            await an.cancel()
+
+    trio.run(main)
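Both sides of these transfer tests verify integrity with an incremental sha256; hashing chunk-by-chunk (as `child_read_shm` and `test_channel` do per received msg) is equivalent to hashing the joined stream once:

```python
import hashlib

chunks = [b'dead', b'beef', b'cafe']

# update() the digest per received msg, as the receiver side does...
h = hashlib.sha256()
for chunk in chunks:
    h.update(chunk)

# ...which matches hashing the whole payload at once on the sender side
whole = hashlib.sha256(b''.join(chunks))
```

This is why the tests can compare `sent_hash == recvd_hash` even though the ringbuf may re-chunk the byte stream arbitrarily in flight.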


@@ -0,0 +1,85 @@
'''
Runtime boot/init sanity.

'''
import pytest
import trio

import tractor
from tractor._exceptions import RuntimeFailure


@tractor.context
async def open_new_root_in_sub(
    ctx: tractor.Context,
) -> None:
    async with tractor.open_root_actor():
        pass


@pytest.mark.parametrize(
    'open_root_in',
    ['root', 'sub'],
    ids='open_2nd_root_in={}'.format,
)
def test_only_one_root_actor(
    open_root_in: str,
    reg_addr: tuple,
    debug_mode: bool,
):
    '''
    Verify we specially fail whenever more than one root actor
    is attempted to be opened within an already opened tree.

    '''
    async def main():
        async with tractor.open_nursery() as an:

            if open_root_in == 'root':
                async with tractor.open_root_actor(
                    registry_addrs=[reg_addr],
                ):
                    pass

            ptl: tractor.Portal = await an.start_actor(
                name='bad_rooty_boi',
                enable_modules=[__name__],
            )

            async with ptl.open_context(
                open_new_root_in_sub,
            ) as (ctx, first):
                pass

    if open_root_in == 'root':
        with pytest.raises(
            RuntimeFailure,
        ) as excinfo:
            trio.run(main)

    else:
        with pytest.raises(
            tractor.RemoteActorError,
        ) as excinfo:
            trio.run(main)

        assert excinfo.value.boxed_type is RuntimeFailure


def test_implicit_root_via_first_nursery(
    reg_addr: tuple,
    debug_mode: bool,
):
    '''
    The first `ActorNursery` open should implicitly call
    `_root.open_root_actor()`.

    '''
    async def main():
        async with tractor.open_nursery() as an:
            assert an._implicit_runtime_started
            assert tractor.current_actor().aid.name == 'root'

    trio.run(main)


@@ -2,6 +2,7 @@
 Spawning basics

 """
+from functools import partial
 from typing import (
     Any,
 )

@@ -12,74 +13,99 @@ import tractor
 from tractor._testing import tractor_test

-data_to_pass_down = {'doggy': 10, 'kitty': 4}
+data_to_pass_down = {
+    'doggy': 10,
+    'kitty': 4,
+}


 async def spawn(
-    is_arbiter: bool,
+    should_be_root: bool,
     data: dict,
     reg_addr: tuple[str, int],
+    debug_mode: bool = False,
 ):
-    namespaces = [__name__]
-
     await trio.sleep(0.1)
+    actor = tractor.current_actor(err_on_no_runtime=False)

-    async with tractor.open_root_actor(
-        arbiter_addr=reg_addr,
-    ):
-        actor = tractor.current_actor()
-        assert actor.is_arbiter == is_arbiter
-        data = data_to_pass_down
-
-        if actor.is_arbiter:
-            async with tractor.open_nursery() as nursery:
+    if should_be_root:
+        assert actor is None  # no runtime yet
+        async with (
+            tractor.open_root_actor(
+                arbiter_addr=reg_addr,
+            ),
+            tractor.open_nursery() as an,
+        ):
+            # now runtime exists
+            actor: tractor.Actor = tractor.current_actor()
+            assert actor.is_arbiter == should_be_root

-                # forks here
-                portal = await nursery.run_in_actor(
-                    spawn,
-                    is_arbiter=False,
-                    name='sub-actor',
-                    data=data,
-                    reg_addr=reg_addr,
-                    enable_modules=namespaces,
-                )
+            # spawns subproc here
+            portal: tractor.Portal = await an.run_in_actor(
+                fn=spawn,
+
+                # spawning args
+                name='sub-actor',
+                enable_modules=[__name__],
+
+                # passed to a subactor-recursive RPC invoke
+                # of this same `spawn()` fn.
+                should_be_root=False,
+                data=data_to_pass_down,
+                reg_addr=reg_addr,
+            )

-                assert len(nursery._children) == 1
-                assert portal.channel.uid in tractor.current_actor()._peers
+            assert len(an._children) == 1
+            assert (
+                portal.channel.uid
+                in
+                tractor.current_actor().ipc_server._peers
+            )

-            # be sure we can still get the result
+            # get result from child subactor
             result = await portal.result()
             assert result == 10
             return result
     else:
+        assert actor.is_arbiter == should_be_root
         return 10


-def test_local_arbiter_subactor_global_state(
-    reg_addr,
+def test_run_in_actor_same_func_in_child(
+    reg_addr: tuple,
+    debug_mode: bool,
 ):
     result = trio.run(
+        partial(
             spawn,
-        True,
-        data_to_pass_down,
-        reg_addr,
+            should_be_root=True,
+            data=data_to_pass_down,
+            reg_addr=reg_addr,
+            debug_mode=debug_mode,
+        )
     )
     assert result == 10
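`trio.run()` only forwards positional args to its target, so the renamed test above binds keyword args up front with `functools.partial`. The same binding works on any callable; the signature below is a stand-in mirroring the test's `spawn()` shape:

```python
from functools import partial

def spawn_sig(should_be_root, data, reg_addr, debug_mode=False):
    # stand-in with the same argument shape as the test's `spawn()`
    return (should_be_root, reg_addr, debug_mode)

# bind every arg as a kwarg, yielding a zero-arg callable --
# exactly what trio.run() needs as its entry point.
bound = partial(
    spawn_sig,
    should_be_root=True,
    data={'doggy': 10, 'kitty': 4},
    reg_addr=('127.0.0.1', 1616),
    debug_mode=True,
)
result = bound()
```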
 async def movie_theatre_question():
-    """A question asked in a dark theatre, in a tangent
+    '''
+    A question asked in a dark theatre, in a tangent
     (errr, I mean different) process.
-    """
+
+    '''
     return 'have you ever seen a portal?'


 @tractor_test
 async def test_movie_theatre_convo(start_method):
-    """The main ``tractor`` routine.
-    """
-    async with tractor.open_nursery() as n:
-        portal = await n.start_actor(
+    '''
+    The main ``tractor`` routine.
+
+    '''
+    async with tractor.open_nursery(debug_mode=True) as an:
+        portal = await an.start_actor(
             'frank',
             # enable the actor to run funcs from this current module
             enable_modules=[__name__],

@@ -118,8 +144,8 @@ async def test_most_beautiful_word(
     with trio.fail_after(1):
         async with tractor.open_nursery(
             debug_mode=debug_mode,
-        ) as n:
-            portal = await n.run_in_actor(
+        ) as an:
+            portal = await an.run_in_actor(
                 cellar_door,
                 return_value=return_value,
                 name='some_linguist',


@@ -180,6 +180,7 @@ def test_acm_embedded_nursery_propagates_enter_err(
     with tractor.devx.maybe_open_crash_handler(
         pdb=debug_mode,
     ) as bxerr:
+        if bxerr:
             assert not bxerr.value

         async with (

tractor/_addr.py (new file, mode 100644)

@ -0,0 +1,282 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
from __future__ import annotations
from uuid import uuid4
from typing import (
Protocol,
ClassVar,
Type,
TYPE_CHECKING,
)
from bidict import bidict
from trio import (
SocketListener,
)
from .log import get_logger
from ._state import (
_def_tpt_proto,
)
from .ipc._tcp import TCPAddress
from .ipc._uds import UDSAddress
if TYPE_CHECKING:
from ._runtime import Actor
log = get_logger(__name__)
# TODO, maybe breakout the netns key to a struct?
# class NetNs(Struct)[str, int]:
# ...
# TODO, can't we just use a type alias
# for this? namely just some `tuple[str, int, str, str]`?
#
# -[ ] would also just be simpler to keep this as SockAddr[tuple]
# or something, implying it's just a simple pair of values which can
# presumably be mapped to all transports?
# -[ ] `pydoc socket.socket.getsockname()` delivers a 4-tuple for
# ipv6 `(hostaddr, port, flowinfo, scope_id)`.. so how should we
# handle that?
# -[ ] as a further alternative to this wrap()/unwrap() approach we
# could just implement `enc/dec_hook()`s for the `Address`-types
# and just deal with our internal objs directly and always and
# leave it to the codec layer to figure out marshalling?
# |_ would mean only one spot to do the `.unwrap()` (which we may
# end up needing to call from the hook()s anyway?)
# -[x] rename to `UnwrappedAddress[Descriptor]` ??
# seems like the right name as per,
# https://www.geeksforgeeks.org/introduction-to-address-descriptor/
#
UnwrappedAddress = (
    # tcp/udp/uds
    tuple[
        str,  # host/domain(tcp), filesys-dir(uds)
        int|str,  # port/path(uds)
    ]
    # ?TODO? should we also include another 2 fields from
    # our `Aid` msg such that we include the runtime `Actor.uid`
    # of `.name` and `.uuid`?
    # - would ensure uniqueness across entire net?
    # - allows for easier runtime-level filtering of "actors by
    #   service name"
)
# TODO, maybe rename to `SocketAddress`?
class Address(Protocol):
    proto_key: ClassVar[str]
    unwrapped_type: ClassVar[UnwrappedAddress]

    # TODO, i feel like an `.is_bound()` is a better thing to
    # support?
    # Like, what use does this have besides a noop and if it's not
    # valid why aren't we erroring on creation/use?
    @property
    def is_valid(self) -> bool:
        ...

    # TODO, maybe `.netns` is a better name?
    @property
    def namespace(self) -> tuple[str, int]|None:
        '''
        The if-available, OS-specific "network namespace" key.

        '''
        ...

    @property
    def bindspace(self) -> str:
        '''
        Deliver the socket address' "bindable space" from
        a `socket.socket.bind()` and thus from the perspective of
        a specific transport protocol domain.

        I.e. for most (layer-4) network-socket protocols this is
        normally the ipv4/6 address, for UDS this is normally
        a filesystem (sub-directory).

        For (distributed) network protocols this is normally the
        routing layer's domain/(ip-)address, though it might also
        include a "network namespace" key different than the default.

        For local-host-only transports this is either an explicit
        namespace (with types defined by the OS: netns, Cgroup, IPC,
        pid, etc. on linux) or, failing that, the sub-directory in the
        filesys under which the socket/shm files are located.

        '''
        ...

    @classmethod
    def from_addr(cls, addr: UnwrappedAddress) -> Address:
        ...

    def unwrap(self) -> UnwrappedAddress:
        '''
        Deliver the underlying minimum field set in
        a primitive python data type-structure.

        '''
        ...

    @classmethod
    def get_random(
        cls,
        current_actor: Actor,
        bindspace: str|None = None,
    ) -> Address:
        ...

    # TODO, this should be something like a `.get_def_registar_addr()`
    # or similar since,
    # - it should be a **host singleton** (not root/tree singleton)
    # - we **only need this value** when one isn't provided to the
    #   runtime at boot and we want to implicitly provide a host-wide
    #   registrar.
    # - each rooted-actor-tree should likely have its own
    #   micro-registry (likely the root being it), also see
    @classmethod
    def get_root(cls) -> Address:
        ...

    def __repr__(self) -> str:
        ...

    def __eq__(self, other) -> bool:
        ...

    async def open_listener(
        self,
        **kwargs,
    ) -> SocketListener:
        ...

    async def close_listener(self):
        ...
_address_types: bidict[str, Type[Address]] = {
    'tcp': TCPAddress,
    'uds': UDSAddress
}

# TODO! really these are discovery sys default addrs ONLY useful for
# when none is provided to a root actor on first boot.
_default_lo_addrs: dict[
    str,
    UnwrappedAddress
] = {
    'tcp': TCPAddress.get_root().unwrap(),
    'uds': UDSAddress.get_root().unwrap(),
}
def get_address_cls(name: str) -> Type[Address]:
    return _address_types[name]


def is_wrapped_addr(addr: any) -> bool:
    return type(addr) in _address_types.values()


def mk_uuid() -> str:
    '''
    Encapsulate creation of a uuid4 as `str` as used
    for creating `Actor.uid: tuple[str, str]` and/or
    `.msg.types.Aid`.

    '''
    return str(uuid4())
def wrap_address(
    addr: UnwrappedAddress
) -> Address:
    '''
    Wrap an `UnwrappedAddress` as an `Address`-type based
    on matching builtin python data-structures which we adhoc
    use for each.

    XXX NOTE, great care must be taken to ensure
    `UnwrappedAddress` cases are **definitely unique** otherwise the
    wrong transport backend may be loaded and will break many
    low-level things in our runtime in a not-fun-to-debug way!

    XD

    '''
    if is_wrapped_addr(addr):
        return addr

    cls: Type|None = None
    # if 'sock' in addr[0]:
    #     import pdbp; pdbp.set_trace()
    match addr:

        # classic network socket-address as tuple/list
        case (
            (str(), int())
            |
            [str(), int()]
        ):
            cls = TCPAddress

        case (
            # (str()|Path(), str()|Path()),
            # ^TODO? uhh why doesn't this work!?
            (_, filename)
        ) if type(filename) is str:
            cls = UDSAddress

        # likely an unset UDS or TCP reg address as defaulted in
        # `_state._runtime_vars['_root_mailbox']`
        #
        # TODO? figure out when/if we even need this?
        case (
            None
            |
            [None, None]
        ):
            cls: Type[Address] = get_address_cls(_def_tpt_proto)
            addr: UnwrappedAddress = cls.get_root().unwrap()

        case _:
            # import pdbp; pdbp.set_trace()
            raise TypeError(
                f'Can not wrap unwrapped-address ??\n'
                f'type(addr): {type(addr)!r}\n'
                f'addr: {addr!r}\n'
            )

    return cls.from_addr(addr)
def default_lo_addrs(
    transports: list[str],
) -> list[Type[Address]]:
    '''
    Return the default, host-singleton, registry address
    for an input transport key set.

    '''
    return [
        _default_lo_addrs[transport]
        for transport in transports
    ]


@@ -31,8 +31,12 @@ def parse_uid(arg):
    return str(name), str(uuid)  # ensures str encoding

def parse_ipaddr(arg):
-    host, port = literal_eval(arg)
-    return (str(host), int(port))
+    try:
+        return literal_eval(arg)
+    except (ValueError, SyntaxError):
+        # UDS: try to interpret as a straight up str
+        return arg

if __name__ == "__main__":
@@ -46,8 +50,8 @@ if __name__ == "__main__":
    args = parser.parse_args()

    subactor = Actor(
-        args.uid[0],
-        uid=args.uid[1],
+        name=args.uid[0],
+        uuid=args.uid[1],
        loglevel=args.loglevel,
        spawn_method="trio"
    )
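The new `parse_ipaddr()` fallback leans on `ast.literal_eval()`: a `"(host, port)"` tuple-string round-trips to a TCP 2-tuple, while a plain UDS filesystem path fails to parse as a python literal and is passed through as a `str`. A self-contained restatement:

```python
from ast import literal_eval


def parse_ipaddr(arg: str):
    try:
        # TCP case: "(host, port)" evals to a tuple
        return literal_eval(arg)
    except (ValueError, SyntaxError):
        # UDS case: a bare fs-path is not a valid literal,
        # so interpret it as a straight up str
        return arg
```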


@@ -105,7 +105,7 @@ from ._state import (
if TYPE_CHECKING:
    from ._portal import Portal
    from ._runtime import Actor
-    from .ipc import MsgTransport
+    from .ipc._transport import MsgTransport
    from .devx._frame_stack import (
        CallerInfo,
    )
@@ -366,7 +366,7 @@ class Context:
            # f' ---\n'
            f' |_ipc: {self.dst_maddr}\n'
            # f'   dst_maddr{ds}{self.dst_maddr}\n'
-            f"   uid{ds}'{self.chan.uid}'\n"
+            f"   uid{ds}'{self.chan.aid}'\n"
            f"   cid{ds}'{self.cid}'\n"
            # f' ---\n'
            f'\n'
@@ -859,19 +859,10 @@ class Context:
    @property
    def dst_maddr(self) -> str:
        chan: Channel = self.chan
-        dst_addr, dst_port = chan.raddr
        trans: MsgTransport = chan.transport
        # cid: str = self.cid
        # cid_head, cid_tail = cid[:6], cid[-6:]
-        return (
-            f'/ipv4/{dst_addr}'
-            f'/{trans.name_key}/{dst_port}'
-            # f'/{self.chan.uid[0]}'
-            # f'/{self.cid}'
-            # f'/cid={cid_head}..{cid_tail}'
-            # TODO: ? not use this ^ right ?
-        )
+        return trans.maddr

    dmaddr = dst_maddr
@@ -954,10 +945,10 @@ class Context:
        reminfo: str = (
            # ' =>\n'
            # f'Context.cancel() => {self.chan.uid}\n'
+            f'\n'
            f'c)=> {self.chan.uid}\n'
-            # f'{self.chan.uid}\n'
-            f' |_ @{self.dst_maddr}\n'
-            f' >> {self.repr_rpc}\n'
+            f' |_[{self.dst_maddr}\n'
+            f' >>{self.repr_rpc}\n'
            # f' >> {self._nsf}() -> {codec}[dict]:\n\n'
            # TODO: pull msg-type from spec re #320
        )


@@ -30,6 +30,11 @@ from contextlib import asynccontextmanager as acm
from tractor.log import get_logger
from .trionics import gather_contexts
from .ipc import _connect_chan, Channel
+from ._addr import (
+    UnwrappedAddress,
+    Address,
+    wrap_address
+)
from ._portal import (
    Portal,
    open_portal,
@@ -38,10 +43,12 @@ from ._portal import (
from ._state import (
    current_actor,
    _runtime_vars,
+    _def_tpt_proto,
)

if TYPE_CHECKING:
    from ._runtime import Actor
+    from .ipc._server import IPCServer

log = get_logger(__name__)
@@ -49,9 +56,7 @@ log = get_logger(__name__)
@acm
async def get_registry(
-    host: str,
-    port: int,
+    addr: UnwrappedAddress|None = None,
) -> AsyncGenerator[
    Portal | LocalPortal | None,
    None,
@@ -69,13 +74,15 @@ async def get_registry(
        # (likely a re-entrant call from the arbiter actor)
        yield LocalPortal(
            actor,
-            Channel((host, port))
+            Channel(transport=None)
+            # ^XXX, we DO NOT actually provide nor connect an
+            # underlying transport since this is merely an API shim.
        )
    else:
        # TODO: try to look pre-existing connection from
-        # `Actor._peers` and use it instead?
+        # `IPCServer._peers` and use it instead?
        async with (
-            _connect_chan(host, port) as chan,
+            _connect_chan(addr) as chan,
            open_portal(chan) as regstr_ptl,
        ):
            yield regstr_ptl
@@ -89,11 +96,10 @@ async def get_root(
    # TODO: rename mailbox to `_root_maddr` when we finally
    # add and impl libp2p multi-addrs?
-    host, port = _runtime_vars['_root_mailbox']
-    assert host is not None
+    addr = _runtime_vars['_root_mailbox']

    async with (
-        _connect_chan(host, port) as chan,
+        _connect_chan(addr) as chan,
        open_portal(chan, **kwargs) as portal,
    ):
        yield portal
@@ -106,17 +112,23 @@ def get_peer_by_name(
) -> list[Channel]|None:  # at least 1
    '''
    Scan for an existing connection (set) to a named actor
-    and return any channels from `Actor._peers`.
+    and return any channels from `IPCServer._peers: dict`.

    This is an optimization method over querying the registrar for
    the same info.

    '''
    actor: Actor = current_actor()
-    to_scan: dict[tuple, list[Channel]] = actor._peers.copy()
-    pchan: Channel|None = actor._parent_chan
-    if pchan:
-        to_scan[pchan.uid].append(pchan)
+    server: IPCServer = actor.ipc_server
+    to_scan: dict[tuple, list[Channel]] = server._peers.copy()
+
+    # TODO: is this ever needed? creates a duplicate channel on actor._peers
+    # when multiple find_actor calls are made to same actor from a single ctx
+    # which causes actor exit to hang waiting forever on
+    # `actor._no_more_peers.wait()` in `_runtime.async_main`
+    # pchan: Channel|None = actor._parent_chan
+    # if pchan:
+    #     to_scan[pchan.uid].append(pchan)

    for aid, chans in to_scan.items():
        _, peer_name = aid
@@ -134,10 +146,10 @@ def get_peer_by_name(
@acm
async def query_actor(
    name: str,
-    regaddr: tuple[str, int]|None = None,
+    regaddr: UnwrappedAddress|None = None,
) -> AsyncGenerator[
-    tuple[str, int]|None,
+    UnwrappedAddress|None,
    None,
]:
    '''
@@ -163,31 +175,31 @@ async def query_actor(
        return

    reg_portal: Portal
-    regaddr: tuple[str, int] = regaddr or actor.reg_addrs[0]
-    async with get_registry(*regaddr) as reg_portal:
+    regaddr: Address = wrap_address(regaddr) or actor.reg_addrs[0]
+    async with get_registry(regaddr) as reg_portal:
        # TODO: return portals to all available actors - for now
        # just the last one that registered
-        sockaddr: tuple[str, int] = await reg_portal.run_from_ns(
+        addr: UnwrappedAddress = await reg_portal.run_from_ns(
            'self',
            'find_actor',
            name=name,
        )
-        yield sockaddr
+        yield addr
@acm
async def maybe_open_portal(
-    addr: tuple[str, int],
+    addr: UnwrappedAddress,
    name: str,
):
    async with query_actor(
        name=name,
        regaddr=addr,
-    ) as sockaddr:
+    ) as addr:
        pass

-    if sockaddr:
-        async with _connect_chan(*sockaddr) as chan:
+    if addr:
+        async with _connect_chan(addr) as chan:
            async with open_portal(chan) as portal:
                yield portal
    else:
@@ -197,7 +209,8 @@ async def maybe_open_portal(
@acm
async def find_actor(
    name: str,
-    registry_addrs: list[tuple[str, int]]|None = None,
+    registry_addrs: list[UnwrappedAddress]|None = None,
+    enable_transports: list[str] = [_def_tpt_proto],

    only_first: bool = True,
    raise_on_none: bool = False,
@@ -224,15 +237,15 @@ async def find_actor(
        # XXX NOTE: make sure to dynamically read the value on
        # every call since something may change it globally (eg.
        # like in our discovery test suite)!
-        from . import _root
+        from ._addr import default_lo_addrs
        registry_addrs = (
            _runtime_vars['_registry_addrs']
            or
-            _root._default_lo_addrs
+            default_lo_addrs(enable_transports)
        )

    maybe_portals: list[
-        AsyncContextManager[tuple[str, int]]
+        AsyncContextManager[UnwrappedAddress]
    ] = list(
        maybe_open_portal(
            addr=addr,
@acm @acm
async def wait_for_actor( async def wait_for_actor(
name: str, name: str,
registry_addr: tuple[str, int] | None = None, registry_addr: UnwrappedAddress | None = None,
) -> AsyncGenerator[Portal, None]: ) -> AsyncGenerator[Portal, None]:
''' '''
@ -291,7 +304,7 @@ async def wait_for_actor(
yield peer_portal yield peer_portal
return return
regaddr: tuple[str, int] = ( regaddr: UnwrappedAddress = (
registry_addr registry_addr
or or
actor.reg_addrs[0] actor.reg_addrs[0]
@ -299,8 +312,8 @@ async def wait_for_actor(
# TODO: use `.trionics.gather_contexts()` like # TODO: use `.trionics.gather_contexts()` like
# above in `find_actor()` as well? # above in `find_actor()` as well?
reg_portal: Portal reg_portal: Portal
async with get_registry(*regaddr) as reg_portal: async with get_registry(regaddr) as reg_portal:
sockaddrs = await reg_portal.run_from_ns( addrs = await reg_portal.run_from_ns(
'self', 'self',
'wait_for_actor', 'wait_for_actor',
name=name, name=name,
@ -308,8 +321,8 @@ async def wait_for_actor(
# get latest registered addr by default? # get latest registered addr by default?
# TODO: offer multi-portal yields in multi-homed case? # TODO: offer multi-portal yields in multi-homed case?
sockaddr: tuple[str, int] = sockaddrs[-1] addr: UnwrappedAddress = addrs[-1]
async with _connect_chan(*sockaddr) as chan: async with _connect_chan(addr) as chan:
async with open_portal(chan) as portal: async with open_portal(chan) as portal:
yield portal yield portal


@@ -37,6 +37,7 @@ from .log import (
from . import _state
from .devx import _debug
from .to_asyncio import run_as_asyncio_guest
+from ._addr import UnwrappedAddress
from ._runtime import (
    async_main,
    Actor,
@@ -52,10 +53,10 @@ log = get_logger(__name__)

def _mp_main(
    actor: Actor,
-    accept_addrs: list[tuple[str, int]],
+    accept_addrs: list[UnwrappedAddress],
    forkserver_info: tuple[Any, Any, Any, Any, Any],
    start_method: SpawnMethodKey,
-    parent_addr: tuple[str, int] | None = None,
+    parent_addr: UnwrappedAddress | None = None,
    infect_asyncio: bool = False,

) -> None:
@@ -206,7 +207,7 @@ def nest_from_op(
def _trio_main(
    actor: Actor,
    *,
-    parent_addr: tuple[str, int] | None = None,
+    parent_addr: UnwrappedAddress|None = None,
    infect_asyncio: bool = False,

) -> None:


@@ -23,7 +23,6 @@ import builtins
import importlib
from pprint import pformat
from pdb import bdb
-import sys
from types import (
    TracebackType,
)
@@ -72,8 +71,22 @@ log = get_logger('tractor')
_this_mod = importlib.import_module(__name__)

-class ActorFailure(Exception):
-    "General actor failure"
+class RuntimeFailure(RuntimeError):
+    '''
+    General `Actor`-runtime failure due to,
+
+    - a bad runtime-env,
+    - failed spawning (bad input to process),
+    - API usage.
+
+    '''
+
+
+class ActorFailure(RuntimeFailure):
+    '''
+    `Actor` failed to boot before/after spawn
+
+    '''

class InternalError(RuntimeError):
@@ -126,6 +139,12 @@ class TrioTaskExited(Exception):
    '''

+class DebugRequestError(RuntimeError):
+    '''
+    Failed to request stdio lock from root actor!
+
+    '''

# NOTE: more or less should be close to these:
#     'boxed_type',
#     'src_type',
@@ -191,6 +210,8 @@ def get_err_type(type_name: str) -> BaseException|None:
        ):
            return type_ref

+    return None
def pack_from_raise( def pack_from_raise(
local_err: ( local_err: (
@@ -521,7 +542,6 @@ class RemoteActorError(Exception):
            if val:
                _repr += f'{key}={val_str}{end_char}'

        return _repr

    def reprol(self) -> str:
@@ -600,56 +620,9 @@ class RemoteActorError(Exception):
        the type name is already implicitly shown by python).

        '''
-        header: str = ''
-        body: str = ''
-        message: str = ''
-
-        # XXX when the currently raised exception is this instance,
-        # we do not ever use the "type header" style repr.
-        is_being_raised: bool = False
-        if (
-            (exc := sys.exception())
-            and
-            exc is self
-        ):
-            is_being_raised: bool = True
-
-        with_type_header: bool = (
-            with_type_header
-            and
-            not is_being_raised
-        )
-
-        # <RemoteActorError( .. )> style
-        if with_type_header:
-            header: str = f'<{type(self).__name__}('
-
-        if message := self._message:
-            # split off the first line so, if needed, it isn't
-            # indented the same like the "boxed content" which
-            # since there is no `.tb_str` is just the `.message`.
-            lines: list[str] = message.splitlines()
-            first: str = lines[0]
-            message: str = message.removeprefix(first)
-
-            # with a type-style header we,
-            # - have no special message "first line" extraction/handling
-            # - place the message a space in from the header:
-            #   `MsgTypeError( <message> ..`
-            #                  ^-here
-            # - indent the `.message` inside the type body.
-            if with_type_header:
-                first = f' {first} )>'
-
-            message: str = textwrap.indent(
-                message,
-                prefix=' '*2,
-            )
-            message: str = first + message
-
        # IFF there is an embedded traceback-str we always
        # draw the ascii-box around it.
+        body: str = ''
        if tb_str := self.tb_str:
            fields: str = self._mk_fields_str(
                _body_fields
@@ -670,21 +643,15 @@ class RemoteActorError(Exception):
                boxer_header=self.relay_uid,
            )

-        tail = ''
-        if (
-            with_type_header
-            and not message
-        ):
-            tail: str = '>'
-
-        return (
-            header
-            +
-            message
-            +
-            f'{body}'
-            +
-            tail
-        )
+        # !TODO, it'd be nice to import these top level without
+        # cycles!
+        from tractor.devx.pformat import (
+            pformat_exc,
+        )
+        return pformat_exc(
+            exc=self,
+            with_type_header=with_type_header,
+            body=body,
+        )

    __repr__ = pformat
@@ -962,7 +929,7 @@ class StreamOverrun(
    '''

-class TransportClosed(trio.BrokenResourceError):
+class TransportClosed(Exception):
    '''
    IPC transport (protocol) connection was closed or broke and
    indicates that the wrapping communication `Channel` can no longer
@@ -973,24 +940,39 @@ class TransportClosed(trio.BrokenResourceError):
        self,
        message: str,
        loglevel: str = 'transport',
-        cause: BaseException|None = None,
+        src_exc: Exception|None = None,
        raise_on_report: bool = False,

    ) -> None:
        self.message: str = message
-        self._loglevel = loglevel
+        self._loglevel: str = loglevel
        super().__init__(message)

-        if cause is not None:
-            self.__cause__ = cause
+        self._src_exc = src_exc
+        # set the cause manually if not already set by python
+        if (
+            src_exc is not None
+            and
+            not self.__cause__
+        ):
+            self.__cause__ = src_exc

        # flag to toggle whether the msg loop should raise
        # the exc in its `TransportClosed` handler block.
        self._raise_on_report = raise_on_report

+    @property
+    def src_exc(self) -> Exception:
+        return (
+            self.__cause__
+            or
+            self._src_exc
+        )

    def report_n_maybe_raise(
        self,
        message: str|None = None,
+        hide_tb: bool = True,
    ) -> None:
        '''
@@ -998,9 +980,10 @@ class TransportClosed(trio.BrokenResourceError):
        for this error.

        '''
+        __tracebackhide__: bool = hide_tb
        message: str = message or self.message
        # when a cause is set, slap it onto the log emission.
-        if cause := self.__cause__:
+        if cause := self.src_exc:
            cause_tb_str: str = ''.join(
                traceback.format_tb(cause.__traceback__)
            )
@@ -1009,13 +992,86 @@ class TransportClosed(trio.BrokenResourceError):
            f' {cause}\n'  # exc repr
        )

-        getattr(log, self._loglevel)(message)
+        getattr(
+            log,
+            self._loglevel
+        )(message)

        # some errors we want to blow up from
        # inside the RPC msg loop
        if self._raise_on_report:
            raise self from cause
    @classmethod
    def repr_src_exc(
        cls,
        src_exc: Exception|None = None,
    ) -> str:
        if src_exc is None:
            return '<unknown>'

        src_msg: tuple[str] = src_exc.args
        src_exc_repr: str = (
            f'{type(src_exc).__name__}[ {src_msg} ]'
        )
        return src_exc_repr

    def pformat(self) -> str:
        from tractor.devx.pformat import (
            pformat_exc,
        )
        return pformat_exc(
            exc=self,
        )

    # delegate to `str`-ified pformat
    __repr__ = pformat

    @classmethod
    def from_src_exc(
        cls,
        src_exc: (
            Exception|
            trio.ClosedResourceError|
            trio.BrokenResourceError
        ),
        message: str,
        body: str = '',
        **init_kws,
    ) -> TransportClosed:
        '''
        Convenience constructor for creation from an underlying
        `trio`-sourced async-resource/chan/stream error.

        Embeds the original `src_exc`'s repr within the
        `Exception.args` via a first-line-in-`.message`-put-in-header
        pre-processing and allows inserting additional content beyond
        the main message via a `body: str`.

        '''
        repr_src_exc: str = cls.repr_src_exc(
            src_exc,
        )
        next_line: str = f'  src_exc: {repr_src_exc}\n'
        if body:
            body: str = textwrap.indent(
                body,
                prefix=' '*2,
            )

        return TransportClosed(
            message=(
                message
                +
                next_line
                +
                body
            ),
            src_exc=src_exc,
            **init_kws,
        )
class NoResult(RuntimeError):
    "No final result is expected for this actor"


@@ -52,8 +52,8 @@ from .msg import (
    Return,
)
from ._exceptions import (
-    # unpack_error,
    NoResult,
+    TransportClosed,
)
from ._context import (
    Context,
@@ -107,6 +107,10 @@ class Portal:
        # point.
        self._expect_result_ctx: Context|None = None
        self._streams: set[MsgStream] = set()
+        # TODO, this should be PRIVATE (and never used publicly)! since it's just
+        # a cached ref to the local runtime instead of calling
+        # `current_actor()` everywhere.. XD
        self.actor: Actor = current_actor()
    @property
@@ -171,7 +175,7 @@ class Portal:
        # not expecting a "main" result
        if self._expect_result_ctx is None:
            log.warning(
-                f"Portal for {self.channel.uid} not expecting a final"
+                f"Portal for {self.channel.aid} not expecting a final"
                " result?\nresult() should only be called if subactor"
                " was spawned with `ActorNursery.run_in_actor()`")
            return NoResult
@@ -218,7 +222,7 @@ class Portal:
        # IPC calls
        if self._streams:
            log.cancel(
-                f"Cancelling all streams with {self.channel.uid}")
+                f"Cancelling all streams with {self.channel.aid}")
            for stream in self._streams.copy():
                try:
                    await stream.aclose()
@@ -263,7 +267,7 @@ class Portal:
            return False

        reminfo: str = (
-            f'c)=> {self.channel.uid}\n'
+            f'c)=> {self.channel.aid}\n'
            f' |_{chan}\n'
        )
        log.cancel(
@@ -301,14 +305,34 @@ class Portal:
            return False

        except (
+            # XXX, should never really get raised unless we aren't
+            # wrapping them in the below type by mistake?
+            #
+            # Leaving the catch here for now until we're very sure
+            # all the cases (for various tpt protos) have indeed been
+            # re-wrapped ;p
            trio.ClosedResourceError,
            trio.BrokenResourceError,
-        ):
-            log.debug(
-                'IPC chan for actor already closed or broken?\n\n'
-                f'{self.channel.uid}\n'
+
+            TransportClosed,
+        ) as tpt_err:
+            report: str = (
+                f'IPC chan for actor already closed or broken?\n\n'
+                f'{self.channel.aid}\n'
                f' |_{self.channel}\n'
            )
+            match tpt_err:
+                case TransportClosed():
+                    log.debug(report)
+                case _:
+                    report += (
+                        f'\n'
+                        f'Unhandled low-level transport-closed/error during\n'
+                        f'Portal.cancel_actor()` request?\n'
+                        f'<{type(tpt_err).__name__}( {tpt_err} )>\n'
+                    )
+                    log.warning(report)
+
            return False
    # TODO: do we still need this for low level `Actor`-runtime
@@ -504,8 +528,12 @@ class LocalPortal:
        return it's result.

        '''
-        obj = self.actor if ns == 'self' else importlib.import_module(ns)
-        func = getattr(obj, func_name)
+        obj = (
+            self.actor
+            if ns == 'self'
+            else importlib.import_module(ns)
+        )
+        func: Callable = getattr(obj, func_name)
        return await func(**kwargs)

@@ -543,15 +571,17 @@ async def open_portal(
        await channel.connect()
        was_connected = True

-    if channel.uid is None:
-        await actor._do_handshake(channel)
+    if channel.aid is None:
+        await channel._do_handshake(
+            aid=actor.aid,
+        )

    msg_loop_cs: trio.CancelScope|None = None
    if start_msg_loop:
-        from ._runtime import process_messages
+        from . import _rpc
        msg_loop_cs = await tn.start(
            partial(
-                process_messages,
+                _rpc.process_messages,
                actor,
                channel,
                # if the local task is cancelled we want to keep

@@ -18,7 +18,9 @@
Root actor runtime ignition(s).

'''
-from contextlib import asynccontextmanager as acm
+from contextlib import (
+    asynccontextmanager as acm,
+)
from functools import partial
import importlib
import inspect
@@ -26,7 +28,10 @@ import logging
import os
import signal
import sys
-from typing import Callable
+from typing import (
+    Any,
+    Callable,
+)
import warnings
@@ -43,33 +48,111 @@ from .devx import _debug
from . import _spawn
from . import _state
from . import log
-from .ipc import _connect_chan
-from ._exceptions import is_multi_cancelled
-
-
-# set at startup and after forks
-_default_host: str = '127.0.0.1'
-_default_port: int = 1616
-
-# default registry always on localhost
-_default_lo_addrs: list[tuple[str, int]] = [(
-    _default_host,
-    _default_port,
-)]
+from .ipc import (
+    _connect_chan,
+)
+from ._addr import (
+    Address,
+    UnwrappedAddress,
+    default_lo_addrs,
+    mk_uuid,
+    wrap_address,
+)
+from ._exceptions import (
+    RuntimeFailure,
+    is_multi_cancelled,
+)

logger = log.get_logger('tractor')
# TODO: stick this in a `@acm` defined in `devx._debug`?
# -[ ] also maybe consider making this a `wrapt`-deco to
#     save an indent level?
#
@acm
async def maybe_block_bp(
    debug_mode: bool,
    maybe_enable_greenback: bool,
) -> bool:
    # Override the global debugger hook to make it play nice with
    # ``trio``, see much discussion in:
    # https://github.com/python-trio/trio/issues/1155#issuecomment-742964018
    builtin_bp_handler: Callable = sys.breakpointhook
    orig_bp_path: str|None = os.environ.get(
        'PYTHONBREAKPOINT',
        None,
    )
    bp_blocked: bool
    if (
        debug_mode
        and maybe_enable_greenback
        and (
            maybe_mod := await _debug.maybe_init_greenback(
                raise_not_found=False,
            )
        )
    ):
        logger.info(
            f'Found `greenback` installed @ {maybe_mod}\n'
            'Enabling `tractor.pause_from_sync()` support!\n'
        )
        os.environ['PYTHONBREAKPOINT'] = (
            'tractor.devx._debug._sync_pause_from_builtin'
        )
        _state._runtime_vars['use_greenback'] = True
        bp_blocked = False

    else:
        # TODO: disable `breakpoint()` by default (without
        # `greenback`) since it will break any multi-actor
        # usage by a clobbered TTY's stdstreams!
        def block_bps(*args, **kwargs):
            raise RuntimeError(
                'Trying to use `breakpoint()` eh?\n\n'
                'Welp, `tractor` blocks `breakpoint()` built-in calls by default!\n'
                'If you need to use it please install `greenback` and set '
                '`debug_mode=True` when opening the runtime '
                '(either via `.open_nursery()` or `open_root_actor()`)\n'
            )

        sys.breakpointhook = block_bps
        # lol ok,
        # https://docs.python.org/3/library/sys.html#sys.breakpointhook
        os.environ['PYTHONBREAKPOINT'] = "0"
        bp_blocked = True

    try:
        yield bp_blocked
    finally:
        # restore any prior built-in `breakpoint()` hook state
        if builtin_bp_handler is not None:
            sys.breakpointhook = builtin_bp_handler

        if orig_bp_path is not None:
            os.environ['PYTHONBREAKPOINT'] = orig_bp_path

        else:
            # clear env back to having no entry
            os.environ.pop('PYTHONBREAKPOINT', None)
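The hook-swapping half of `maybe_block_bp()` (save `sys.breakpointhook` and `PYTHONBREAKPOINT`, install a raising stub, restore both on exit) can be condensed into a minimal synchronous sketch; the `greenback` branch is omitted and `block_breakpoints` is a hypothetical name for illustration only:

```python
from contextlib import contextmanager
import os
import sys


@contextmanager
def block_breakpoints():
    # save prior hook + env-var state for restoration
    orig_hook = sys.breakpointhook
    orig_env = os.environ.get('PYTHONBREAKPOINT')

    def _blocked(*args, **kwargs):
        raise RuntimeError('`breakpoint()` is blocked in this runtime!')

    sys.breakpointhook = _blocked
    # see https://docs.python.org/3/library/sys.html#sys.breakpointhook
    os.environ['PYTHONBREAKPOINT'] = '0'
    try:
        yield
    finally:
        # restore any prior built-in `breakpoint()` hook state
        sys.breakpointhook = orig_hook
        if orig_env is not None:
            os.environ['PYTHONBREAKPOINT'] = orig_env
        else:
            os.environ.pop('PYTHONBREAKPOINT', None)
```

Wrapping the swap in an `@acm` (as the diff does) means the restore runs even if the root-actor body errors, the same guarantee the `finally:` gives here.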
@acm
async def open_root_actor(
    *,
    # defaults are above
-    registry_addrs: list[tuple[str, int]]|None = None,
+    registry_addrs: list[UnwrappedAddress]|None = None,

    # defaults are above
-    arbiter_addr: tuple[str, int]|None = None,
+    arbiter_addr: tuple[UnwrappedAddress]|None = None,

+    enable_transports: list[
+        # TODO, this should eventually be the pairs as
+        # defined by (codec, proto) as on `MsgTransport.
+        _state.TransportProtocolKey,
+    ]|None = None,

    name: str|None = 'root',
@@ -111,55 +194,38 @@ async def open_root_actor(
Runtime init entry point for ``tractor``. Runtime init entry point for ``tractor``.
''' '''
# XXX NEVER allow nested actor-trees!
if already_actor := _state.current_actor(err_on_no_runtime=False):
rtvs: dict[str, Any] = _state._runtime_vars
root_mailbox: list[str, int] = rtvs['_root_mailbox']
registry_addrs: list[list[str, int]] = rtvs['_registry_addrs']
raise RuntimeFailure(
f'A current actor already exists !?\n'
f'({already_actor}\n'
f'\n'
f'You can NOT open a second root actor from within '
f'an existing tree and the current root of this '
f'already exists !!\n'
f'\n'
f'_root_mailbox: {root_mailbox!r}\n'
f'_registry_addrs: {registry_addrs!r}\n'
)
async with maybe_block_bp(
debug_mode=debug_mode,
maybe_enable_greenback=maybe_enable_greenback,
):
if enable_transports is None:
enable_transports: list[str] = _state.current_ipc_protos()
# TODO! support multi-tpts per actor! Bo
assert (
len(enable_transports) == 1
), 'No multi-tpt support yet!'
_debug.hide_runtime_frames() _debug.hide_runtime_frames()
__tracebackhide__: bool = hide_tb __tracebackhide__: bool = hide_tb
# TODO: stick this in a `@cm` defined in `devx._debug`?
#
# Override the global debugger hook to make it play nice with
# ``trio``, see much discussion in:
# https://github.com/python-trio/trio/issues/1155#issuecomment-742964018
builtin_bp_handler: Callable = sys.breakpointhook
orig_bp_path: str|None = os.environ.get(
'PYTHONBREAKPOINT',
None,
)
if (
debug_mode
and maybe_enable_greenback
and (
maybe_mod := await _debug.maybe_init_greenback(
raise_not_found=False,
)
)
):
logger.info(
f'Found `greenback` installed @ {maybe_mod}\n'
'Enabling `tractor.pause_from_sync()` support!\n'
)
os.environ['PYTHONBREAKPOINT'] = (
'tractor.devx._debug._sync_pause_from_builtin'
)
_state._runtime_vars['use_greenback'] = True
else:
# TODO: disable `breakpoint()` by default (without
# `greenback`) since it will break any multi-actor
# usage by a clobbered TTY's stdstreams!
def block_bps(*args, **kwargs):
raise RuntimeError(
'Trying to use `breakpoint()` eh?\n\n'
'Welp, `tractor` blocks `breakpoint()` built-in calls by default!\n'
'If you need to use it please install `greenback` and set '
'`debug_mode=True` when opening the runtime '
'(either via `.open_nursery()` or `open_root_actor()`)\n'
)
sys.breakpointhook = block_bps
# lol ok,
# https://docs.python.org/3/library/sys.html#sys.breakpointhook
os.environ['PYTHONBREAKPOINT'] = "0"
# attempt to retreive ``trio``'s sigint handler and stash it # attempt to retreive ``trio``'s sigint handler and stash it
# on our debugger lock state. # on our debugger lock state.
_debug.DebugStatus._trio_handler = signal.getsignal(signal.SIGINT) _debug.DebugStatus._trio_handler = signal.getsignal(signal.SIGINT)
@@ -186,6 +252,7 @@ async def open_root_actor(
if start_method is not None: if start_method is not None:
_spawn.try_set_start_method(start_method) _spawn.try_set_start_method(start_method)
# TODO! remove this ASAP!
if arbiter_addr is not None: if arbiter_addr is not None:
warnings.warn( warnings.warn(
'`arbiter_addr` is now deprecated\n' '`arbiter_addr` is now deprecated\n'
@@ -195,11 +262,11 @@ async def open_root_actor(
) )
registry_addrs = [arbiter_addr] registry_addrs = [arbiter_addr]
registry_addrs: list[tuple[str, int]] = ( if not registry_addrs:
registry_addrs registry_addrs: list[UnwrappedAddress] = default_lo_addrs(
or enable_transports
_default_lo_addrs
) )
assert registry_addrs assert registry_addrs
loglevel = ( loglevel = (
@@ -248,10 +315,10 @@ async def open_root_actor(
enable_stack_on_sig() enable_stack_on_sig()
# closed into below ping task-func # closed into below ping task-func
ponged_addrs: list[tuple[str, int]] = [] ponged_addrs: list[UnwrappedAddress] = []
async def ping_tpt_socket( async def ping_tpt_socket(
addr: tuple[str, int], addr: UnwrappedAddress,
timeout: float = 1, timeout: float = 1,
) -> None: ) -> None:
''' '''
@@ -271,7 +338,7 @@ async def open_root_actor(
# be better to eventually have a "discovery" protocol # be better to eventually have a "discovery" protocol
# with basic handshake instead? # with basic handshake instead?
with trio.move_on_after(timeout): with trio.move_on_after(timeout):
async with _connect_chan(*addr): async with _connect_chan(addr):
ponged_addrs.append(addr) ponged_addrs.append(addr)
except OSError: except OSError:
@@ -284,10 +351,10 @@ async def open_root_actor(
for addr in registry_addrs: for addr in registry_addrs:
tn.start_soon( tn.start_soon(
ping_tpt_socket, ping_tpt_socket,
tuple(addr), # TODO: just drop this requirement? addr,
) )
trans_bind_addrs: list[tuple[str, int]] = [] trans_bind_addrs: list[UnwrappedAddress] = []
# Create a new local root-actor instance which IS NOT THE # Create a new local root-actor instance which IS NOT THE
# REGISTRAR # REGISTRAR
@@ -305,15 +372,18 @@ async def open_root_actor(
actor = Actor( actor = Actor(
name=name or 'anonymous', name=name or 'anonymous',
uuid=mk_uuid(),
registry_addrs=ponged_addrs, registry_addrs=ponged_addrs,
loglevel=loglevel, loglevel=loglevel,
enable_modules=enable_modules, enable_modules=enable_modules,
) )
# DO NOT use the registry_addrs as the transport server # DO NOT use the registry_addrs as the transport server
# addrs for this new non-registar, root-actor. # addrs for this new non-registar, root-actor.
for host, port in ponged_addrs: for addr in ponged_addrs:
# NOTE: zero triggers dynamic OS port allocation waddr: Address = wrap_address(addr)
trans_bind_addrs.append((host, 0)) trans_bind_addrs.append(
waddr.get_random(bindspace=waddr.bindspace)
)
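The old left-hand code bound to port 0, which asks the OS to allocate a free ephemeral port dynamically; the new code instead derives a random bind address via `Address.get_random()`. The port-0 trick in isolation:

```python
import socket

# bind to port 0: the OS picks a free ephemeral port for us
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind(('127.0.0.1', 0))
    host, port = s.getsockname()
```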
# Start this local actor as the "registrar", aka a regular # Start this local actor as the "registrar", aka a regular
# actor who manages the local registry of "mailboxes" of # actor who manages the local registry of "mailboxes" of
@@ -322,7 +392,7 @@ async def open_root_actor(
# NOTE that if the current actor IS THE REGISTAR, the # NOTE that if the current actor IS THE REGISTAR, the
# following init steps are taken: # following init steps are taken:
# - the tranport layer server is bound to each (host, port) # - the tranport layer server is bound to each addr
# pair defined in provided registry_addrs, or the default. # pair defined in provided registry_addrs, or the default.
trans_bind_addrs = registry_addrs trans_bind_addrs = registry_addrs
@@ -336,7 +406,8 @@ async def open_root_actor(
# https://github.com/goodboy/tractor/issues/296 # https://github.com/goodboy/tractor/issues/296
actor = Arbiter( actor = Arbiter(
name or 'registrar', name=name or 'registrar',
uuid=mk_uuid(),
registry_addrs=registry_addrs, registry_addrs=registry_addrs,
loglevel=loglevel, loglevel=loglevel,
enable_modules=enable_modules, enable_modules=enable_modules,
@@ -414,7 +485,11 @@ async def open_root_actor(
err, err,
) )
): ):
logger.exception('Root actor crashed\n') logger.exception(
'Root actor crashed\n'
f'>x)\n'
f' |_{actor}\n'
)
# ALWAYS re-raise any error bubbled up from the # ALWAYS re-raise any error bubbled up from the
# runtime! # runtime!
@@ -431,30 +506,19 @@ async def open_root_actor(
# tempn.start_soon(an.exited.wait) # tempn.start_soon(an.exited.wait)
logger.info( logger.info(
'Closing down root actor' f'Closing down root actor\n'
f'>)\n'
f'|_{actor}\n'
) )
await actor.cancel(None) # self cancel await actor.cancel(None) # self cancel
finally: finally:
_state._current_actor = None _state._current_actor = None
_state._last_actor_terminated = actor _state._last_actor_terminated = actor
logger.runtime(
# restore built-in `breakpoint()` hook state f'Root actor terminated\n'
if ( f')>\n'
debug_mode f' |_{actor}\n'
and )
maybe_enable_greenback
):
if builtin_bp_handler is not None:
sys.breakpointhook = builtin_bp_handler
if orig_bp_path is not None:
os.environ['PYTHONBREAKPOINT'] = orig_bp_path
else:
# clear env back to having no entry
os.environ.pop('PYTHONBREAKPOINT', None)
logger.runtime("Root actor terminated")
def run_daemon( def run_daemon(
@@ -462,7 +526,7 @@ def run_daemon(
# runtime kwargs # runtime kwargs
name: str | None = 'root', name: str | None = 'root',
registry_addrs: list[tuple[str, int]] = _default_lo_addrs, registry_addrs: list[UnwrappedAddress]|None = None,
start_method: str | None = None, start_method: str | None = None,
debug_mode: bool = False, debug_mode: bool = False,


@@ -1156,7 +1156,7 @@ async def process_messages(
trio.Event(), trio.Event(),
) )
# runtime-scoped remote (internal) error # XXX RUNTIME-SCOPED! remote (likely internal) error
# (^- bc no `Error.cid` -^) # (^- bc no `Error.cid` -^)
# #
# NOTE: this is the non-rpc error case, that # NOTE: this is the non-rpc error case, that
@@ -1219,8 +1219,10 @@ async def process_messages(
# -[ ] figure out how this will break with other transports? # -[ ] figure out how this will break with other transports?
tc.report_n_maybe_raise( tc.report_n_maybe_raise(
message=( message=(
f'peer IPC channel closed abruptly?\n\n' f'peer IPC channel closed abruptly?\n'
f'<=x {chan}\n' f'\n'
f'<=x[\n'
f' {chan}\n'
f' |_{chan.raddr}\n\n' f' |_{chan.raddr}\n\n'
) )
+ +

File diff suppressed because it is too large


@@ -46,19 +46,23 @@ from tractor._state import (
_runtime_vars, _runtime_vars,
) )
from tractor.log import get_logger from tractor.log import get_logger
from tractor._addr import UnwrappedAddress
from tractor._portal import Portal from tractor._portal import Portal
from tractor._runtime import Actor from tractor._runtime import Actor
from tractor._entry import _mp_main from tractor._entry import _mp_main
from tractor._exceptions import ActorFailure from tractor._exceptions import ActorFailure
from tractor.msg.types import ( from tractor.msg.types import (
Aid,
SpawnSpec, SpawnSpec,
) )
if TYPE_CHECKING: if TYPE_CHECKING:
from ipc import IPCServer
from ._supervise import ActorNursery from ._supervise import ActorNursery
ProcessType = TypeVar('ProcessType', mp.Process, trio.Process) ProcessType = TypeVar('ProcessType', mp.Process, trio.Process)
log = get_logger('tractor') log = get_logger('tractor')
# placeholder for an mp start context if so using that backend # placeholder for an mp start context if so using that backend
@@ -163,7 +167,7 @@ async def exhaust_portal(
# TODO: merge with above? # TODO: merge with above?
log.warning( log.warning(
'Cancelled portal result waiter task:\n' 'Cancelled portal result waiter task:\n'
f'uid: {portal.channel.uid}\n' f'uid: {portal.channel.aid}\n'
f'error: {err}\n' f'error: {err}\n'
) )
return err return err
@@ -171,7 +175,7 @@ async def exhaust_portal(
else: else:
log.debug( log.debug(
f'Returning final result from portal:\n' f'Returning final result from portal:\n'
f'uid: {portal.channel.uid}\n' f'uid: {portal.channel.aid}\n'
f'result: {final}\n' f'result: {final}\n'
) )
return final return final
@@ -324,12 +328,12 @@ async def soft_kill(
see `.hard_kill()`). see `.hard_kill()`).
''' '''
uid: tuple[str, str] = portal.channel.uid peer_aid: Aid = portal.channel.aid
try: try:
log.cancel( log.cancel(
f'Soft killing sub-actor via portal request\n' f'Soft killing sub-actor via portal request\n'
f'\n' f'\n'
f'(c=> {portal.chan.uid}\n' f'(c=> {peer_aid}\n'
f' |_{proc}\n' f' |_{proc}\n'
) )
# wait on sub-proc to signal termination # wait on sub-proc to signal termination
@@ -378,7 +382,7 @@ async def soft_kill(
if proc.poll() is None: # type: ignore if proc.poll() is None: # type: ignore
log.warning( log.warning(
'Subactor still alive after cancel request?\n\n' 'Subactor still alive after cancel request?\n\n'
f'uid: {uid}\n' f'uid: {peer_aid}\n'
f'|_{proc}\n' f'|_{proc}\n'
) )
n.cancel_scope.cancel() n.cancel_scope.cancel()
@@ -392,8 +396,8 @@ async def new_proc(
errors: dict[tuple[str, str], Exception], errors: dict[tuple[str, str], Exception],
# passed through to actor main # passed through to actor main
bind_addrs: list[tuple[str, int]], bind_addrs: list[UnwrappedAddress],
parent_addr: tuple[str, int], parent_addr: UnwrappedAddress,
_runtime_vars: dict[str, Any], # serialized and sent to _child _runtime_vars: dict[str, Any], # serialized and sent to _child
*, *,
@@ -431,8 +435,8 @@ async def trio_proc(
errors: dict[tuple[str, str], Exception], errors: dict[tuple[str, str], Exception],
# passed through to actor main # passed through to actor main
bind_addrs: list[tuple[str, int]], bind_addrs: list[UnwrappedAddress],
parent_addr: tuple[str, int], parent_addr: UnwrappedAddress,
_runtime_vars: dict[str, Any], # serialized and sent to _child _runtime_vars: dict[str, Any], # serialized and sent to _child
*, *,
infect_asyncio: bool = False, infect_asyncio: bool = False,
@@ -459,6 +463,9 @@ async def trio_proc(
# the OS; it otherwise can be passed via the parent channel if # the OS; it otherwise can be passed via the parent channel if
# we prefer in the future (for privacy). # we prefer in the future (for privacy).
"--uid", "--uid",
# TODO, how to pass this over "wire" encodings like
# cmdline args?
# -[ ] maybe we can add an `Aid.min_tuple()` ?
str(subactor.uid), str(subactor.uid),
# Address the child must connect to on startup # Address the child must connect to on startup
"--parent_addr", "--parent_addr",
@@ -476,6 +483,7 @@ async def trio_proc(
cancelled_during_spawn: bool = False cancelled_during_spawn: bool = False
proc: trio.Process|None = None proc: trio.Process|None = None
ipc_server: IPCServer = actor_nursery._actor.ipc_server
try: try:
try: try:
proc: trio.Process = await trio.lowlevel.open_process(spawn_cmd, **proc_kwargs) proc: trio.Process = await trio.lowlevel.open_process(spawn_cmd, **proc_kwargs)
@@ -487,7 +495,7 @@ async def trio_proc(
# wait for actor to spawn and connect back to us # wait for actor to spawn and connect back to us
# channel should have handshake completed by the # channel should have handshake completed by the
# local actor by the time we get a ref to it # local actor by the time we get a ref to it
event, chan = await actor_nursery._actor.wait_for_peer( event, chan = await ipc_server.wait_for_peer(
subactor.uid subactor.uid
) )
@@ -520,15 +528,15 @@ async def trio_proc(
# send a "spawning specification" which configures the # send a "spawning specification" which configures the
# initial runtime state of the child. # initial runtime state of the child.
await chan.send( sspec = SpawnSpec(
SpawnSpec(
_parent_main_data=subactor._parent_main_data, _parent_main_data=subactor._parent_main_data,
enable_modules=subactor.enable_modules, enable_modules=subactor.enable_modules,
reg_addrs=subactor.reg_addrs, reg_addrs=subactor.reg_addrs,
bind_addrs=bind_addrs, bind_addrs=bind_addrs,
_runtime_vars=_runtime_vars, _runtime_vars=_runtime_vars,
) )
) log.runtime(f'Sending spawn spec: {str(sspec)}')
await chan.send(sspec)
# track subactor in current nursery # track subactor in current nursery
curr_actor: Actor = current_actor() curr_actor: Actor = current_actor()
@@ -638,8 +646,8 @@ async def mp_proc(
subactor: Actor, subactor: Actor,
errors: dict[tuple[str, str], Exception], errors: dict[tuple[str, str], Exception],
# passed through to actor main # passed through to actor main
bind_addrs: list[tuple[str, int]], bind_addrs: list[UnwrappedAddress],
parent_addr: tuple[str, int], parent_addr: UnwrappedAddress,
_runtime_vars: dict[str, Any], # serialized and sent to _child _runtime_vars: dict[str, Any], # serialized and sent to _child
*, *,
infect_asyncio: bool = False, infect_asyncio: bool = False,
@@ -719,12 +727,14 @@ async def mp_proc(
log.runtime(f"Started {proc}") log.runtime(f"Started {proc}")
ipc_server: IPCServer = actor_nursery._actor.ipc_server
try: try:
# wait for actor to spawn and connect back to us # wait for actor to spawn and connect back to us
# channel should have handshake completed by the # channel should have handshake completed by the
# local actor by the time we get a ref to it # local actor by the time we get a ref to it
event, chan = await actor_nursery._actor.wait_for_peer( event, chan = await ipc_server.wait_for_peer(
subactor.uid) subactor.uid,
)
# XXX: monkey patch poll API to match the ``subprocess`` API.. # XXX: monkey patch poll API to match the ``subprocess`` API..
# not sure why they don't expose this but kk. # not sure why they don't expose this but kk.


@@ -14,16 +14,19 @@
# You should have received a copy of the GNU Affero General Public License # You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
""" '''
Per process state Per actor-process runtime state mgmt APIs.
""" '''
from __future__ import annotations from __future__ import annotations
from contextvars import ( from contextvars import (
ContextVar, ContextVar,
) )
import os
from pathlib import Path
from typing import ( from typing import (
Any, Any,
Literal,
TYPE_CHECKING, TYPE_CHECKING,
) )
@@ -99,7 +102,7 @@ def current_actor(
return _current_actor return _current_actor
def is_main_process() -> bool: def is_root_process() -> bool:
''' '''
Bool determining if this actor is running in the top-most process. Bool determining if this actor is running in the top-most process.
@@ -108,8 +111,10 @@ def is_main_process() -> bool:
return mp.current_process().name == 'MainProcess' return mp.current_process().name == 'MainProcess'
# TODO, more verby name? is_main_process = is_root_process
def debug_mode() -> bool:
def is_debug_mode() -> bool:
''' '''
Bool determining if "debug mode" is on which enables Bool determining if "debug mode" is on which enables
remote subactor pdb entry on crashes. remote subactor pdb entry on crashes.
@@ -118,6 +123,9 @@ def debug_mode() -> bool:
return bool(_runtime_vars['_debug_mode']) return bool(_runtime_vars['_debug_mode'])
debug_mode = is_debug_mode
def is_root_process() -> bool: def is_root_process() -> bool:
return _runtime_vars['_is_root'] return _runtime_vars['_is_root']
@@ -143,3 +151,42 @@ def current_ipc_ctx(
f'|_{current_task()}\n' f'|_{current_task()}\n'
) )
return ctx return ctx
# std XDG (mutable) app state location
_rtdir: Path = Path(os.environ['XDG_RUNTIME_DIR'])
def get_rt_dir(
subdir: str = 'tractor'
) -> Path:
'''
Return the user "runtime dir" where most userspace apps stick
their IPC and cache related system util-files; we take hold
of a `'XDG_RUNTIME_DIR'/tractor/` subdir by default.
'''
rtdir: Path = _rtdir / subdir
if not rtdir.is_dir():
rtdir.mkdir()
return rtdir
# default IPC transport protocol settings
TransportProtocolKey = Literal[
'tcp',
'uds',
]
_def_tpt_proto: TransportProtocolKey = 'tcp'
def current_ipc_protos() -> list[str]:
'''
Return the list of IPC transport protocol keys currently
in use by this actor.
The keys are as declared by `MsgTransport` and `Address`
concrete-backend sub-types defined throughout `tractor.ipc`.
'''
return [_def_tpt_proto]
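A sketch of the `get_rt_dir()` helper added above; the fallback to the tmp dir when `XDG_RUNTIME_DIR` is unset is an assumption made here for portability (the diffed code indexes `os.environ` directly and would raise `KeyError`):

```python
import os
import tempfile
from pathlib import Path

def get_rt_dir(subdir: str = 'tractor') -> Path:
    # prefer the XDG user runtime dir; fall back to the system tmp dir
    base = Path(os.environ.get('XDG_RUNTIME_DIR', tempfile.gettempdir()))
    rtdir = base / subdir
    # create the subdir on first use (parents=True also covers the base)
    rtdir.mkdir(parents=True, exist_ok=True)
    return rtdir
```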


@@ -437,22 +437,23 @@ class MsgStream(trio.abc.Channel):
message: str = ( message: str = (
f'Stream self-closed by {this_side!r}-side before EoC from {peer_side!r}\n' f'Stream self-closed by {this_side!r}-side before EoC from {peer_side!r}\n'
# } bc a stream is a "scope"/msging-phase inside an IPC # } bc a stream is a "scope"/msging-phase inside an IPC
f'x}}>\n' f'c}}>\n'
f' |_{self}\n' f' |_{self}\n'
) )
log.cancel(message)
self._eoc = trio.EndOfChannel(message)
if ( if (
(rx_chan := self._rx_chan) (rx_chan := self._rx_chan)
and and
(stats := rx_chan.statistics()).tasks_waiting_receive (stats := rx_chan.statistics()).tasks_waiting_receive
): ):
log.cancel( message += (
f'Msg-stream is closing but there is still reader tasks,\n' f'AND there is still reader tasks,\n'
f'\n'
f'{stats}\n' f'{stats}\n'
) )
log.cancel(message)
self._eoc = trio.EndOfChannel(message)
# ?XXX WAIT, why do we not close the local mem chan `._rx_chan` XXX? # ?XXX WAIT, why do we not close the local mem chan `._rx_chan` XXX?
# => NO, DEFINITELY NOT! <= # => NO, DEFINITELY NOT! <=
# if we're a bi-dir `MsgStream` BECAUSE this same # if we're a bi-dir `MsgStream` BECAUSE this same
@@ -595,8 +596,17 @@ class MsgStream(trio.abc.Channel):
trio.ClosedResourceError, trio.ClosedResourceError,
trio.BrokenResourceError, trio.BrokenResourceError,
BrokenPipeError, BrokenPipeError,
) as trans_err: ) as _trans_err:
if hide_tb: trans_err = _trans_err
if (
hide_tb
and
self._ctx.chan._exc is trans_err
# ^XXX, IOW, only if the channel is marked errored
# for the same reason as whatever its underlying
# transport raised, do we keep the full low-level tb
# suppressed from the user.
):
raise type(trans_err)( raise type(trans_err)(
*trans_err.args *trans_err.args
) from trans_err ) from trans_err
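The `raise type(trans_err)(*trans_err.args) from trans_err` idiom re-creates the exception so the user sees a short, single-frame traceback while the original is still chained on `__cause__`; in isolation (hypothetical helper name):

```python
def reraise_shallow(exc: BaseException) -> None:
    # raise a fresh instance of the same type; `from exc` records the
    # original on __cause__ without replaying its full traceback
    raise type(exc)(*exc.args) from exc
```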
@@ -802,13 +812,12 @@ async def open_stream_from_ctx(
# sanity, can remove? # sanity, can remove?
assert eoc is stream._eoc assert eoc is stream._eoc
log.warning( log.runtime(
'Stream was terminated by EoC\n\n' 'Stream was terminated by EoC\n\n'
# NOTE: won't show the error <Type> but # NOTE: won't show the error <Type> but
# does show txt followed by IPC msg. # does show txt followed by IPC msg.
f'{str(eoc)}\n' f'{str(eoc)}\n'
) )
finally: finally:
if ctx._portal: if ctx._portal:
try: try:


@@ -22,13 +22,20 @@ from contextlib import asynccontextmanager as acm
from functools import partial from functools import partial
import inspect import inspect
from pprint import pformat from pprint import pformat
from typing import TYPE_CHECKING from typing import (
TYPE_CHECKING,
)
import typing import typing
import warnings import warnings
import trio import trio
from .devx._debug import maybe_wait_for_debugger from .devx._debug import maybe_wait_for_debugger
from ._addr import (
UnwrappedAddress,
mk_uuid,
)
from ._state import current_actor, is_main_process from ._state import current_actor, is_main_process
from .log import get_logger, get_loglevel from .log import get_logger, get_loglevel
from ._runtime import Actor from ._runtime import Actor
@@ -37,18 +44,21 @@ from ._exceptions import (
is_multi_cancelled, is_multi_cancelled,
ContextCancelled, ContextCancelled,
) )
from ._root import open_root_actor from ._root import (
open_root_actor,
)
from . import _state from . import _state
from . import _spawn from . import _spawn
if TYPE_CHECKING: if TYPE_CHECKING:
import multiprocessing as mp import multiprocessing as mp
# from .ipc._server import IPCServer
from .ipc import IPCServer
log = get_logger(__name__) log = get_logger(__name__)
_default_bind_addr: tuple[str, int] = ('127.0.0.1', 0)
class ActorNursery: class ActorNursery:
''' '''
@@ -130,8 +140,9 @@ class ActorNursery:
*, *,
bind_addrs: list[tuple[str, int]] = [_default_bind_addr], bind_addrs: list[UnwrappedAddress]|None = None,
rpc_module_paths: list[str]|None = None, rpc_module_paths: list[str]|None = None,
enable_transports: list[str] = [_state._def_tpt_proto],
enable_modules: list[str]|None = None, enable_modules: list[str]|None = None,
loglevel: str|None = None, # set log level per subactor loglevel: str|None = None, # set log level per subactor
debug_mode: bool|None = None, debug_mode: bool|None = None,
@@ -178,7 +189,9 @@ class ActorNursery:
enable_modules.extend(rpc_module_paths) enable_modules.extend(rpc_module_paths)
subactor = Actor( subactor = Actor(
name, name=name,
uuid=mk_uuid(),
# modules allowed to invoked funcs from # modules allowed to invoked funcs from
enable_modules=enable_modules, enable_modules=enable_modules,
loglevel=loglevel, loglevel=loglevel,
@@ -186,7 +199,7 @@ class ActorNursery:
# verbatim relay this actor's registrar addresses # verbatim relay this actor's registrar addresses
registry_addrs=current_actor().reg_addrs, registry_addrs=current_actor().reg_addrs,
) )
parent_addr = self._actor.accept_addr parent_addr: UnwrappedAddress = self._actor.accept_addr
assert parent_addr assert parent_addr
# start a task to spawn a process # start a task to spawn a process
@@ -224,7 +237,7 @@ class ActorNursery:
*, *,
name: str | None = None, name: str | None = None,
bind_addrs: tuple[str, int] = [_default_bind_addr], bind_addrs: UnwrappedAddress|None = None,
rpc_module_paths: list[str] | None = None, rpc_module_paths: list[str] | None = None,
enable_modules: list[str] | None = None, enable_modules: list[str] | None = None,
loglevel: str | None = None, # set log level per subactor loglevel: str | None = None, # set log level per subactor
@@ -305,8 +318,13 @@ class ActorNursery:
children: dict = self._children children: dict = self._children
child_count: int = len(children) child_count: int = len(children)
msg: str = f'Cancelling actor nursery with {child_count} children\n' msg: str = f'Cancelling actor nursery with {child_count} children\n'
server: IPCServer = self._actor.ipc_server
with trio.move_on_after(3) as cs: with trio.move_on_after(3) as cs:
async with trio.open_nursery() as tn: async with trio.open_nursery(
strict_exception_groups=False,
) as tn:
subactor: Actor subactor: Actor
proc: trio.Process proc: trio.Process
@@ -325,7 +343,7 @@ class ActorNursery:
else: else:
if portal is None: # actor hasn't fully spawned yet if portal is None: # actor hasn't fully spawned yet
event = self._actor._peer_connected[subactor.uid] event: trio.Event = server._peer_connected[subactor.uid]
log.warning( log.warning(
f"{subactor.uid} never 't finished spawning?" f"{subactor.uid} never 't finished spawning?"
) )
@@ -341,7 +359,7 @@ class ActorNursery:
if portal is None: if portal is None:
# cancelled while waiting on the event # cancelled while waiting on the event
# to arrive # to arrive
chan = self._actor._peers[subactor.uid][-1] chan = server._peers[subactor.uid][-1]
if chan: if chan:
portal = Portal(chan) portal = Portal(chan)
else: # there's no other choice left else: # there's no other choice left


@@ -1,35 +1,99 @@
import os import hashlib
import random import numpy as np
def generate_sample_messages( def generate_single_byte_msgs(amount: int) -> bytes:
'''
Generate a byte instance of length `amount` with repeating ASCII digits 0..9.
'''
# array [0, 1, 2, ..., amount-1], take mod 10 => [0..9], and map 0->'0'(48)
# up to 9->'9'(57).
arr = np.arange(amount, dtype=np.uint8) % 10
# move into ascii space
arr += 48
return arr.tobytes()
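The numpy construction on the right-hand side is equivalent to this dependency-free sketch:

```python
def generate_single_byte_msgs(amount: int) -> bytes:
    # repeating ASCII digits: 0 -> '0' (48) .. 9 -> '9' (57)
    return bytes(48 + (i % 10) for i in range(amount))
```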
class RandomBytesGenerator:
'''
Generate bytes msgs for tests.
messages will have the following format:
b'[{i:08}]' + random_bytes
so for message index 25:
b'[00000025]' + random_bytes
also generates sha256 hash of msgs.
'''
def __init__(
self,
amount: int, amount: int,
rand_min: int = 0, rand_min: int = 0,
rand_max: int = 0, rand_max: int = 0
silent: bool = False ):
) -> tuple[list[bytes], int]: if rand_max < rand_min:
raise ValueError('rand_max must be >= rand_min')
msgs = [] self._amount = amount
size = 0 self._rand_min = rand_min
self._rand_max = rand_max
self._index = 0
self._hasher = hashlib.sha256()
self._total_bytes = 0
if not silent: self._lengths = np.random.randint(
print(f'\ngenerating {amount} messages...') rand_min,
rand_max + 1,
size=amount,
dtype=np.int32
)
for i in range(amount): def __iter__(self):
msg = f'[{i:08}]'.encode('utf-8') return self
if rand_max > 0: def __next__(self) -> bytes:
msg += os.urandom( if self._index == self._amount:
random.randint(rand_min, rand_max)) raise StopIteration
size += len(msg) header = f'[{self._index:08}]'.encode('utf-8')
msgs.append(msg) length = int(self._lengths[self._index])
msg = header + np.random.bytes(length)
if not silent and i and i % 10_000 == 0: self._hasher.update(msg)
print(f'{i} generated') self._total_bytes += length
self._index += 1
if not silent: return msg
print(f'done, {size:,} bytes in total')
return msgs, size @property
def hexdigest(self) -> str:
return self._hasher.hexdigest()
@property
def total_bytes(self) -> int:
return self._total_bytes
@property
def total_msgs(self) -> int:
return self._amount
@property
def msgs_generated(self) -> int:
return self._index
@property
def recommended_log_interval(self) -> int:
max_msg_size = 10 + self._rand_max
if max_msg_size <= 32 * 1024:
return 10_000
else:
return 1000
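Pieced together from the right-hand column, the new generator's iterator protocol works roughly like this stdlib-only sketch (using `os.urandom` in place of `np.random`; a minimal subset of the properties is shown):

```python
import hashlib
import os

class RandomBytesGenerator:
    '''
    Yield `b'[{index:08}]' + <random payload>` msgs while keeping
    a running sha256 of everything generated.
    '''
    def __init__(self, amount: int, rand_min: int = 0, rand_max: int = 0):
        if rand_max < rand_min:
            raise ValueError('rand_max must be >= rand_min')
        self._amount = amount
        self._rand_min = rand_min
        self._rand_max = rand_max
        self._index = 0
        self._hasher = hashlib.sha256()
        self._total_bytes = 0

    def __iter__(self):
        return self

    def __next__(self) -> bytes:
        if self._index == self._amount:
            raise StopIteration
        header = f'[{self._index:08}]'.encode()
        # pick a payload length in [rand_min, rand_max]
        span = self._rand_max - self._rand_min + 1
        length = self._rand_min + (
            int.from_bytes(os.urandom(4), 'little') % span
        )
        msg = header + os.urandom(length)
        self._hasher.update(msg)
        # NOTE: mirrors the diff, only the random payload is counted
        self._total_bytes += length
        self._index += 1
        return msg

    @property
    def hexdigest(self) -> str:
        return self._hasher.hexdigest()
```

With the defaults (`rand_max=0`) every msg is just its 10-byte header, which makes the sequence fully deterministic.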


@@ -73,6 +73,7 @@ from tractor.log import get_logger
from tractor._context import Context from tractor._context import Context
from tractor import _state from tractor import _state
from tractor._exceptions import ( from tractor._exceptions import (
DebugRequestError,
InternalError, InternalError,
NoRuntime, NoRuntime,
is_multi_cancelled, is_multi_cancelled,
@@ -91,7 +92,11 @@ from tractor._state import (
if TYPE_CHECKING: if TYPE_CHECKING:
from trio.lowlevel import Task from trio.lowlevel import Task
from threading import Thread from threading import Thread
from tractor.ipc import Channel from tractor.ipc import (
Channel,
IPCServer,
# _server, # TODO? export at top level?
)
from tractor._runtime import ( from tractor._runtime import (
Actor, Actor,
) )
@@ -1433,6 +1438,7 @@ def any_connected_locker_child() -> bool:
''' '''
actor: Actor = current_actor() actor: Actor = current_actor()
server: IPCServer = actor.ipc_server
if not is_root_process(): if not is_root_process():
raise InternalError('This is a root-actor only API!') raise InternalError('This is a root-actor only API!')
@@ -1442,7 +1448,7 @@ def any_connected_locker_child() -> bool:
and and
(uid_in_debug := ctx.chan.uid) (uid_in_debug := ctx.chan.uid)
): ):
chans: list[tractor.Channel] = actor._peers.get( chans: list[tractor.Channel] = server._peers.get(
tuple(uid_in_debug) tuple(uid_in_debug)
) )
if chans: if chans:
@@ -1740,13 +1746,6 @@ def sigint_shield(
_pause_msg: str = 'Opening a pdb REPL in paused actor' _pause_msg: str = 'Opening a pdb REPL in paused actor'
class DebugRequestError(RuntimeError):
'''
Failed to request stdio lock from root actor!
'''
_repl_fail_msg: str|None = ( _repl_fail_msg: str|None = (
'Failed to REPl via `_pause()` ' 'Failed to REPl via `_pause()` '
) )
@@ -3009,6 +3008,7 @@ async def _maybe_enter_pm(
[BaseException|BaseExceptionGroup], [BaseException|BaseExceptionGroup],
bool, bool,
] = lambda err: not is_multi_cancelled(err), ] = lambda err: not is_multi_cancelled(err),
**_pause_kws,
): ):
if ( if (
@@ -3035,6 +3035,7 @@ async def _maybe_enter_pm(
await post_mortem( await post_mortem(
api_frame=api_frame, api_frame=api_frame,
tb=tb, tb=tb,
**_pause_kws,
) )
return True return True


@@ -19,6 +19,7 @@ Pretty formatters for use throughout the code base.
Mostly handy for logging and exception message content. Mostly handy for logging and exception message content.
''' '''
import sys
import textwrap import textwrap
import traceback import traceback
@@ -115,6 +116,85 @@ def pformat_boxed_tb(
) )
def pformat_exc(
exc: Exception,
header: str = '',
message: str = '',
body: str = '',
with_type_header: bool = True,
) -> str:
# XXX when the currently raised exception is this instance,
# we do not ever use the "type header" style repr.
is_being_raised: bool = False
if (
(curr_exc := sys.exception())
and
curr_exc is exc
):
is_being_raised: bool = True
with_type_header: bool = (
with_type_header
and
not is_being_raised
)
# <RemoteActorError( .. )> style
if (
with_type_header
and
not header
):
header: str = f'<{type(exc).__name__}('
message: str = (
message
or
exc.message
)
if message:
# split off the first line so, if needed, it isn't
# indented the same like the "boxed content" which
# since there is no `.tb_str` is just the `.message`.
lines: list[str] = message.splitlines()
first: str = lines[0]
message: str = message.removeprefix(first)
# with a type-style header we,
# - have no special message "first line" extraction/handling
# - place the message a space in from the header:
# `MsgTypeError( <message> ..`
# ^-here
# - indent the `.message` inside the type body.
if with_type_header:
first = f' {first} )>'
message: str = textwrap.indent(
message,
prefix=' '*2,
)
message: str = first + message
tail: str = ''
if (
with_type_header
and
not message
):
tail: str = '>'
return (
header
+
message
+
f'{body}'
+
tail
)
def pformat_caller_frame(
stack_limit: int = 1,
box_tb: bool = True,


@@ -45,6 +45,8 @@ __all__ = ['pub']
log = get_logger('messaging')
# TODO! this needs to be reworked to use the modern
# `Context`/`MsgStream` APIs!!
async def fan_out_to_ctxs(
pub_async_gen_func: typing.Callable,  # it's an async gen ... gd mypy
topics2ctxs: dict[str, list],


@@ -13,38 +13,11 @@
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
A modular IPC layer supporting the power of cross-process SC!
'''
import platform
from ._transport import MsgTransport as MsgTransport
from ._tcp import (
get_stream_addrs as get_stream_addrs,
MsgpackTCPStream as MsgpackTCPStream
)
from ._chan import (
_connect_chan as _connect_chan,
get_msg_transport as get_msg_transport,
Channel as Channel
)
if platform.system() == 'Linux':
from ._linux import (
EFD_SEMAPHORE as EFD_SEMAPHORE,
EFD_CLOEXEC as EFD_CLOEXEC,
EFD_NONBLOCK as EFD_NONBLOCK,
open_eventfd as open_eventfd,
write_eventfd as write_eventfd,
read_eventfd as read_eventfd,
close_eventfd as close_eventfd,
EventFD as EventFD,
)
from ._ringbuf import (
RBToken as RBToken,
RingBuffSender as RingBuffSender,
RingBuffReceiver as RingBuffReceiver,
open_ringbuf as open_ringbuf
)


@@ -29,22 +29,35 @@ from pprint import pformat
import typing
from typing import (
Any,
Type
TYPE_CHECKING,
)
import warnings
import trio
from tractor.ipc._transport import MsgTransport
from ._types import (
from tractor.ipc._tcp import (
transport_from_addr,
MsgpackTCPStream,
transport_from_stream,
get_stream_addrs
)
from tractor._addr import (
is_wrapped_addr,
wrap_address,
Address,
UnwrappedAddress,
)
from tractor.log import get_logger
from tractor._exceptions import (
MsgTypeError,
pack_from_raise,
TransportClosed,
)
from tractor.msg import MsgCodec
from tractor.msg import (
Aid,
MsgCodec,
)
if TYPE_CHECKING:
from ._transport import MsgTransport
log = get_logger(__name__)
@@ -52,17 +65,6 @@ log = get_logger(__name__)
_is_windows = platform.system() == 'Windows'
def get_msg_transport(
key: tuple[str, str],
) -> Type[MsgTransport]:
return {
('msgpack', 'tcp'): MsgpackTCPStream,
}[key]
class Channel:
'''
An inter-process channel for communication between (remote) actors.
@@ -77,10 +79,7 @@ class Channel:
def __init__(
self,
destaddr: tuple[str, int]|None,
transport: MsgTransport|None = None,
msg_transport_type_key: tuple[str, str] = ('msgpack', 'tcp'),
# TODO: optional reconnection support?
# auto_reconnect: bool = False,
# on_reconnect: typing.Callable[..., typing.Awaitable] = None,
@@ -90,19 +89,16 @@ class Channel:
# self._recon_seq = on_reconnect
# self._autorecon = auto_reconnect
self._destaddr = destaddr
self._transport_key = msg_transport_type_key
# Either created in ``.connect()`` or passed in by
# user in ``.from_stream()``.
self._stream: trio.SocketStream|None = None
self._transport: MsgTransport|None = transport
self._transport: MsgTransport|None = None
# set after handshake - always uid of far end
# set after handshake - always info from peer end
self.uid: tuple[str, str]|None = None
self.aid: Aid|None = None
self._aiter_msgs = self._iter_msgs()
self._exc: Exception|None = None  # set if far end actor errors
self._exc: Exception|None = None
# ^XXX! ONLY set if a remote actor sends an `Error`-msg
self._closed: bool = False
# flag set by ``Portal.cancel_actor()`` indicating remote
@@ -110,6 +106,33 @@ class Channel:
# runtime.
self._cancel_called: bool = False
@property
def uid(self) -> tuple[str, str]:
'''
Peer actor's unique id.
'''
msg: str = (
f'`{type(self).__name__}.uid` is now deprecated.\n'
'Use the new `.aid: tractor.msg.Aid` (struct) instead '
'which also provides additional named (optional) fields '
'beyond just the `.name` and `.uuid`.'
)
warnings.warn(
msg,
DeprecationWarning,
stacklevel=2,
)
peer_aid: Aid = self.aid
return (
peer_aid.name,
peer_aid.uuid,
)
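The `stacklevel=2` in the deprecation shim above makes the warning point at the caller's frame rather than the property body. A minimal standalone sketch (the `Chan` class here is hypothetical, not the real `Channel`):

```python
import warnings

class Chan:
    def __init__(self, name: str, uuid: str):
        # stand-in for the new `Aid` struct's (.name, .uuid) fields
        self.aid = (name, uuid)

    @property
    def uid(self) -> tuple[str, str]:
        # stacklevel=2 attributes the warning to the calling line,
        # not to this property's body
        warnings.warn(
            '`.uid` is deprecated, use `.aid` instead',
            DeprecationWarning,
            stacklevel=2,
        )
        return self.aid

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    assert Chan('debugger', '1234').uid == ('debugger', '1234')

assert caught[0].category is DeprecationWarning
```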
@property
def stream(self) -> trio.abc.Stream | None:
return self._transport.stream if self._transport else None
@property
def msgstream(self) -> MsgTransport:
log.info(
@@ -124,52 +147,41 @@ class Channel:
@classmethod
def from_stream(
cls,
stream: trio.SocketStream,
stream: trio.abc.Stream,
**kwargs,
) -> Channel:
transport_cls = transport_from_stream(stream)
return Channel(
transport=transport_cls(stream)
)
@classmethod
async def from_addr(
cls,
addr: UnwrappedAddress,
**kwargs
) -> Channel:
src, dst = get_stream_addrs(stream)
if not is_wrapped_addr(addr):
chan = Channel(
addr: Address = wrap_address(addr)
destaddr=dst,
transport_cls = transport_from_addr(addr)
transport = await transport_cls.connect_to(
addr,
**kwargs,
)
assert transport.raddr == addr
# set immediately here from provided instance
chan = Channel(transport=transport)
chan._stream: trio.SocketStream = stream
log.runtime(
chan.set_msg_transport(stream)
f'Connected channel IPC transport\n'
f'[>\n'
f' |_{chan}\n'
)
return chan
def set_msg_transport(
self,
stream: trio.SocketStream,
type_key: tuple[str, str]|None = None,
# XXX optionally provided codec pair for `msgspec`:
# https://jcristharif.com/msgspec/extending.html#mapping-to-from-native-types
codec: MsgCodec|None = None,
) -> MsgTransport:
type_key = (
type_key
or
self._transport_key
)
# get transport type, then
self._transport = get_msg_transport(
type_key
# instantiate an instance of the msg-transport
)(
stream,
codec=codec,
)
return self._transport
@cm
def apply_codec(
self,
codec: MsgCodec,
) -> None:
'''
Temporarily override the underlying IPC msg codec for
@@ -184,49 +196,57 @@ class Channel:
self._transport.codec = orig
# TODO: do a .src/.dst: str for maddrs?
def __repr__(self) -> str:
def pformat(self) -> str:
if not self._transport:
return '<Channel with inactive transport?>'
return repr(
tpt: MsgTransport = self._transport
self._transport.stream.socket._sock
tpt_name: str = type(tpt).__name__
).replace(  # type: ignore
tpt_status: str = (
"socket.socket",
'connected' if self.connected()
"Channel",
else 'closed'
)
return (
f'<Channel(\n'
f' |_status: {tpt_status!r}\n'
f' _closed={self._closed}\n'
f' _cancel_called={self._cancel_called}\n'
f'\n'
f' |_peer: {self.aid}\n'
f'\n'
f' |_msgstream: {tpt_name}\n'
f' proto={tpt.laddr.proto_key!r}\n'
f' layer={tpt.layer_key!r}\n'
f' laddr={tpt.laddr}\n'
f' raddr={tpt.raddr}\n'
f' codec={tpt.codec_key!r}\n'
f' stream={tpt.stream}\n'
f' maddr={tpt.maddr!r}\n'
f' drained={tpt.drained}\n'
f' _send_lock={tpt._send_lock.statistics()}\n'
f')>\n'
)
# NOTE: making this return a value that can be passed to
# `eval()` is entirely **optional** FYI!
# https://docs.python.org/3/library/functions.html#repr
# https://docs.python.org/3/reference/datamodel.html#object.__repr__
#
# Currently we target **readability** from a (console)
# logging perspective over `eval()`-ability since we do NOT
# target serializing non-struct instances!
# def __repr__(self) -> str:
__str__ = pformat
__repr__ = pformat
@property
def laddr(self) -> tuple[str, int]|None:
def laddr(self) -> Address|None:
return self._transport.laddr if self._transport else None
@property
def raddr(self) -> tuple[str, int]|None:
def raddr(self) -> Address|None:
return self._transport.raddr if self._transport else None
async def connect(
self,
destaddr: tuple[Any, ...] | None = None,
**kwargs
) -> MsgTransport:
if self.connected():
raise RuntimeError("channel is already connected?")
destaddr = destaddr or self._destaddr
assert isinstance(destaddr, tuple)
stream = await trio.open_tcp_stream(
*destaddr,
**kwargs
)
transport = self.set_msg_transport(stream)
log.transport(
f'Opened channel[{type(transport)}]: {self.laddr} -> {self.raddr}'
)
return transport
# TODO: something like,
# `pdbp.hideframe_on(errors=[MsgTypeError])`
# instead of the `try/except` hack we have rn..
@@ -237,7 +257,7 @@ class Channel:
self,
payload: Any,
hide_tb: bool = False,
hide_tb: bool = True,
) -> None:
'''
@@ -255,14 +275,27 @@ class Channel:
payload,
hide_tb=hide_tb,
)
except BaseException as _err:
except (
BaseException,
MsgTypeError,
TransportClosed,
) as _err:
err = _err  # bind for introspection
if not isinstance(_err, MsgTypeError):
match err:
# assert err
case MsgTypeError():
__tracebackhide__: bool = False
try:
else:
assert err.cid
except KeyError:
raise err
case TransportClosed():
log.transport(
f'Transport stream closed due to\n'
f'{err.repr_src_exc()}\n'
)
case _:
# never suppress non-tpt sources
__tracebackhide__: bool = False
raise
async def recv(self) -> Any:
@@ -285,7 +318,7 @@ class Channel:
async def aclose(self) -> None:
log.transport(
f'Closing channel to {self.uid} '
f'Closing channel to {self.aid} '
f'{self.laddr} -> {self.raddr}'
)
assert self._transport
@@ -385,20 +418,40 @@ class Channel:
def connected(self) -> bool:
return self._transport.connected() if self._transport else False
async def _do_handshake(
self,
aid: Aid,
) -> Aid:
'''
Exchange `(name, UUIDs)` identifiers as the first
communication step with any (peer) remote `Actor`.
These are essentially the "mailbox addresses" found in
"actor model" parlance.
'''
await self.send(aid)
peer_aid: Aid = await self.recv()
log.runtime(
f'Received handshake with peer actor,\n'
f'{peer_aid}\n'
)
# NOTE, we always are referencing the remote peer!
self.aid = peer_aid
return peer_aid
@acm
async def _connect_chan(
host: str,
addr: UnwrappedAddress
port: int
) -> typing.AsyncGenerator[Channel, None]:
'''
Create and connect a channel with disconnect on context manager
teardown.
'''
chan = Channel((host, port))
chan = await Channel.from_addr(addr)
await chan.connect()
yield chan
with trio.CancelScope(shield=True):
await chan.aclose()


@@ -0,0 +1,163 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
File-descriptor-sharing on `linux` by "wilhelm_of_bohemia".
'''
from __future__ import annotations
import os
import array
import socket
import tempfile
from pathlib import Path
from contextlib import ExitStack
import trio
import tractor
from tractor.ipc import RBToken
actor_name = 'ringd'
_rings: dict[str, dict] = {}
async def _attach_to_ring(
ring_name: str
) -> tuple[int, int, int]:
actor = tractor.current_actor()
fd_amount = 3
sock_path = (
Path(tempfile.gettempdir())
/
f'{os.getpid()}-pass-ring-fds-{ring_name}-to-{actor.name}.sock'
)
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.bind(sock_path)
sock.listen(1)
async with (
tractor.find_actor(actor_name) as ringd,
ringd.open_context(
_pass_fds,
name=ring_name,
sock_path=sock_path
) as (ctx, _sent)
):
# prepare array to receive FD
fds = array.array("i", [0] * fd_amount)
conn, _ = sock.accept()
# receive FD
msg, ancdata, flags, addr = conn.recvmsg(
1024,
socket.CMSG_LEN(fds.itemsize * fd_amount)
)
for (
cmsg_level,
cmsg_type,
cmsg_data,
) in ancdata:
if (
cmsg_level == socket.SOL_SOCKET
and
cmsg_type == socket.SCM_RIGHTS
):
fds.frombytes(cmsg_data[:fds.itemsize * fd_amount])
break
else:
raise RuntimeError("Receiver: No FDs received")
conn.close()
sock.close()
sock_path.unlink()
return RBToken.from_msg(
await ctx.wait_for_result()
)
@tractor.context
async def _pass_fds(
ctx: tractor.Context,
name: str,
sock_path: str
) -> RBToken:
global _rings
token = _rings[name]
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(sock_path)
await ctx.started()
fds = array.array('i', token.fds)
client.sendmsg([b'FDs'], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])
client.close()
return token
@tractor.context
async def _open_ringbuf(
ctx: tractor.Context,
name: str,
buf_size: int
) -> RBToken:
global _rings
is_owner = False
if name not in _rings:
stack = ExitStack()
token = stack.enter_context(
tractor.open_ringbuf(
name,
buf_size=buf_size
)
)
_rings[name] = {
'token': token,
'stack': stack,
}
is_owner = True
ring = _rings[name]
await ctx.started()
try:
await trio.sleep_forever()
except tractor.ContextCancelled:
...
finally:
if is_owner:
ring['stack'].close()
async def open_ringbuf(
name: str,
buf_size: int
) -> RBToken:
async with (
tractor.find_actor(actor_name) as ringd,
ringd.open_context(
_open_ringbuf,
name=name,
buf_size=buf_size
) as (rd_ctx, _)
):
yield await _attach_to_ring(name)
await rd_ctx.cancel()
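The SCM_RIGHTS dance above (ancillary-data `sendmsg`/`recvmsg` over a unix socket) can be exercised with the stdlib convenience wrappers over a plain `socketpair()`. A Linux/Unix-only sketch; `pass_fds` is a name invented here for illustration:

```python
import os
import socket

def pass_fds(fds: list[int]) -> list[int]:
    '''
    Send `fds` across a unix-domain socketpair via SCM_RIGHTS and
    return the duplicated descriptors received on the other end.
    '''
    a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    with a, b:
        # stdlib wrappers (3.9+) around sendmsg/recvmsg + SCM_RIGHTS
        socket.send_fds(a, [b'fds'], fds)
        _msg, received, _flags, _addr = socket.recv_fds(b, 1024, len(fds))
    return list(received)

# the received fd is a dup of the original: writes through the old
# write-end are readable via the passed read-end
r, w = os.pipe()
(dup_r,) = pass_fds([r])
os.write(w, b'hi')
assert os.read(dup_r, 2) == b'hi'
```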


@@ -1,253 +0,0 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
IPC Reliable RingBuffer implementation
'''
from __future__ import annotations
import errno
from contextlib import contextmanager as cm
from multiprocessing.shared_memory import SharedMemory
import trio
from msgspec import (
Struct,
to_builtins
)
from ._linux import (
EFD_NONBLOCK,
open_eventfd,
EventFD
)
from ._mp_bs import disable_mantracker
disable_mantracker()
class RBToken(Struct, frozen=True):
'''
RingBuffer token contains the necessary info to open the two
eventfds and the shared memory
'''
shm_name: str
write_eventfd: int
wrap_eventfd: int
buf_size: int
def as_msg(self):
return to_builtins(self)
@classmethod
def from_msg(cls, msg: dict) -> RBToken:
if isinstance(msg, RBToken):
return msg
return RBToken(**msg)
@cm
def open_ringbuf(
shm_name: str,
buf_size: int = 10 * 1024,
write_efd_flags: int = 0,
wrap_efd_flags: int = 0
) -> RBToken:
shm = SharedMemory(
name=shm_name,
size=buf_size,
create=True
)
try:
token = RBToken(
shm_name=shm_name,
write_eventfd=open_eventfd(flags=write_efd_flags),
wrap_eventfd=open_eventfd(flags=wrap_efd_flags),
buf_size=buf_size
)
yield token
finally:
shm.unlink()
class RingBuffSender(trio.abc.SendStream):
'''
IPC Reliable Ring Buffer sender side implementation
`eventfd(2)` is used for wrap around sync, and also to signal
writes to the reader.
'''
def __init__(
self,
token: RBToken,
start_ptr: int = 0,
):
token = RBToken.from_msg(token)
self._shm = SharedMemory(
name=token.shm_name,
size=token.buf_size,
create=False
)
self._write_event = EventFD(token.write_eventfd, 'w')
self._wrap_event = EventFD(token.wrap_eventfd, 'r')
self._ptr = start_ptr
@property
def key(self) -> str:
return self._shm.name
@property
def size(self) -> int:
return self._shm.size
@property
def ptr(self) -> int:
return self._ptr
@property
def write_fd(self) -> int:
return self._write_event.fd
@property
def wrap_fd(self) -> int:
return self._wrap_event.fd
async def send_all(self, data: bytes | bytearray | memoryview):
# while data is larger than the remaining buf
target_ptr = self.ptr + len(data)
while target_ptr > self.size:
# write all bytes that fit
remaining = self.size - self.ptr
self._shm.buf[self.ptr:] = data[:remaining]
# signal write and wait for reader wrap around
self._write_event.write(remaining)
await self._wrap_event.read()
# wrap around and trim already written bytes
self._ptr = 0
data = data[remaining:]
target_ptr = self._ptr + len(data)
# remaining data fits on buffer
self._shm.buf[self.ptr:target_ptr] = data
self._write_event.write(len(data))
self._ptr = target_ptr
async def wait_send_all_might_not_block(self):
raise NotImplementedError
async def aclose(self):
self._write_event.close()
self._wrap_event.close()
self._shm.close()
async def __aenter__(self):
self._write_event.open()
self._wrap_event.open()
return self
class RingBuffReceiver(trio.abc.ReceiveStream):
'''
IPC Reliable Ring Buffer receiver side implementation
`eventfd(2)` is used for wrap around sync, and also to signal
writes to the reader.
'''
def __init__(
self,
token: RBToken,
start_ptr: int = 0,
flags: int = 0
):
token = RBToken.from_msg(token)
self._shm = SharedMemory(
name=token.shm_name,
size=token.buf_size,
create=False
)
self._write_event = EventFD(token.write_eventfd, 'w')
self._wrap_event = EventFD(token.wrap_eventfd, 'r')
self._ptr = start_ptr
self._flags = flags
@property
def key(self) -> str:
return self._shm.name
@property
def size(self) -> int:
return self._shm.size
@property
def ptr(self) -> int:
return self._ptr
@property
def write_fd(self) -> int:
return self._write_event.fd
@property
def wrap_fd(self) -> int:
return self._wrap_event.fd
async def receive_some(
self,
max_bytes: int | None = None,
nb_timeout: float = 0.1
) -> memoryview:
# if non blocking eventfd enabled, do polling
# until next write, this allows signal handling
if self._flags & EFD_NONBLOCK:  # bitwise flag test, `|` is always truthy
delta = None
while delta is None:
try:
delta = await self._write_event.read()
except OSError as e:
if e.errno == errno.EAGAIN:  # numeric errno, not the string name
continue
raise e
else:
delta = await self._write_event.read()
# fetch next segment and advance ptr
next_ptr = self._ptr + delta
segment = self._shm.buf[self._ptr:next_ptr]
self._ptr = next_ptr
if self.ptr == self.size:
# reached the end, signal wrap around
self._ptr = 0
self._wrap_event.write(1)
return segment
async def aclose(self):
self._write_event.close()
self._wrap_event.close()
self._shm.close()
async def __aenter__(self):
self._write_event.open()
self._wrap_event.open()
return self
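The wrap-around split performed by `send_all` above can be isolated as pure pointer arithmetic, without shared memory or eventfds (`chunk_writes` is a name invented for illustration):

```python
def chunk_writes(
    ptr: int,
    size: int,
    data: bytes,
) -> tuple[list[tuple[int, bytes]], int]:
    '''
    Return the (offset, chunk) writes that a sender like
    `RingBuffSender.send_all` would perform for `data` starting at
    `ptr` in a `size`-byte ring, plus the final write pointer.
    '''
    writes: list[tuple[int, bytes]] = []
    # while data is larger than the remaining buf, split at the end
    while ptr + len(data) > size:
        remaining = size - ptr
        writes.append((ptr, data[:remaining]))
        data = data[remaining:]
        ptr = 0  # wrap around (the real impl waits on the wrap eventfd here)
    writes.append((ptr, data))
    return writes, ptr + len(data)

# 6 bytes starting at offset 8 of a 10-byte ring split into 2 + 4
writes, end = chunk_writes(8, 10, b'abcdef')
assert writes == [(8, b'ab'), (0, b'cdef')]
assert end == 4
```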

File diff suppressed because it is too large


@@ -0,0 +1,834 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Ring buffer ipc publish-subscribe mechanism brokered by ringd
can dynamically add new outputs (publisher) or inputs (subscriber)
'''
from typing import (
TypeVar,
Generic,
Callable,
Awaitable,
AsyncContextManager
)
from functools import partial
from contextlib import asynccontextmanager as acm
from dataclasses import dataclass
import trio
import tractor
from msgspec.msgpack import (
Encoder,
Decoder
)
from tractor.ipc._ringbuf import (
RBToken,
PayloadT,
RingBufferSendChannel,
RingBufferReceiveChannel,
attach_to_ringbuf_sender,
attach_to_ringbuf_receiver
)
from tractor.trionics import (
order_send_channel,
order_receive_channel
)
import tractor.linux._fdshare as fdshare
log = tractor.log.get_logger(__name__)
ChannelType = TypeVar('ChannelType')
@dataclass
class ChannelInfo:
token: RBToken
channel: ChannelType
cancel_scope: trio.CancelScope
teardown: trio.Event
class ChannelManager(Generic[ChannelType]):
'''
Helper for managing channel resources and their handler tasks with
cancellation, add or remove channels dynamically!
'''
def __init__(
self,
# nursery used to spawn channel handler tasks
n: trio.Nursery,
# acm will be used for setup & teardown of channel resources
open_channel_acm: Callable[..., AsyncContextManager[ChannelType]],
# long running bg task to handle channel
channel_task: Callable[..., Awaitable[None]]
):
self._n = n
self._open_channel = open_channel_acm
self._channel_task = channel_task
# signal when a new channel connects and we previously had none
self._connect_event = trio.Event()
# store channel runtime variables
self._channels: list[ChannelInfo] = []
self._is_closed: bool = True
@property
def closed(self) -> bool:
return self._is_closed
@property
def channels(self) -> list[ChannelInfo]:
return self._channels
async def _channel_handler_task(
self,
token: RBToken,
task_status=trio.TASK_STATUS_IGNORED,
**kwargs
):
'''
Open channel resources, add to internal data structures, signal channel
connect through trio.Event, and run `channel_task` with cancel scope,
and finally, maybe remove channel from internal data structures.
Spawned by `add_channel` function, lock is held from beginning of fn
until `task_status.started()` call.
kwargs are proxied to `self._open_channel` acm.
'''
async with self._open_channel(
token,
**kwargs
) as chan:
cancel_scope = trio.CancelScope()
info = ChannelInfo(
token=token,
channel=chan,
cancel_scope=cancel_scope,
teardown=trio.Event()
)
self._channels.append(info)
if len(self) == 1:
self._connect_event.set()
task_status.started()
with cancel_scope:
await self._channel_task(info)
self._maybe_destroy_channel(token.shm_name)
def _find_channel(self, name: str) -> tuple[int, ChannelInfo] | None:
'''
Given a channel name maybe return its index and value from
internal _channels list.
Only use after acquiring lock.
'''
for entry in enumerate(self._channels):
i, info = entry
if info.token.shm_name == name:
return entry
return None
def _maybe_destroy_channel(self, name: str):
'''
If channel exists cancel its scope and remove from internal
_channels list.
'''
maybe_entry = self._find_channel(name)
if maybe_entry:
i, info = maybe_entry
info.cancel_scope.cancel()
info.teardown.set()
del self._channels[i]
async def add_channel(
self,
token: RBToken,
**kwargs
):
'''
Add a new channel to be handled
'''
if self.closed:
raise trio.ClosedResourceError
await self._n.start(partial(
self._channel_handler_task,
RBToken.from_msg(token),
**kwargs
))
async def remove_channel(self, name: str):
'''
Remove a channel and stop its handling
'''
if self.closed:
raise trio.ClosedResourceError
maybe_entry = self._find_channel(name)
if not maybe_entry:
# return
raise RuntimeError(
f'tried to remove channel {name} but it does not exist'
)
i, info = maybe_entry
self._maybe_destroy_channel(name)
await info.teardown.wait()
# if that was last channel reset connect event
if len(self) == 0:
self._connect_event = trio.Event()
async def wait_for_channel(self):
'''
Wait until at least one channel added
'''
if self.closed:
raise trio.ClosedResourceError
await self._connect_event.wait()
self._connect_event = trio.Event()
def __len__(self) -> int:
return len(self._channels)
def __getitem__(self, name: str):
maybe_entry = self._find_channel(name)
if maybe_entry:
_, info = maybe_entry
return info
raise KeyError(f'Channel {name} not found!')
def open(self):
self._is_closed = False
async def close(self) -> None:
if self.closed:
log.warning('tried to close ChannelManager but it is already closed...')
return
for info in self._channels:
if info.channel.closed:
continue
await info.channel.aclose()
await self.remove_channel(info.token.shm_name)
self._is_closed = True
'''
Ring buffer publisher & subscribe pattern mediated by `ringd` actor.
'''
class RingBufferPublisher(trio.abc.SendChannel[PayloadT]):
'''
Use ChannelManager to create a multi ringbuf round robin sender that can
dynamically add or remove more outputs.
Don't instantiate directly; use the `open_ringbuf_publisher` acm to manage
its lifecycle.
'''
def __init__(
self,
n: trio.Nursery,
# amount of msgs to each ring before switching turns
msgs_per_turn: int = 1,
# global batch size for all channels
batch_size: int = 1,
encoder: Encoder | None = None
):
self._batch_size: int = batch_size
self.msgs_per_turn = msgs_per_turn
self._enc = encoder
# helper to manage acms + long running tasks
self._chanmngr = ChannelManager[RingBufferSendChannel[PayloadT]](
n,
self._open_channel,
self._channel_task
)
# ensure no concurrent `.send()` calls
self._send_lock = trio.StrictFIFOLock()
# index of channel to be used for next send
self._next_turn: int = 0
# amount of messages sent this turn
self._turn_msgs: int = 0
# have we closed this publisher?
# set to `False` on `.__aenter__()`
self._is_closed: bool = True
@property
def closed(self) -> bool:
return self._is_closed
@property
def batch_size(self) -> int:
return self._batch_size
@batch_size.setter
def batch_size(self, value: int) -> None:
for info in self.channels:
info.channel.batch_size = value
@property
def channels(self) -> list[ChannelInfo]:
return self._chanmngr.channels
def _get_next_turn(self) -> int:
'''
Maybe switch turn and reset self._turn_msgs or just increment it.
Return current turn
'''
if self._turn_msgs == self.msgs_per_turn:
self._turn_msgs = 0
self._next_turn += 1
if self._next_turn >= len(self.channels):
self._next_turn = 0
else:
self._turn_msgs += 1
return self._next_turn
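The rotation in `_get_next_turn` above can be sketched as a pure function for inspection; `next_turn` and its parameter names are invented here, but the branch logic mirrors the method:

```python
def next_turn(
    turn: int,
    turn_msgs: int,
    n_chans: int,
    per_turn: int,
) -> tuple[int, int]:
    # maybe switch turn and reset the per-turn msg count,
    # or just increment it (mirrors `_get_next_turn`)
    if turn_msgs == per_turn:
        turn_msgs = 0
        turn += 1
        if turn >= n_chans:
            turn = 0
    else:
        turn_msgs += 1
    return turn, turn_msgs

# with 2 channels and 1 msg per turn, successive sends target:
state = (0, 0)
targets = []
for _ in range(4):
    state = next_turn(state[0], state[1], n_chans=2, per_turn=1)
    targets.append(state[0])
assert targets == [0, 1, 1, 0]
```

Note the faithful quirk: the counter increments *before* the first rotation, so each ring (including the first) serves `per_turn + 1` consecutive sends on its very first turn.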
def get_channel(self, name: str) -> ChannelInfo:
'''
Get underlying ChannelInfo from name
'''
return self._chanmngr[name]
async def add_channel(
self,
token: RBToken,
):
await self._chanmngr.add_channel(token)
async def remove_channel(self, name: str):
await self._chanmngr.remove_channel(name)
@acm
async def _open_channel(
self,
token: RBToken
) -> AsyncContextManager[RingBufferSendChannel[PayloadT]]:
async with attach_to_ringbuf_sender(
token,
batch_size=self._batch_size,
encoder=self._enc
) as ring:
yield ring
async def _channel_task(self, info: ChannelInfo) -> None:
'''
Wait forever until channel cancellation
'''
await trio.sleep_forever()
async def send(self, msg: bytes):
'''
If no output channels connected, wait until one, then fetch the next
channel based on turn.
Needs to acquire `self._send_lock` to ensure no concurrent calls.
'''
if self.closed:
raise trio.ClosedResourceError
if self._send_lock.locked():
raise trio.BusyResourceError
async with self._send_lock:
# wait at least one decoder connected
if len(self.channels) == 0:
await self._chanmngr.wait_for_channel()
turn = self._get_next_turn()
info = self.channels[turn]
await info.channel.send(msg)
async def broadcast(self, msg: PayloadT):
'''
Send a msg to all channels, if no channels connected, does nothing.
'''
if self.closed:
raise trio.ClosedResourceError
for info in self.channels:
await info.channel.send(msg)
async def flush(self, new_batch_size: int | None = None):
for info in self.channels:
try:
await info.channel.flush(new_batch_size=new_batch_size)
except trio.ClosedResourceError:
...
async def __aenter__(self):
self._is_closed = False
self._chanmngr.open()
return self
async def aclose(self) -> None:
if self.closed:
log.warning('tried to close RingBufferPublisher but it is already closed...')
return
await self._chanmngr.close()
self._is_closed = True
class RingBufferSubscriber(trio.abc.ReceiveChannel[PayloadT]):
'''
Use ChannelManager to create a multi ringbuf receiver that can
dynamically add or remove more inputs and combine all into a single output.
In order for `self.receive` to return messages in order, the publisher
sends all payloads as msgpack encoded `OrderedPayload` msgs; this allows
our channel handler tasks to simply stash out-of-order payloads inside
`self._pending_payloads` and, when an in-order payload becomes available,
signal through `self._new_payload_event`.
On `self.receive` we wait until at least one channel is connected, then
if an in-order payload is pending we pop and return it; otherwise we wait
for the next `self._new_payload_event.set()`.
'''
def __init__(
self,
n: trio.Nursery,
decoder: Decoder | None = None
):
self._dec = decoder
self._chanmngr = ChannelManager[RingBufferReceiveChannel[PayloadT]](
n,
self._open_channel,
self._channel_task
)
self._schan, self._rchan = trio.open_memory_channel(0)
self._is_closed: bool = True
self._receive_lock = trio.StrictFIFOLock()
@property
def closed(self) -> bool:
return self._is_closed
@property
def channels(self) -> list[ChannelInfo]:
return self._chanmngr.channels
def get_channel(self, name: str):
return self._chanmngr[name]
async def add_channel(
self,
token: RBToken
):
await self._chanmngr.add_channel(token)
async def remove_channel(self, name: str):
await self._chanmngr.remove_channel(name)
@acm
async def _open_channel(
self,
token: RBToken
) -> AsyncContextManager[RingBufferReceiveChannel]:
async with attach_to_ringbuf_receiver(
token,
decoder=self._dec
) as ring:
yield ring
async def _channel_task(self, info: ChannelInfo) -> None:
'''
Forward messages from the underlying receive channel into our internal
memory channel; exit when the channel is removed mid-receive, reaches
EOF, or is closed.
'''
while True:
try:
msg = await info.channel.receive()
await self._schan.send(msg)
except tractor.linux.eventfd.EFDReadCancelled as e:
# when channel gets removed while we are doing a receive
log.exception(e)
break
except trio.EndOfChannel:
break
except trio.ClosedResourceError:
break
async def receive(self) -> PayloadT:
'''
Receive next in order msg
'''
if self.closed:
raise trio.ClosedResourceError
if self._receive_lock.locked():
raise trio.BusyResourceError
async with self._receive_lock:
return await self._rchan.receive()
async def __aenter__(self):
self._is_closed = False
self._chanmngr.open()
return self
async def aclose(self) -> None:
if self.closed:
return
await self._chanmngr.close()
await self._schan.aclose()
await self._rchan.aclose()
self._is_closed = True
'''
Actor module for managing publisher & subscriber channels remotely through
`tractor.context` rpc
'''
@dataclass
class PublisherEntry:
publisher: RingBufferPublisher | None = None
    # NOTE: per-instance event; a bare `trio.Event()` default would be
    # shared across every entry instance
    is_set: trio.Event = field(default_factory=trio.Event)
_publishers: dict[str, PublisherEntry] = {}
def maybe_init_publisher(topic: str) -> PublisherEntry:
entry = _publishers.get(topic, None)
if not entry:
entry = PublisherEntry()
_publishers[topic] = entry
return entry
def set_publisher(topic: str, pub: RingBufferPublisher):
global _publishers
entry = _publishers.get(topic, None)
if not entry:
entry = maybe_init_publisher(topic)
if entry.publisher:
raise RuntimeError(
f'publisher for topic {topic} already set on {tractor.current_actor()}'
)
entry.publisher = pub
entry.is_set.set()
def get_publisher(topic: str = 'default') -> RingBufferPublisher:
entry = _publishers.get(topic, None)
if not entry or not entry.publisher:
raise RuntimeError(
            f'{tractor.current_actor()} tried to get publisher '
            'but it\'s not set'
)
return entry.publisher
async def wait_publisher(topic: str) -> RingBufferPublisher:
entry = maybe_init_publisher(topic)
await entry.is_set.wait()
return entry.publisher
@tractor.context
async def _add_pub_channel(
ctx: tractor.Context,
topic: str,
token: RBToken
):
publisher = await wait_publisher(topic)
await publisher.add_channel(token)
@tractor.context
async def _remove_pub_channel(
ctx: tractor.Context,
topic: str,
ring_name: str
):
publisher = await wait_publisher(topic)
maybe_token = fdshare.maybe_get_fds(ring_name)
if maybe_token:
await publisher.remove_channel(ring_name)
@acm
async def open_pub_channel_at(
actor_name: str,
token: RBToken,
topic: str = 'default',
):
async with tractor.find_actor(actor_name) as portal:
await portal.run(_add_pub_channel, topic=topic, token=token)
try:
yield
except trio.Cancelled:
log.warning(
'open_pub_channel_at got cancelled!\n'
f'\tactor_name = {actor_name}\n'
f'\ttoken = {token}\n'
)
raise
await portal.run(_remove_pub_channel, topic=topic, ring_name=token.shm_name)
@dataclass
class SubscriberEntry:
subscriber: RingBufferSubscriber | None = None
    # NOTE: per-instance event; a bare `trio.Event()` default would be
    # shared across every entry instance
    is_set: trio.Event = field(default_factory=trio.Event)
_subscribers: dict[str, SubscriberEntry] = {}
def maybe_init_subscriber(topic: str) -> SubscriberEntry:
entry = _subscribers.get(topic, None)
if not entry:
entry = SubscriberEntry()
_subscribers[topic] = entry
return entry
def set_subscriber(topic: str, sub: RingBufferSubscriber):
global _subscribers
entry = _subscribers.get(topic, None)
if not entry:
entry = maybe_init_subscriber(topic)
if entry.subscriber:
raise RuntimeError(
f'subscriber for topic {topic} already set on {tractor.current_actor()}'
)
entry.subscriber = sub
entry.is_set.set()
def get_subscriber(topic: str = 'default') -> RingBufferSubscriber:
entry = _subscribers.get(topic, None)
if not entry or not entry.subscriber:
raise RuntimeError(
            f'{tractor.current_actor()} tried to get subscriber '
            'but it\'s not set'
)
return entry.subscriber
async def wait_subscriber(topic: str) -> RingBufferSubscriber:
entry = maybe_init_subscriber(topic)
await entry.is_set.wait()
return entry.subscriber
@tractor.context
async def _add_sub_channel(
ctx: tractor.Context,
topic: str,
token: RBToken
):
subscriber = await wait_subscriber(topic)
await subscriber.add_channel(token)
@tractor.context
async def _remove_sub_channel(
ctx: tractor.Context,
topic: str,
ring_name: str
):
subscriber = await wait_subscriber(topic)
maybe_token = fdshare.maybe_get_fds(ring_name)
if maybe_token:
await subscriber.remove_channel(ring_name)
@acm
async def open_sub_channel_at(
actor_name: str,
token: RBToken,
topic: str = 'default',
):
async with tractor.find_actor(actor_name) as portal:
await portal.run(_add_sub_channel, topic=topic, token=token)
try:
yield
except trio.Cancelled:
log.warning(
'open_sub_channel_at got cancelled!\n'
f'\tactor_name = {actor_name}\n'
f'\ttoken = {token}\n'
)
raise
await portal.run(_remove_sub_channel, topic=topic, ring_name=token.shm_name)
'''
High level helpers to open publisher & subscriber
'''
@acm
async def open_ringbuf_publisher(
# name to distinguish this publisher
topic: str = 'default',
# global batch size for channels
batch_size: int = 1,
# messages before changing output channel
msgs_per_turn: int = 1,
encoder: Encoder | None = None,
# ensure subscriber receives in same order publisher sent
# causes it to use wrapped payloads which contain the og
# index
guarantee_order: bool = False,
    # on creation, register this publisher under `topic` in the module's
    # `_publishers` table in order to use the provided tractor.context &
    # helper utils for adding and removing new channels from remote
    # actors
set_module_var: bool = True
) -> AsyncContextManager[RingBufferPublisher]:
'''
Open a new ringbuf publisher
'''
async with (
trio.open_nursery(strict_exception_groups=False) as n,
RingBufferPublisher(
n,
batch_size=batch_size,
encoder=encoder,
) as publisher
):
if guarantee_order:
order_send_channel(publisher)
if set_module_var:
set_publisher(topic, publisher)
yield publisher
n.cancel_scope.cancel()
@acm
async def open_ringbuf_subscriber(
# name to distinguish this subscriber
topic: str = 'default',
decoder: Decoder | None = None,
# expect indexed payloads and unwrap them in order
guarantee_order: bool = False,
    # on creation, register this subscriber under `topic` in the module's
    # `_subscribers` table in order to use the provided tractor.context &
    # helper utils for adding and removing new channels from remote
    # actors
set_module_var: bool = True
) -> AsyncContextManager[RingBufferSubscriber]:
'''
Open a new ringbuf subscriber
'''
async with (
trio.open_nursery(strict_exception_groups=False) as n,
RingBufferSubscriber(n, decoder=decoder) as subscriber
):
# maybe monkey patch `.receive` to use indexed payloads
if guarantee_order:
order_receive_channel(subscriber)
# maybe set global module var for remote actor channel updates
if set_module_var:
set_subscriber(topic, subscriber)
yield subscriber
n.cancel_scope.cancel()
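The `guarantee_order` flag above makes the publisher wrap each payload with its send index and the subscriber release payloads strictly in that order. A minimal stand-alone sketch of that idea (plain python, no tractor/trio; `index_payloads` and `OrderedReceiver` are hypothetical names, not the actual `order_send_channel`/`order_receive_channel` helpers):

```python
def index_payloads(payloads):
    # sender side: tag each payload with its monotonic send index
    return [(i, p) for i, p in enumerate(payloads)]


class OrderedReceiver:
    '''
    Buffer payloads that arrive early and only release them once all
    lower-indexed payloads have been seen.

    '''
    def __init__(self) -> None:
        self._next = 0
        self._pending: dict[int, object] = {}

    def push(self, indexed: tuple[int, object]) -> list[object]:
        # stash the payload, then drain everything now in order
        idx, payload = indexed
        self._pending[idx] = payload
        out = []
        while self._next in self._pending:
            out.append(self._pending.pop(self._next))
            self._next += 1
        return out
```

So pushing `(1, 'b')` yields nothing, while the later `(0, 'a')` releases both `'a'` and `'b'`, which is why a subscriber multiplexing several rings can still hand back the publisher's send order.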

File diff suppressed because it is too large

@@ -50,7 +50,10 @@ if _USE_POSIX:
     try:
         import numpy as np
         from numpy.lib import recfunctions as rfn
-        import nptyping
+        # TODO ruff complains with,
+        # warning| F401: `nptyping` imported but unused; consider using
+        # `importlib.util.find_spec` to test for availability
+        import nptyping  # noqa
     except ImportError:
         pass


@@ -18,389 +18,195 @@ TCP implementation of tractor.ipc._transport.MsgTransport protocol
 '''
 from __future__ import annotations
-from collections.abc import (
-    AsyncGenerator,
-    AsyncIterator,
-)
-import struct
 from typing import (
-    Any,
-    Callable,
-    Type,
+    ClassVar,
 )
-# from contextlib import (
-#     asynccontextmanager as acm,
-# )
 import msgspec
-from tricycle import BufferedReceiveStream
 import trio
+from trio import (
+    SocketListener,
+    open_tcp_listeners,
+)

-from tractor.msg import MsgCodec
 from tractor.log import get_logger
-from tractor._exceptions import (
-    MsgTypeError,
-    TransportClosed,
-    _mk_send_mte,
-    _mk_recv_mte,
-)
-from tractor.msg import (
-    _ctxvar_MsgCodec,
-    # _codec, XXX  see `self._codec` sanity/debug checks
-    MsgCodec,
-    types as msgtypes,
-    pretty_struct,
-)
-from tractor.ipc import MsgTransport
+from tractor.ipc._transport import (
+    MsgTransport,
+    MsgpackTransport,
+)

 log = get_logger(__name__)

-def get_stream_addrs(
-    stream: trio.SocketStream
-) -> tuple[
-    tuple[str, int],  # local
-    tuple[str, int],  # remote
-]:
+class TCPAddress(
+    msgspec.Struct,
+    frozen=True,
+):
+    _host: str
+    _port: int
proto_key: ClassVar[str] = 'tcp'
unwrapped_type: ClassVar[type] = tuple[str, int]
def_bindspace: ClassVar[str] = '127.0.0.1'
@property
def is_valid(self) -> bool:
return self._port != 0
@property
def bindspace(self) -> str:
return self._host
@property
def domain(self) -> str:
return self._host
@classmethod
def from_addr(
cls,
addr: tuple[str, int]
) -> TCPAddress:
match addr:
case (str(), int()):
return TCPAddress(addr[0], addr[1])
case _:
raise ValueError(
f'Invalid unwrapped address for {cls}\n'
f'{addr}\n'
)
def unwrap(self) -> tuple[str, int]:
return (
self._host,
self._port,
)
@classmethod
def get_random(
cls,
bindspace: str = def_bindspace,
) -> TCPAddress:
return TCPAddress(bindspace, 0)
@classmethod
def get_root(cls) -> TCPAddress:
return TCPAddress(
'127.0.0.1',
1616,
)
def __repr__(self) -> str:
return (
f'{type(self).__name__}[{self.unwrap()}]'
)
@classmethod
def get_transport(
cls,
codec: str = 'msgpack',
) -> MsgTransport:
match codec:
            case 'msgpack':
return MsgpackTCPStream
case _:
raise ValueError(
f'No IPC transport with {codec!r} supported !'
)
async def start_listener(
addr: TCPAddress,
**kwargs,
) -> SocketListener:
-    '''
-    Return the `trio` streaming transport prot's socket-addrs for
-    both the local and remote sides as a pair.
-
-    '''
-    # rn, should both be IP sockets
-    lsockname = stream.socket.getsockname()
-    rsockname = stream.socket.getpeername()
-    return (
-        tuple(lsockname[:2]),
-        tuple(rsockname[:2]),
-    )
+    '''
+    Start a TCP socket listener on the given `TCPAddress`.
+
+    '''
+    # ?TODO, maybe we should just change the lower-level call this is
+    # using internall per-listener?
+    listeners: list[SocketListener] = await open_tcp_listeners(
+        host=addr._host,
+        port=addr._port,
+        **kwargs
+    )
+    # NOTE, for now we don't expect non-singleton-resolving
+    # domain-addresses/multi-homed-hosts.
+    # (though it is supported by `open_tcp_listeners()`)
+    assert len(listeners) == 1
+    listener = listeners[0]
+    host, port = listener.socket.getsockname()[:2]
+    return listener
 # TODO: typing oddity.. not sure why we have to inherit here, but it
 # seems to be an issue with `get_msg_transport()` returning
 # a `Type[Protocol]`; probably should make a `mypy` issue?
-class MsgpackTCPStream(MsgTransport):
+class MsgpackTCPStream(MsgpackTransport):
     '''
     A ``trio.SocketStream`` delivering ``msgpack`` formatted data
     using the ``msgspec`` codec lib.

     '''
+    address_type = TCPAddress
     layer_key: int = 4
name_key: str = 'tcp'
# TODO: better naming for this?
# -[ ] check how libp2p does naming for such things?
codec_key: str = 'msgpack'
def __init__(
self,
stream: trio.SocketStream,
prefix_size: int = 4,
# XXX optionally provided codec pair for `msgspec`:
# https://jcristharif.com/msgspec/extending.html#mapping-to-from-native-types
#
# TODO: define this as a `Codec` struct which can be
# overriden dynamically by the application/runtime?
codec: tuple[
Callable[[Any], Any]|None, # coder
Callable[[type, Any], Any]|None, # decoder
]|None = None,
) -> None:
self.stream = stream
assert self.stream.socket
# should both be IP sockets
self._laddr, self._raddr = get_stream_addrs(stream)
# create read loop instance
self._aiter_pkts = self._iter_packets()
self._send_lock = trio.StrictFIFOLock()
# public i guess?
self.drained: list[dict] = []
self.recv_stream = BufferedReceiveStream(
transport_stream=stream
)
self.prefix_size = prefix_size
# allow for custom IPC msg interchange format
# dynamic override Bo
self._task = trio.lowlevel.current_task()
# XXX for ctxvar debug only!
# self._codec: MsgCodec = (
# codec
# or
# _codec._ctxvar_MsgCodec.get()
# )
async def _iter_packets(self) -> AsyncGenerator[dict, None]:
'''
Yield `bytes`-blob decoded packets from the underlying TCP
stream using the current task's `MsgCodec`.
This is a streaming routine implemented as an async generator
func (which was the original design, but could be changed?)
and is allocated by a `.__call__()` inside `.__init__()` where
it is assigned to the `._aiter_pkts` attr.
'''
decodes_failed: int = 0
while True:
try:
header: bytes = await self.recv_stream.receive_exactly(4)
except (
ValueError,
ConnectionResetError,
# not sure entirely why we need this but without it we
# seem to be getting racy failures here on
# arbiter/registry name subs..
trio.BrokenResourceError,
) as trans_err:
loglevel = 'transport'
match trans_err:
# case (
# ConnectionResetError()
# ):
# loglevel = 'transport'
# peer actor (graceful??) TCP EOF but `tricycle`
# seems to raise a 0-bytes-read?
case ValueError() if (
'unclean EOF' in trans_err.args[0]
):
pass
# peer actor (task) prolly shutdown quickly due
# to cancellation
case trio.BrokenResourceError() if (
'Connection reset by peer' in trans_err.args[0]
):
pass
# unless the disconnect condition falls under "a
# normal operation breakage" we usualy console warn
# about it.
case _:
loglevel: str = 'warning'
raise TransportClosed(
message=(
f'IPC transport already closed by peer\n'
f'x)> {type(trans_err)}\n'
f' |_{self}\n'
),
loglevel=loglevel,
) from trans_err
# XXX definitely can happen if transport is closed
# manually by another `trio.lowlevel.Task` in the
# same actor; we use this in some simulated fault
# testing for ex, but generally should never happen
# under normal operation!
#
# NOTE: as such we always re-raise this error from the
# RPC msg loop!
except trio.ClosedResourceError as closure_err:
raise TransportClosed(
message=(
f'IPC transport already manually closed locally?\n'
f'x)> {type(closure_err)} \n'
f' |_{self}\n'
),
loglevel='error',
raise_on_report=(
closure_err.args[0] == 'another task closed this fd'
or
closure_err.args[0] in ['another task closed this fd']
),
) from closure_err
# graceful TCP EOF disconnect
if header == b'':
raise TransportClosed(
message=(
f'IPC transport already gracefully closed\n'
f')>\n'
f'|_{self}\n'
),
loglevel='transport',
# cause=??? # handy or no?
)
size: int
size, = struct.unpack("<I", header)
log.transport(f'received header {size}') # type: ignore
msg_bytes: bytes = await self.recv_stream.receive_exactly(size)
log.transport(f"received {msg_bytes}") # type: ignore
try:
# NOTE: lookup the `trio.Task.context`'s var for
# the current `MsgCodec`.
codec: MsgCodec = _ctxvar_MsgCodec.get()
# XXX for ctxvar debug only!
# if self._codec.pld_spec != codec.pld_spec:
# assert (
# task := trio.lowlevel.current_task()
# ) is not self._task
# self._task = task
# self._codec = codec
# log.runtime(
# f'Using new codec in {self}.recv()\n'
# f'codec: {self._codec}\n\n'
# f'msg_bytes: {msg_bytes}\n'
# )
yield codec.decode(msg_bytes)
# XXX NOTE: since the below error derives from
# `DecodeError` we need to catch is specially
# and always raise such that spec violations
# are never allowed to be caught silently!
except msgspec.ValidationError as verr:
msgtyperr: MsgTypeError = _mk_recv_mte(
msg=msg_bytes,
codec=codec,
src_validation_error=verr,
)
# XXX deliver up to `Channel.recv()` where
# a re-raise and `Error`-pack can inject the far
# end actor `.uid`.
yield msgtyperr
except (
msgspec.DecodeError,
UnicodeDecodeError,
):
if decodes_failed < 4:
# ignore decoding errors for now and assume they have to
# do with a channel drop - hope that receiving from the
# channel will raise an expected error and bubble up.
try:
msg_str: str|bytes = msg_bytes.decode()
except UnicodeDecodeError:
msg_str = msg_bytes
log.exception(
'Failed to decode msg?\n'
f'{codec}\n\n'
'Rxed bytes from wire:\n\n'
f'{msg_str!r}\n'
)
decodes_failed += 1
else:
raise
async def send(
self,
msg: msgtypes.MsgType,
strict_types: bool = True,
hide_tb: bool = False,
) -> None:
'''
Send a msgpack encoded py-object-blob-as-msg over TCP.
If `strict_types == True` then a `MsgTypeError` will be raised on any
invalid msg type
'''
__tracebackhide__: bool = hide_tb
# XXX see `trio._sync.AsyncContextManagerMixin` for details
# on the `.acquire()`/`.release()` sequencing..
async with self._send_lock:
# NOTE: lookup the `trio.Task.context`'s var for
# the current `MsgCodec`.
codec: MsgCodec = _ctxvar_MsgCodec.get()
# XXX for ctxvar debug only!
# if self._codec.pld_spec != codec.pld_spec:
# self._codec = codec
# log.runtime(
# f'Using new codec in {self}.send()\n'
# f'codec: {self._codec}\n\n'
# f'msg: {msg}\n'
# )
if type(msg) not in msgtypes.__msg_types__:
if strict_types:
raise _mk_send_mte(
msg,
codec=codec,
)
else:
log.warning(
'Sending non-`Msg`-spec msg?\n\n'
f'{msg}\n'
)
try:
bytes_data: bytes = codec.encode(msg)
except TypeError as _err:
typerr = _err
msgtyperr: MsgTypeError = _mk_send_mte(
msg,
codec=codec,
message=(
f'IPC-msg-spec violation in\n\n'
f'{pretty_struct.Struct.pformat(msg)}'
),
src_type_error=typerr,
)
raise msgtyperr from typerr
# supposedly the fastest says,
# https://stackoverflow.com/a/54027962
size: bytes = struct.pack("<I", len(bytes_data))
return await self.stream.send_all(size + bytes_data)
# ?TODO? does it help ever to dynamically show this
# frame?
# try:
# <the-above_code>
# except BaseException as _err:
# err = _err
# if not isinstance(err, MsgTypeError):
# __tracebackhide__: bool = False
# raise
-    @property
-    def laddr(self) -> tuple[str, int]:
-        return self._laddr
-
-    @property
-    def raddr(self) -> tuple[str, int]:
-        return self._raddr
-
-    async def recv(self) -> Any:
-        return await self._aiter_pkts.asend(None)
+    @property
+    def maddr(self) -> str:
+        host, port = self.raddr.unwrap()
+        return (
+            # TODO, use `ipaddress` from stdlib to handle
+            # first detecting which of `ipv4/6` before
+            # choosing the routing prefix part.
+            f'/ipv4/{host}'
+            f'/{self.address_type.proto_key}/{port}'
+            # f'/{self.chan.uid[0]}'
+            # f'/{self.cid}'
+            # f'/cid={cid_head}..{cid_tail}'
+            # TODO: ? not use this ^ right ?
+        )
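The TODO in `maddr` above asks for stdlib-based ipv4/ipv6 detection before choosing the multiaddr routing prefix; a small sketch of that (the `maddr_for` helper name is hypothetical):

```python
import ipaddress


def maddr_for(host: str, port: int) -> str:
    # per the TODO above: let the stdlib classify the address family
    # instead of hardcoding the '/ipv4' prefix
    version = ipaddress.ip_address(host).version
    prefix = 'ipv4' if version == 4 else 'ipv6'
    return f'/{prefix}/{host}/tcp/{port}'
```

`ipaddress.ip_address` raises `ValueError` for a hostname that isn't a literal IP, so a fuller version would need a `/dns/...` fallback for unresolved names.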
async def drain(self) -> AsyncIterator[dict]:
'''
Drain the stream's remaining messages sent from
the far end until the connection is closed by
the peer.
'''
try:
async for msg in self._iter_packets():
self.drained.append(msg)
except TransportClosed:
for msg in self.drained:
yield msg
def __aiter__(self):
return self._aiter_pkts
    def connected(self) -> bool:
        return self.stream.socket.fileno() != -1
@classmethod
async def connect_to(
cls,
destaddr: TCPAddress,
prefix_size: int = 4,
codec: MsgCodec|None = None,
**kwargs
) -> MsgpackTCPStream:
stream = await trio.open_tcp_stream(
*destaddr.unwrap(),
**kwargs
)
return MsgpackTCPStream(
stream,
prefix_size=prefix_size,
codec=codec
)
@classmethod
def get_stream_addrs(
cls,
stream: trio.SocketStream
) -> tuple[
TCPAddress,
TCPAddress,
]:
# TODO, what types are these?
lsockname = stream.socket.getsockname()
l_sockaddr: tuple[str, int] = tuple(lsockname[:2])
rsockname = stream.socket.getpeername()
r_sockaddr: tuple[str, int] = tuple(rsockname[:2])
return (
TCPAddress.from_addr(l_sockaddr),
TCPAddress.from_addr(r_sockaddr),
)


@@ -14,38 +14,75 @@
 # You should have received a copy of the GNU Affero General Public License
 # along with this program.  If not, see <https://www.gnu.org/licenses/>.
 '''
-typing.Protocol based generic msg API, implement this class to add backends for
-tractor.ipc.Channel
+typing.Protocol based generic msg API, implement this class to add
+backends for tractor.ipc.Channel

 '''
-import trio
+from __future__ import annotations
 from typing import (
     runtime_checkable,
+    Type,
     Protocol,
-    TypeVar,
+    # TypeVar,
+    ClassVar,
+    TYPE_CHECKING,
 )
-from collections.abc import AsyncIterator
+from collections.abc import (
+    AsyncGenerator,
+    AsyncIterator,
+)
+import struct
+
+import trio
+import msgspec
+from tricycle import BufferedReceiveStream
+
+from tractor.log import get_logger
+from tractor._exceptions import (
+    MsgTypeError,
+    TransportClosed,
+    _mk_send_mte,
+    _mk_recv_mte,
+)
+from tractor.msg import (
+    _ctxvar_MsgCodec,
+    # _codec, XXX  see `self._codec` sanity/debug checks
+    MsgCodec,
+    MsgType,
+    types as msgtypes,
+    pretty_struct,
+)
+
+if TYPE_CHECKING:
+    from tractor._addr import Address
+
+log = get_logger(__name__)
+
+# (codec, transport)
+MsgTransportKey = tuple[str, str]

 # from tractor.msg.types import MsgType
 # ?TODO? this should be our `Union[*msgtypes.__spec__]` alias now right..?
 # => BLEH, except can't bc prots must inherit typevar or param-spec
 # vars..
-MsgType = TypeVar('MsgType')
+# MsgType = TypeVar('MsgType')

 @runtime_checkable
-class MsgTransport(Protocol[MsgType]):
+class MsgTransport(Protocol):
     #
+    # class MsgTransport(Protocol[MsgType]):
     # ^-TODO-^ consider using a generic def and indexing with our
     # eventual msg definition/types?
     # - https://docs.python.org/3/library/typing.html#typing.Protocol

-    stream: trio.SocketStream
+    stream: trio.abc.Stream
     drained: list[MsgType]

-    def __init__(self, stream: trio.SocketStream) -> None:
-        ...
+    address_type: ClassVar[Type[Address]]
+    codec_key: ClassVar[str]

     # XXX: should this instead be called `.sendall()`?
     async def send(self, msg: MsgType) -> None:
@@ -65,10 +102,413 @@ class MsgTransport(Protocol[MsgType]):
     def drain(self) -> AsyncIterator[dict]:
         ...

+    @classmethod
+    def key(cls) -> MsgTransportKey:
+        return (
+            cls.codec_key,
+            cls.address_type.proto_key,
+        )

     @property
-    def laddr(self) -> tuple[str, int]:
+    def laddr(self) -> Address:
         ...

     @property
-    def raddr(self) -> tuple[str, int]:
+    def raddr(self) -> Address:
         ...

+    @property
+    def maddr(self) -> str:
+        ...
@classmethod
async def connect_to(
cls,
addr: Address,
**kwargs
) -> MsgTransport:
...
@classmethod
def get_stream_addrs(
cls,
stream: trio.abc.Stream
) -> tuple[
Address, # local
Address # remote
]:
'''
Return the transport protocol's address pair for the local
and remote-peer side.
'''
...
# TODO, such that all `.raddr`s for each `SocketStream` are
# delivered?
# -[ ] move `.open_listener()` here and internally track the
# listener set, per address?
# def get_peers(
# self,
# ) -> list[Address]:
# ...
class MsgpackTransport(MsgTransport):
# TODO: better naming for this?
# -[ ] check how libp2p does naming for such things?
codec_key: str = 'msgpack'
def __init__(
self,
stream: trio.abc.Stream,
prefix_size: int = 4,
# XXX optionally provided codec pair for `msgspec`:
# https://jcristharif.com/msgspec/extending.html#mapping-to-from-native-types
#
# TODO: define this as a `Codec` struct which can be
# overriden dynamically by the application/runtime?
codec: MsgCodec = None,
) -> None:
self.stream = stream
(
self._laddr,
self._raddr,
) = self.get_stream_addrs(stream)
# create read loop instance
self._aiter_pkts = self._iter_packets()
self._send_lock = trio.StrictFIFOLock()
# public i guess?
self.drained: list[dict] = []
self.recv_stream = BufferedReceiveStream(
transport_stream=stream
)
self.prefix_size = prefix_size
# allow for custom IPC msg interchange format
# dynamic override Bo
self._task = trio.lowlevel.current_task()
# XXX for ctxvar debug only!
# self._codec: MsgCodec = (
# codec
# or
# _codec._ctxvar_MsgCodec.get()
# )
async def _iter_packets(self) -> AsyncGenerator[dict, None]:
'''
Yield `bytes`-blob decoded packets from the underlying TCP
stream using the current task's `MsgCodec`.
This is a streaming routine implemented as an async generator
func (which was the original design, but could be changed?)
and is allocated by a `.__call__()` inside `.__init__()` where
it is assigned to the `._aiter_pkts` attr.
'''
decodes_failed: int = 0
tpt_name: str = f'{type(self).__name__!r}'
while True:
try:
header: bytes = await self.recv_stream.receive_exactly(4)
except (
ValueError,
ConnectionResetError,
# not sure entirely why we need this but without it we
# seem to be getting racy failures here on
# arbiter/registry name subs..
trio.BrokenResourceError,
) as trans_err:
loglevel = 'transport'
match trans_err:
# case (
# ConnectionResetError()
# ):
# loglevel = 'transport'
# peer actor (graceful??) TCP EOF but `tricycle`
# seems to raise a 0-bytes-read?
case ValueError() if (
'unclean EOF' in trans_err.args[0]
):
pass
# peer actor (task) prolly shutdown quickly due
# to cancellation
case trio.BrokenResourceError() if (
'Connection reset by peer' in trans_err.args[0]
):
pass
# unless the disconnect condition falls under "a
# normal operation breakage" we usualy console warn
# about it.
case _:
loglevel: str = 'warning'
raise TransportClosed(
message=(
f'{tpt_name} already closed by peer\n'
),
src_exc=trans_err,
loglevel=loglevel,
) from trans_err
# XXX definitely can happen if transport is closed
# manually by another `trio.lowlevel.Task` in the
# same actor; we use this in some simulated fault
# testing for ex, but generally should never happen
# under normal operation!
#
# NOTE: as such we always re-raise this error from the
# RPC msg loop!
except trio.ClosedResourceError as cre:
closure_err = cre
raise TransportClosed(
message=(
f'{tpt_name} was already closed locally ?\n'
),
src_exc=closure_err,
loglevel='error',
raise_on_report=(
'another task closed this fd' in closure_err.args
),
) from closure_err
# graceful TCP EOF disconnect
if header == b'':
raise TransportClosed(
message=(
f'{tpt_name} already gracefully closed\n'
),
loglevel='transport',
)
size: int
size, = struct.unpack("<I", header)
log.transport(f'received header {size}') # type: ignore
msg_bytes: bytes = await self.recv_stream.receive_exactly(size)
log.transport(f"received {msg_bytes}") # type: ignore
try:
# NOTE: lookup the `trio.Task.context`'s var for
# the current `MsgCodec`.
codec: MsgCodec = _ctxvar_MsgCodec.get()
# XXX for ctxvar debug only!
# if self._codec.pld_spec != codec.pld_spec:
# assert (
# task := trio.lowlevel.current_task()
# ) is not self._task
# self._task = task
# self._codec = codec
# log.runtime(
# f'Using new codec in {self}.recv()\n'
# f'codec: {self._codec}\n\n'
# f'msg_bytes: {msg_bytes}\n'
# )
yield codec.decode(msg_bytes)
# XXX NOTE: since the below error derives from
# `DecodeError` we need to catch is specially
# and always raise such that spec violations
# are never allowed to be caught silently!
except msgspec.ValidationError as verr:
msgtyperr: MsgTypeError = _mk_recv_mte(
msg=msg_bytes,
codec=codec,
src_validation_error=verr,
)
# XXX deliver up to `Channel.recv()` where
# a re-raise and `Error`-pack can inject the far
# end actor `.uid`.
yield msgtyperr
except (
msgspec.DecodeError,
UnicodeDecodeError,
):
if decodes_failed < 4:
# ignore decoding errors for now and assume they have to
# do with a channel drop - hope that receiving from the
# channel will raise an expected error and bubble up.
try:
msg_str: str|bytes = msg_bytes.decode()
except UnicodeDecodeError:
msg_str = msg_bytes
log.exception(
'Failed to decode msg?\n'
f'{codec}\n\n'
'Rxed bytes from wire:\n\n'
f'{msg_str!r}\n'
)
decodes_failed += 1
else:
raise
async def send(
self,
msg: msgtypes.MsgType,
strict_types: bool = True,
hide_tb: bool = True,
) -> None:
'''
Send a msgpack encoded py-object-blob-as-msg over TCP.
If `strict_types == True` then a `MsgTypeError` will be raised on any
invalid msg type
'''
__tracebackhide__: bool = hide_tb
# XXX see `trio._sync.AsyncContextManagerMixin` for details
# on the `.acquire()`/`.release()` sequencing..
async with self._send_lock:
# NOTE: lookup the `trio.Task.context`'s var for
# the current `MsgCodec`.
codec: MsgCodec = _ctxvar_MsgCodec.get()
# XXX for ctxvar debug only!
# if self._codec.pld_spec != codec.pld_spec:
# self._codec = codec
# log.runtime(
# f'Using new codec in {self}.send()\n'
# f'codec: {self._codec}\n\n'
# f'msg: {msg}\n'
# )
if type(msg) not in msgtypes.__msg_types__:
if strict_types:
raise _mk_send_mte(
msg,
codec=codec,
)
else:
log.warning(
'Sending non-`Msg`-spec msg?\n\n'
f'{msg}\n'
)
try:
bytes_data: bytes = codec.encode(msg)
except TypeError as _err:
typerr = _err
msgtyperr: MsgTypeError = _mk_send_mte(
msg,
codec=codec,
message=(
f'IPC-msg-spec violation in\n\n'
f'{pretty_struct.Struct.pformat(msg)}'
),
src_type_error=typerr,
)
raise msgtyperr from typerr
# supposedly the fastest says,
# https://stackoverflow.com/a/54027962
size: bytes = struct.pack("<I", len(bytes_data))
try:
return await self.stream.send_all(size + bytes_data)
except (
trio.BrokenResourceError,
) as bre:
trans_err = bre
tpt_name: str = f'{type(self).__name__!r}'
match trans_err:
case trio.BrokenResourceError() if (
'[Errno 32] Broken pipe' in trans_err.args[0]
# ^XXX, specifc to UDS transport and its,
# well, "speediness".. XD
# |_ likely todo with races related to how fast
# the socket is setup/torn-down on linux
# as it pertains to rando pings from the
# `.discovery` subsys and protos.
):
raise TransportClosed.from_src_exc(
message=(
f'{tpt_name} already closed by peer\n'
),
body=f'{self}\n',
src_exc=trans_err,
raise_on_report=True,
loglevel='transport',
) from bre
# unless the disconnect condition falls under "a
# normal operation breakage" we usualy console warn
# about it.
case _:
                    log.exception(
                        f'{tpt_name} layer failed pre-send ??\n'
                    )
raise trans_err
# ?TODO? does it help ever to dynamically show this
# frame?
# try:
# <the-above_code>
# except BaseException as _err:
# err = _err
# if not isinstance(err, MsgTypeError):
# __tracebackhide__: bool = False
# raise
async def recv(self) -> msgtypes.MsgType:
return await self._aiter_pkts.asend(None)
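The wire format used by `_iter_packets()`/`send()` above is simple length-prefixed framing: a 4-byte little-endian `<I` size header followed by the body, with an empty header read signalling graceful EOF. A stdlib-only sketch of both directions (`frame`/`read_frames` are illustrative names, not the module's API):

```python
import io
import struct


def frame(payload: bytes) -> bytes:
    # 4-byte little-endian length header, then the body,
    # same as `struct.pack("<I", len(bytes_data)) + bytes_data` above
    return struct.pack('<I', len(payload)) + payload


def read_frames(stream: io.BufferedIOBase):
    # mirror of `_iter_packets()`: read the fixed-size header, then
    # exactly that many body bytes; an empty header read means EOF
    while True:
        header = stream.read(4)
        if header == b'':
            return  # graceful EOF, peer closed cleanly
        size, = struct.unpack('<I', header)
        yield stream.read(size)
```

Note the real transport uses `receive_exactly()` from `tricycle.BufferedReceiveStream` so short reads can't split a header or body; this sketch relies on `BytesIO` never short-reading.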
async def drain(self) -> AsyncIterator[dict]:
'''
Drain the stream's remaining messages sent from
the far end until the connection is closed by
the peer.
'''
try:
async for msg in self._iter_packets():
self.drained.append(msg)
except TransportClosed:
for msg in self.drained:
yield msg
def __aiter__(self):
return self._aiter_pkts
@property
def laddr(self) -> Address:
return self._laddr
@property
def raddr(self) -> Address:
return self._raddr
def pformat(self) -> str:
return (
f'<{type(self).__name__}(\n'
f' |_peers: 2\n'
f' laddr: {self._laddr}\n'
f' raddr: {self._raddr}\n'
# f'\n'
f' |_task: {self._task}\n'
f')>\n'
)
__repr__ = __str__ = pformat


@@ -0,0 +1,123 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
IPC subsys type-lookup helpers?
'''
from typing import (
Type,
# TYPE_CHECKING,
)
import trio
import socket
from tractor.ipc._transport import (
MsgTransportKey,
MsgTransport
)
from tractor.ipc._tcp import (
TCPAddress,
MsgpackTCPStream,
)
from tractor.ipc._uds import (
UDSAddress,
MsgpackUDSStream,
)
# if TYPE_CHECKING:
# from tractor._addr import Address
Address = TCPAddress|UDSAddress
# manually updated list of all supported msg transport types
_msg_transports = [
MsgpackTCPStream,
MsgpackUDSStream
]
# convert a MsgTransportKey to the corresponding transport type
_key_to_transport: dict[
MsgTransportKey,
Type[MsgTransport],
] = {
('msgpack', 'tcp'): MsgpackTCPStream,
('msgpack', 'uds'): MsgpackUDSStream,
}
# convert an Address wrapper to its corresponding transport type
_addr_to_transport: dict[
Type[TCPAddress|UDSAddress],
Type[MsgTransport]
] = {
TCPAddress: MsgpackTCPStream,
UDSAddress: MsgpackUDSStream,
}
def transport_from_addr(
addr: Address,
codec_key: str = 'msgpack',
) -> Type[MsgTransport]:
'''
Given a destination address and a desired codec, find the
corresponding `MsgTransport` type.
'''
try:
return _addr_to_transport[type(addr)]
except KeyError:
raise NotImplementedError(
f'No known transport for address {repr(addr)}'
)
def transport_from_stream(
stream: trio.abc.Stream,
codec_key: str = 'msgpack'
) -> Type[MsgTransport]:
'''
Given an arbitrary `trio.abc.Stream` and a desired codec,
find the corresponding `MsgTransport` type.
'''
transport = None
if isinstance(stream, trio.SocketStream):
sock: socket.socket = stream.socket
match sock.family:
case socket.AF_INET | socket.AF_INET6:
transport = 'tcp'
case socket.AF_UNIX:
transport = 'uds'
case _:
raise NotImplementedError(
f'Unsupported socket family: {sock.family}'
)
if not transport:
raise NotImplementedError(
f'Could not figure out transport type for stream type {type(stream)}'
)
key = (codec_key, transport)
return _key_to_transport[key]

422
tractor/ipc/_uds.py 100644

@ -0,0 +1,422 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Unix Domain Socket implementation of tractor.ipc._transport.MsgTransport protocol
'''
from __future__ import annotations
from pathlib import Path
import os
from socket import (
AF_UNIX,
SOCK_STREAM,
SO_PASSCRED,
SO_PEERCRED,
SOL_SOCKET,
)
import struct
from typing import (
TYPE_CHECKING,
ClassVar,
)
import msgspec
import trio
from trio import (
socket,
SocketListener,
)
from trio._highlevel_open_unix_stream import (
close_on_error,
has_unix,
)
from tractor.msg import MsgCodec
from tractor.log import get_logger
from tractor.ipc._transport import (
MsgpackTransport,
)
from .._state import (
get_rt_dir,
current_actor,
is_root_process,
)
if TYPE_CHECKING:
from ._runtime import Actor
log = get_logger(__name__)
def unwrap_sockpath(
sockpath: Path,
) -> tuple[Path, Path]:
return (
sockpath.parent,
sockpath.name,
)
class UDSAddress(
msgspec.Struct,
frozen=True,
):
filedir: str|Path|None
filename: str|Path
maybe_pid: int|None = None
# TODO, maybe we should use better field and value
# -[x] really this is a `.protocol_key` not a "name" of anything.
# -[ ] consider a 'unix' proto-key instead?
# -[ ] need to check what other mult-transport frameworks do
# like zmq, nng, uri-spec et al!
proto_key: ClassVar[str] = 'uds'
unwrapped_type: ClassVar[type] = tuple[str, int]
def_bindspace: ClassVar[Path] = get_rt_dir()
@property
def bindspace(self) -> Path:
'''
We replicate the "ip-set-of-hosts" part of a UDS socket as
just the sub-directory in which we allocate socket files.
'''
return (
self.filedir
or
self.def_bindspace
# or
# get_rt_dir()
)
@property
def sockpath(self) -> Path:
return self.bindspace / self.filename
@property
def is_valid(self) -> bool:
'''
We block socket files not allocated under the runtime subdir.
'''
return self.bindspace in self.sockpath.parents
@classmethod
def from_addr(
cls,
addr: (
tuple[Path|str, Path|str]|Path|str
),
) -> UDSAddress:
match addr:
case tuple()|list():
filedir = Path(addr[0])
filename = Path(addr[1])
return UDSAddress(
filedir=filedir,
filename=filename,
# maybe_pid=pid,
)
# NOTE, in case we ever decide to just `.unwrap()`
# to a `Path|str`?
case str()|Path():
sockpath: Path = Path(addr)
return UDSAddress(*unwrap_sockpath(sockpath))
case _:
# import pdbp; pdbp.set_trace()
raise TypeError(
f'Bad unwrapped-address for {cls} !\n'
f'{addr!r}\n'
)
def unwrap(self) -> tuple[str, int]:
# XXX NOTE, since this gets passed DIRECTLY to
# `.ipc._uds.open_unix_socket_w_passcred()`
return (
str(self.filedir),
str(self.filename),
)
@classmethod
def get_random(
cls,
bindspace: Path|None = None, # default netns
) -> UDSAddress:
filedir: Path = bindspace or cls.def_bindspace
pid: int = os.getpid()
actor: Actor|None = current_actor(
err_on_no_runtime=False,
)
if actor:
sockname: str = '::'.join(actor.uid) + f'@{pid}'
else:
prefix: str = '<unknown-actor>'
if is_root_process():
prefix: str = 'root'
sockname: str = f'{prefix}@{pid}'
sockpath: Path = Path(f'{sockname}.sock')
return UDSAddress(
filedir=filedir,
filename=sockpath,
maybe_pid=pid,
)
@classmethod
def get_root(cls) -> UDSAddress:
def_uds_filename: Path = 'registry@1616.sock'
return UDSAddress(
filedir=cls.def_bindspace,
filename=def_uds_filename,
# maybe_pid=1616,
)
# ?TODO, maybe we should just use our `.msg.pretty_struct.Struct` for
# this instead?
# -[ ] is it too "multi-line"y tho?
# the compact tuple/.unwrapped() form is simple enough?
#
def __repr__(self) -> str:
if not (pid := self.maybe_pid):
pid: str = '<unknown-peer-pid>'
body: str = (
f'({self.filedir}, {self.filename}, {pid})'
)
return (
f'{type(self).__name__}'
f'['
f'{body}'
f']'
)
async def start_listener(
addr: UDSAddress,
**kwargs,
) -> SocketListener:
# sock = addr._sock = socket.socket(
sock = socket.socket(
socket.AF_UNIX,
socket.SOCK_STREAM
)
log.info(
f'Attempting to bind UDS socket\n'
f'>[\n'
f'|_{addr}\n'
)
bindpath: Path = addr.sockpath
try:
await sock.bind(str(bindpath))
except (
FileNotFoundError,
) as fdne:
raise ConnectionError(
f'Bad UDS socket-filepath-as-address ??\n'
f'{addr}\n'
f' |_sockpath: {addr.sockpath}\n'
) from fdne
sock.listen(1)
log.info(
f'Listening on UDS socket\n'
f'[>\n'
f' |_{addr}\n'
)
return SocketListener(sock)
def close_listener(
addr: UDSAddress,
lstnr: SocketListener,
) -> None:
'''
Close and remove the listening unix socket's path.
'''
lstnr.socket.close()
os.unlink(addr.sockpath)
async def open_unix_socket_w_passcred(
filename: str|bytes|os.PathLike[str]|os.PathLike[bytes],
) -> trio.SocketStream:
'''
Literally the exact same as `trio.open_unix_socket()` except we set the additional
`socket.SO_PASSCRED` option to ensure the server side (the process calling `accept()`)
can extract the connecting peer's credentials, namely OS-specific process
related IDs.
See this SO for "why" the extra opts,
- https://stackoverflow.com/a/7982749
'''
if not has_unix:
raise RuntimeError("Unix sockets are not supported on this platform")
# much more simplified logic vs tcp sockets - one socket type and only one
# possible location to connect to
sock = trio.socket.socket(AF_UNIX, SOCK_STREAM)
sock.setsockopt(SOL_SOCKET, SO_PASSCRED, 1)
with close_on_error(sock):
await sock.connect(os.fspath(filename))
return trio.SocketStream(sock)
def get_peer_info(sock: trio.socket.socket) -> tuple[
int, # pid
int, # uid
int, # guid
]:
'''
Deliver the connecting peer's "credentials"-info as defined in
a very Linux specific way..
For more details see,
- `man accept`,
- `man unix`,
this great online guide to all things sockets,
- https://beej.us/guide/bgnet/html/split-wide/man-pages.html#setsockoptman
AND this **wonderful SO answer**
- https://stackoverflow.com/a/7982749
'''
creds: bytes = sock.getsockopt(
SOL_SOCKET,
SO_PEERCRED,
struct.calcsize('3i')
)
# i.e a tuple of the fields,
# pid: int, "process"
# uid: int, "user"
# gid: int, "group"
return struct.unpack('3i', creds)
class MsgpackUDSStream(MsgpackTransport):
'''
A `trio.SocketStream` around a Unix-Domain-Socket transport
delivering `msgpack` encoded msgs using the `msgspec` codec lib.
'''
address_type = UDSAddress
layer_key: int = 4
@property
def maddr(self) -> str:
if not self.raddr:
return '<unknown-peer>'
filepath: Path = Path(self.raddr.unwrap()[0])
return (
f'/{self.address_type.proto_key}/{filepath}'
# f'/{self.chan.uid[0]}'
# f'/{self.cid}'
# f'/cid={cid_head}..{cid_tail}'
# TODO: ? not use this ^ right ?
)
def connected(self) -> bool:
return self.stream.socket.fileno() != -1
@classmethod
async def connect_to(
cls,
addr: UDSAddress,
prefix_size: int = 4,
codec: MsgCodec|None = None,
**kwargs
) -> MsgpackUDSStream:
sockpath: Path = addr.sockpath
#
# ^XXX NOTE, we don't provide any out-of-band `.pid` info
# (like, over the socket as extra msgs) since the (augmented)
# `.setsockopt()` call tells the OS to provide it; the client
# pid can then be read on server/listen() side via
# `get_peer_info()` above.
try:
stream = await open_unix_socket_w_passcred(
str(sockpath),
**kwargs
)
except (
FileNotFoundError,
) as fdne:
raise ConnectionError(
f'Bad UDS socket-filepath-as-address ??\n'
f'{addr}\n'
f' |_sockpath: {sockpath}\n'
) from fdne
stream = MsgpackUDSStream(
stream,
prefix_size=prefix_size,
codec=codec
)
stream._raddr = addr
return stream
@classmethod
def get_stream_addrs(
cls,
stream: trio.SocketStream
) -> tuple[
Path,
int,
]:
sock: trio.socket.socket = stream.socket
# NOTE XXX, it's unclear why one or the other ends up being
# `bytes` versus the socket-file-path, i presume it's
# something to do with who is the server (called `.listen()`)?
# maybe could be better implemented using another info-query
# on the socket like,
# https://beej.us/guide/bgnet/html/split-wide/system-calls-or-bust.html#gethostnamewho-am-i
sockname: str|bytes = sock.getsockname()
# https://beej.us/guide/bgnet/html/split-wide/system-calls-or-bust.html#getpeernamewho-are-you
peername: str|bytes = sock.getpeername()
match (peername, sockname):
case (str(), bytes()):
sock_path: Path = Path(peername)
case (bytes(), str()):
sock_path: Path = Path(sockname)
(
peer_pid,
_,
_,
) = get_peer_info(sock)
filedir, filename = unwrap_sockpath(sock_path)
laddr = UDSAddress(
filedir=filedir,
filename=filename,
maybe_pid=os.getpid(),
)
raddr = UDSAddress(
filedir=filedir,
filename=filename,
maybe_pid=peer_pid
)
return (laddr, raddr)
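The `SO_PEERCRED` read in `get_peer_info()` above can be exercised without any listener at all: a `socketpair()` connects two AF_UNIX ends inside one process, so the "peer" credentials come back as our own pid/uid/gid (Linux-only sketch):

```python
import os
import socket
import struct

def peer_creds(sock: socket.socket) -> tuple[int, int, int]:
    # read the kernel's `struct ucred` (pid, uid, gid) for the connected peer
    creds: bytes = sock.getsockopt(
        socket.SOL_SOCKET,
        socket.SO_PEERCRED,
        struct.calcsize('3i'),
    )
    return struct.unpack('3i', creds)

left, right = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
pid, uid, gid = peer_creds(left)
assert (pid, uid, gid) == (os.getpid(), os.getuid(), os.getgid())
left.close()
right.close()
```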


@ -0,0 +1,15 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.


@ -0,0 +1,316 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Reimplementation of multiprocessing.reduction.sendfds & recvfds, using acms and trio.
cpython impl:
https://github.com/python/cpython/blob/275056a7fdcbe36aaac494b4183ae59943a338eb/Lib/multiprocessing/reduction.py#L138
'''
import os
import array
import tempfile
from uuid import uuid4
from pathlib import Path
from typing import AsyncContextManager
from contextlib import asynccontextmanager as acm
import trio
import tractor
from trio import socket
log = tractor.log.get_logger(__name__)
class FDSharingError(Exception):
...
@acm
async def send_fds(fds: list[int], sock_path: str) -> AsyncContextManager[None]:
'''
Async trio reimplementation of `multiprocessing.reduction.sendfds`
https://github.com/python/cpython/blob/275056a7fdcbe36aaac494b4183ae59943a338eb/Lib/multiprocessing/reduction.py#L142
It's implemented as an async context manager in order to simplify usage
with `tractor.context`s: we can open a context in a remote actor that uses
this acm inside of it, and use `ctx.started()` to signal the original
caller actor to perform the `recv_fds` call.
See `tractor.ipc._ringbuf._ringd._attach_to_ring` for an example.
'''
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
await sock.bind(sock_path)
sock.listen(1)
yield # socket is setup, ready for receiver connect
# wait until receiver connects
conn, _ = await sock.accept()
# setup int array for fds
fds = array.array('i', fds)
# first byte of msg will be len of fds to send % 256, acting as a fd amount
# verification on `recv_fds` we refer to it as `check_byte`
msg = bytes([len(fds) % 256])
# send msg with custom SCM_RIGHTS type
await conn.sendmsg(
[msg],
[(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)]
)
# finally wait receiver ack
if await conn.recv(1) != b'A':
raise FDSharingError('did not receive acknowledgement of fd')
conn.close()
sock.close()
os.unlink(sock_path)
async def recv_fds(sock_path: str, amount: int) -> tuple:
'''
Async trio reimplementation of `multiprocessing.reduction.recvfds`
https://github.com/python/cpython/blob/275056a7fdcbe36aaac494b4183ae59943a338eb/Lib/multiprocessing/reduction.py#L150
It's equivalent to the stdlib version except it uses `trio.open_unix_socket`
for connecting and differs in its error handling.
See `tractor.ipc._ringbuf._ringd._attach_to_ring` for an example.
'''
stream = await trio.open_unix_socket(sock_path)
sock = stream.socket
# prepare int array for fds
a = array.array('i')
bytes_size = a.itemsize * amount
# receive 1 byte + space necessary for an SCM_RIGHTS msg for {amount} fds
msg, ancdata, flags, addr = await sock.recvmsg(
1, socket.CMSG_SPACE(bytes_size)
)
# maybe failed to receive msg?
if not msg and not ancdata:
raise FDSharingError(f'Expected to receive {amount} fds from {sock_path}, but got EOF')
# send ack; the stdlib comment mentions this ack pattern was to get around
# an old macOS bug, but they are not sure if it's necessary any more; in
# any case it's not a bad pattern to keep
await sock.send(b'A') # Ack
# expect to receive only one `ancdata` item
if len(ancdata) != 1:
raise FDSharingError(
f'Expected to receive exactly one \"ancdata\" but got {len(ancdata)}: {ancdata}'
)
# unpack SCM_RIGHTS msg
cmsg_level, cmsg_type, cmsg_data = ancdata[0]
# check proper msg type
if cmsg_level != socket.SOL_SOCKET:
raise FDSharingError(
f'Expected CMSG level to be SOL_SOCKET({socket.SOL_SOCKET}) but got {cmsg_level}'
)
if cmsg_type != socket.SCM_RIGHTS:
raise FDSharingError(
f'Expected CMSG type to be SCM_RIGHTS({socket.SCM_RIGHTS}) but got {cmsg_type}'
)
# check proper data alignment
length = len(cmsg_data)
if length % a.itemsize != 0:
raise FDSharingError(
f'CMSG data alignment error: len of {length} is not divisible by int size {a.itemsize}'
)
# attempt to cast as int array
a.frombytes(cmsg_data)
# validate length check byte
valid_check_byte = amount % 256  # check byte according to the `recv_fds` caller
recvd_check_byte = msg[0]  # actual received check byte
payload_check_byte = len(a) % 256  # check byte according to the received fd int array
if recvd_check_byte != payload_check_byte:
raise FDSharingError(
'Validation failed: received check byte '
f'({recvd_check_byte}) does not match fd int array len % 256 ({payload_check_byte})'
)
if valid_check_byte != recvd_check_byte:
raise FDSharingError(
'Validation failed: received check byte '
f'({recvd_check_byte}) does not match expected fd amount % 256 ({valid_check_byte})'
)
return tuple(a)
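Stripped of the trio/acm machinery, the SCM_RIGHTS dance both functions implement fits in a few synchronous lines; the sketch below passes a pipe's read-end between the two halves of a `socketpair()` and proves the duplicated fd still works:

```python
import array
import os
import socket

def pass_fd_demo() -> bytes:
    r, w = os.pipe()
    left, right = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    # sender: one check byte of normal data + the fd in an SCM_RIGHTS msg
    fds = array.array('i', [r])
    left.sendmsg(
        [bytes([len(fds) % 256])],
        [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)],
    )
    # receiver: 1 byte of normal data + space for the ancillary payload
    msg, ancdata, flags, addr = right.recvmsg(1, socket.CMSG_SPACE(fds.itemsize))
    level, ctype, data = ancdata[0]
    assert (level, ctype) == (socket.SOL_SOCKET, socket.SCM_RIGHTS)
    recvd = array.array('i')
    recvd.frombytes(data)
    new_r = recvd[0]  # a *new* fd number referring to the same pipe
    os.write(w, b'hello')
    out = os.read(new_r, 5)
    for fd in (r, w, new_r):
        os.close(fd)
    left.close()
    right.close()
    return out

assert pass_fd_demo() == b'hello'
```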
'''
Share FD actor module
Add "tractor.linux._fdshare" to enabled modules on actors to allow sharing of
FDs with other actors.
Use `share_fds` function to register a set of fds with a name, then other
actors can use `request_fds_from` function to retrieve the fds.
Use `unshare_fds` to disable sharing of a set of FDs.
'''
FDType = tuple[int]
_fds: dict[str, FDType] = {}
def maybe_get_fds(name: str) -> FDType | None:
'''
Get registered FDs with a given name or return None
'''
return _fds.get(name, None)
def get_fds(name: str) -> FDType:
'''
Get registered FDs with a given name or raise
'''
fds = maybe_get_fds(name)
if not fds:
raise RuntimeError(f'No FDs with name {name} found!')
return fds
def share_fds(
name: str,
fds: tuple[int],
) -> None:
'''
Register a set of fds to be shared under a given name.
'''
this_actor = tractor.current_actor()
if __name__ not in this_actor.enable_modules:
raise RuntimeError(
f'Tried to share FDs {fds} with name {name}, but '
f'module {__name__} is not enabled in actor {this_actor.name}!'
)
maybe_fds = maybe_get_fds(name)
if maybe_fds:
raise RuntimeError(f'share FDs: {maybe_fds} already tied to name {name}')
_fds[name] = fds
def unshare_fds(name: str) -> None:
'''
Unregister a set of fds to disable sharing them.
'''
get_fds(name) # raise if not exists
del _fds[name]
@tractor.context
async def _pass_fds(
ctx: tractor.Context,
name: str,
sock_path: str
) -> None:
'''
Endpoint to request a set of FDs from current actor, will use `ctx.started`
to send original FDs, then `send_fds` will block until remote side finishes
the `recv_fds` call.
'''
# get fds or raise error
fds = get_fds(name)
# start fd passing context using socket on `sock_path`
async with send_fds(fds, sock_path):
# send original fds through ctx.started
await ctx.started(fds)
async def request_fds_from(
actor_name: str,
fds_name: str
) -> FDType:
'''
Use this function to retrieve shared FDs from `actor_name`.
'''
this_actor = tractor.current_actor()
# create a temporary path for the UDS sock
sock_path = str(
Path(tempfile.gettempdir())
/
f'{fds_name}-from-{actor_name}-to-{this_actor.name}.sock'
)
# having a socket path length > ~100 chars can cause:
# OSError: AF_UNIX path too long
# https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_un.h.html#tag_13_67_04
# attempt sock path creation with smaller names
if len(sock_path) > 100:
sock_path = str(
Path(tempfile.gettempdir())
/
f'{fds_name}-to-{this_actor.name}.sock'
)
if len(sock_path) > 100:
# just use uuid4
sock_path = str(
Path(tempfile.gettempdir())
/
f'pass-fds-{uuid4()}.sock'
)
async with (
tractor.find_actor(actor_name) as portal,
portal.open_context(
_pass_fds,
name=fds_name,
sock_path=sock_path
) as (ctx, fds_info),
):
# get original FDs
og_fds = fds_info
# retrieve copies of FDs
fds = await recv_fds(sock_path, len(og_fds))
log.info(
f'{this_actor.name} received fds: {og_fds} -> {fds}'
)
return fds
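The `> 100` guard above exists because Linux caps `sun_path` at 108 bytes; binding anything longer fails immediately, which is easy to check:

```python
import socket

long_path = '/tmp/' + 'x' * 200 + '.sock'
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    s.bind(long_path)
    bound = True
except OSError:
    # CPython rejects the address with 'AF_UNIX path too long'
    bound = False
finally:
    s.close()
assert not bound
```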


@ -14,7 +14,7 @@
# You should have received a copy of the GNU Affero General Public License # You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>. # along with this program. If not, see <https://www.gnu.org/licenses/>.
''' '''
Linux specifics, for now we are only exposing EventFD Expose libc eventfd APIs
''' '''
import os import os
@ -108,6 +108,10 @@ def close_eventfd(fd: int) -> int:
raise OSError(errno.errorcode[ffi.errno], 'close failed') raise OSError(errno.errorcode[ffi.errno], 'close failed')
class EFDReadCancelled(Exception):
...
class EventFD: class EventFD:
''' '''
Use a previously opened eventfd(2), meant to be used in Use a previously opened eventfd(2), meant to be used in
@ -124,27 +128,83 @@ class EventFD:
self._fd: int = fd self._fd: int = fd
self._omode: str = omode self._omode: str = omode
self._fobj = None self._fobj = None
self._cscope: trio.CancelScope | None = None
self._is_closed: bool = True
self._read_lock = trio.StrictFIFOLock()
@property
def closed(self) -> bool:
return self._is_closed
@property @property
def fd(self) -> int | None: def fd(self) -> int | None:
return self._fd return self._fd
def write(self, value: int) -> int: def write(self, value: int) -> int:
if self.closed:
raise trio.ClosedResourceError
return write_eventfd(self._fd, value) return write_eventfd(self._fd, value)
async def read(self) -> int: async def read(self) -> int:
'''
Async wrapper for `read_eventfd(self.fd)`
`trio.to_thread.run_sync` is used, need to use a `trio.CancelScope`
in order to make it cancellable when `self.close()` is called.
'''
if self.closed:
raise trio.ClosedResourceError
if self._read_lock.locked():
raise trio.BusyResourceError
async with self._read_lock:
self._cscope = trio.CancelScope()
with self._cscope:
try:
return await trio.to_thread.run_sync( return await trio.to_thread.run_sync(
read_eventfd, self._fd, read_eventfd, self._fd,
abandon_on_cancel=True abandon_on_cancel=True
) )
except OSError as e:
if e.errno != errno.EBADF:
raise
raise trio.BrokenResourceError
if self._cscope.cancelled_caught:
raise EFDReadCancelled
self._cscope = None
def read_nowait(self) -> int:
'''
Direct call to `read_eventfd(self.fd)`, unless `eventfd` was
opened with `EFD_NONBLOCK` it will block the thread.
'''
return read_eventfd(self._fd)
def open(self): def open(self):
self._fobj = os.fdopen(self._fd, self._omode) self._fobj = os.fdopen(self._fd, self._omode)
self._is_closed = False
def close(self): def close(self):
if self._fobj: if self._fobj:
try:
self._fobj.close() self._fobj.close()
except OSError:
...
if self._cscope:
self._cscope.cancel()
self._is_closed = True
def __enter__(self): def __enter__(self):
self.open() self.open()
return self return self


@ -39,13 +39,11 @@ from contextvars import (
) )
import textwrap import textwrap
from typing import ( from typing import (
Any,
Callable,
Protocol,
Type,
TYPE_CHECKING, TYPE_CHECKING,
TypeVar, Any,
Type,
Union, Union,
Callable,
) )
from types import ModuleType from types import ModuleType
@ -54,6 +52,13 @@ from msgspec import (
msgpack, msgpack,
Raw, Raw,
) )
from msgspec.inspect import (
CustomType,
UnionType,
SetType,
ListType,
TupleType
)
# TODO: see notes below from @mikenerone.. # TODO: see notes below from @mikenerone..
# from tricycle import TreeVar # from tricycle import TreeVar
@ -81,7 +86,7 @@ class MsgDec(Struct):
''' '''
_dec: msgpack.Decoder _dec: msgpack.Decoder
# _ext_types_box: Struct|None = None _ext_types_boxes: dict[Type, Struct] = {}
@property @property
def dec(self) -> msgpack.Decoder: def dec(self) -> msgpack.Decoder:
@ -226,6 +231,8 @@ def mk_dec(
f'ext_types = {ext_types!r}\n' f'ext_types = {ext_types!r}\n'
) )
_boxed_structs: dict[Type, Struct] = {}
if dec_hook: if dec_hook:
if ext_types is None: if ext_types is None:
raise TypeError( raise TypeError(
@ -237,17 +244,15 @@ def mk_dec(
f'ext_types = {ext_types!r}\n' f'ext_types = {ext_types!r}\n'
) )
# XXX, i *thought* we would require a boxing struct as per docs, if len(ext_types) > 1:
# https://jcristharif.com/msgspec/extending.html#mapping-to-from-native-types _boxed_structs = mk_boxed_ext_structs(ext_types)
# |_ see comment, ext_types = [
# > Note that typed deserialization is required for etype
# > successful roundtripping here, so we pass `MyMessage` to for etype in ext_types
# > `Decoder`. if etype not in _boxed_structs
# ]
# BUT, turns out as long as you spec a union with `Raw` it ext_types += list(_boxed_structs.values())
# will work? kk B)
#
# maybe_box_struct = mk_boxed_ext_struct(ext_types)
spec = Raw | Union[*ext_types] spec = Raw | Union[*ext_types]
return MsgDec( return MsgDec(
@ -255,29 +260,26 @@ def mk_dec(
type=spec, # like `MsgType[Any]` type=spec, # like `MsgType[Any]`
dec_hook=dec_hook, dec_hook=dec_hook,
), ),
_ext_types_boxes=_boxed_structs
) )
# TODO? remove since didn't end up needing this? def mk_boxed_ext_structs(
def mk_boxed_ext_struct(
ext_types: list[Type], ext_types: list[Type],
) -> Struct: ) -> dict[Type, Struct]:
# NOTE, originally was to wrap non-msgpack-supported "extension box_types: dict[Type, Struct] = {}
# types" in a field-typed boxing struct, see notes around the for ext_type in ext_types:
# `dec_hook()` branch in `mk_dec()`. info = msgspec.inspect.type_info(ext_type)
ext_types_union = Union[*ext_types] if isinstance(info, CustomType):
repr_ext_types_union: str = ( box_types[ext_type] = msgspec.defstruct(
str(ext_types_union) f'Box{ext_type.__name__}',
or tag=True,
"|".join(ext_types)
)
BoxedExtType = msgspec.defstruct(
f'BoxedExts[{repr_ext_types_union}]',
fields=[ fields=[
('boxed', ext_types_union), ('inner', ext_type),
], ],
) )
return BoxedExtType
return box_types
def unpack_spec_types( def unpack_spec_types(
@ -378,7 +380,7 @@ class MsgCodec(Struct):
_dec: msgpack.Decoder _dec: msgpack.Decoder
_pld_spec: Type[Struct]|Raw|Any _pld_spec: Type[Struct]|Raw|Any
# _ext_types_box: Struct|None = None _ext_types_boxes: dict[Type, Struct] = {}
def __repr__(self) -> str: def __repr__(self) -> str:
speclines: str = textwrap.indent( speclines: str = textwrap.indent(
@ -465,45 +467,29 @@ class MsgCodec(Struct):
''' '''
__tracebackhide__: bool = hide_tb __tracebackhide__: bool = hide_tb
try:
box: Struct|None = self._ext_types_boxes.get(type(py_obj), None)
if (
as_ext_type
or
box
):
py_obj = box(inner=py_obj)
if use_buf: if use_buf:
self._enc.encode_into(py_obj, self._buf) self._enc.encode_into(py_obj, self._buf)
return self._buf return self._buf
return self._enc.encode(py_obj) return self._enc.encode(py_obj)
# try:
# return self._enc.encode(py_obj)
# except TypeError as typerr:
# typerr.add_note(
# '|_src error from `msgspec`'
# # f'|_{self._enc.encode!r}'
# )
# raise typerr
# TODO! REMOVE once i'm confident we won't ever need it! except TypeError as typerr:
# typerr.add_note(
# box: Struct = self._ext_types_box '|_src error from `msgspec`'
# if ( # f'|_{self._enc.encode!r}'
# as_ext_type )
# or raise typerr
# (
# # XXX NOTE, auto-detect if the input type
# box
# and
# (ext_types := unpack_spec_types(
# spec=box.__annotations__['boxed'])
# )
# )
# ):
# match py_obj:
# # case PayloadMsg(pld=pld) if (
# # type(pld) in ext_types
# # ):
# # py_obj.pld = box(boxed=py_obj)
# # breakpoint()
# case _ if (
# type(py_obj) in ext_types
# ):
# py_obj = box(boxed=py_obj)
@property @property
def dec(self) -> msgpack.Decoder: def dec(self) -> msgpack.Decoder:
@ -565,11 +551,6 @@ def mk_codec(
enc_hook: Callable|None = None, enc_hook: Callable|None = None,
ext_types: list[Type]|None = None, ext_types: list[Type]|None = None,
# optionally provided msg-decoder from which we pull its,
# |_.dec_hook()
# |_.type
ext_dec: MsgDec|None = None
#
# ?TODO? other params we might want to support # ?TODO? other params we might want to support
# Encoder: # Encoder:
# write_buffer_size=write_buffer_size, # write_buffer_size=write_buffer_size,
@ -597,12 +578,6 @@ def mk_codec(
) )
dec_hook: Callable|None = None dec_hook: Callable|None = None
if ext_dec:
dec: msgspec.Decoder = ext_dec.dec
dec_hook = dec.dec_hook
pld_spec |= dec.type
if ext_types:
pld_spec |= Union[*ext_types]
# (manually) generate a msg-spec (how appropes) for all relevant # (manually) generate a msg-spec (how appropes) for all relevant
# payload-boxing-struct-msg-types, parameterizing the # payload-boxing-struct-msg-types, parameterizing the
@ -630,10 +605,16 @@ def mk_codec(
enc = msgpack.Encoder( enc = msgpack.Encoder(
enc_hook=enc_hook, enc_hook=enc_hook,
) )
boxes = {}
if ext_types and len(ext_types) > 1:
boxes = mk_boxed_ext_structs(ext_types)
codec = MsgCodec( codec = MsgCodec(
_enc=enc, _enc=enc,
_dec=dec, _dec=dec,
_pld_spec=pld_spec, _pld_spec=pld_spec,
_ext_types_boxes=boxes
) )
# sanity on expected backend support # sanity on expected backend support
assert codec.lib.__name__ == libname assert codec.lib.__name__ == libname
@ -809,78 +790,298 @@ def limit_msg_spec(
assert curr_codec is current_codec() assert curr_codec is current_codec()
# XXX: msgspec won't allow this with non-struct custom types '''
# like `NamespacePath`!@! Encoder / Decoder generic hook factory
# @cm
# def extend_msg_spec(
# payload_spec: Union[Type[Struct]],
# ) -> MsgCodec: '''
# '''
# Extend the current `MsgCodec.pld_spec` (type set) by extending
# the payload spec to **include** the types specified by
# `payload_spec`.
# '''
# codec: MsgCodec = current_codec()
# pld_spec: Union[Type] = codec.pld_spec
# extended_spec: Union[Type] = pld_spec|payload_spec
# with limit_msg_spec(payload_types=extended_spec) as ext_codec:
# # import pdbp; pdbp.set_trace()
# assert ext_codec.pld_spec == extended_spec
# yield ext_codec
#
# ^-TODO-^ is it impossible to make something like this orr!?
# TODO: make an auto-custom hook generator from a set of input custom
# types?
# -[ ] below is a proto design using a `TypeCodec` idea?
#
# type var for the expected interchange-lib's
# IPC-transport type when not available as a built-in
# serialization output.
WireT = TypeVar('WireT')
# TODO: some kinda (decorator) API for built-in subtypes # builtins we can have in same pld_spec as custom types
# that builds this implicitly by inspecting the `mro()`? default_builtins = (
class TypeCodec(Protocol): None,
bool,
int,
float,
bytes,
list
)
# spec definition type
TypeSpec = (
Type |
Union[Type] |
list[Type] |
tuple[Type] |
set[Type]
)
class TypeCodec:
''' '''
A per-custom-type wire-transport serialization translator This class describes a way of encoding to or decoding from a "wire type",
description type. objects that have `encode_fn` and `decode_fn` can be used with
`.encode/.decode`.
''' '''
src_type: Type
wire_type: WireT
def encode(obj: Type) -> WireT: def __init__(
... self,
wire_type: Type,
decode_fn: str,
encode_fn: str = 'encode',
):
self._encode_fn: str = encode_fn
self._decode_fn: str = decode_fn
self._wire_type: Type = wire_type
def decode( def __repr__(self) -> str:
obj_type: Type[WireT], return (
obj: WireT, f'{type(self).__name__}('
) -> Type: f'{self._encode_fn}, '
... f'{self._decode_fn}) '
f'-> {self._wire_type}'
)
@property
def encode_fn(self) -> str:
return self._encode_fn
@property
def decode_fn(self) -> str:
return self._decode_fn
@property
def wire_type(self) -> str:
return self._wire_type
def is_type_compat(self, obj: any) -> bool:
return (
hasattr(obj, self._encode_fn)
and
hasattr(obj, self._decode_fn)
)
def encode(self, obj: any) -> any:
return getattr(obj, self._encode_fn)()
def decode(self, cls: Type, raw: any) -> any:
return getattr(cls, self._decode_fn)(raw)
class MsgpackTypeCodec(TypeCodec): '''
... Default codec descriptions for wire types:
- bytes
- str
- int
'''
def mk_codec_hooks( BytesCodec = TypeCodec(
type_codecs: list[TypeCodec], decode_fn='from_bytes',
wire_type=bytes
)
) -> tuple[Callable, Callable]:
StrCodec = TypeCodec(
decode_fn='from_str',
wire_type=str
)
IntCodec = TypeCodec(
decode_fn='from_int',
wire_type=int
)
default_codecs: dict[Type, TypeCodec] = {
bytes: BytesCodec,
str: StrCodec,
int: IntCodec
}
def mk_spec_set(
spec: TypeSpec
) -> set[Type]:
''' '''
Deliver a `enc_hook()`/`dec_hook()` pair which handle Given any of the different spec definitions, always return a `set[Type]`
manual convertion from an input `Type` set such that whenever with each spec type as an item.
the `TypeCodec.filter()` predicate matches the
`TypeCodec.decode()` is called on the input native object by - When passed list|tuple|set do nothing
the `dec_hook()` and whenever the - When passed a single type we wrap it in tuple
`isiinstance(obj, TypeCodec.type)` matches against an - When passed a Union we wrap its inner types in tuple
`enc_hook(obj=obj)` the return value is taken from a
`TypeCodec.encode(obj)` callback.
''' '''
... if not (
isinstance(spec, set)
or
isinstance(spec, list)
or
isinstance(spec, tuple)
):
spec_info = msgspec.inspect.type_info(spec)
match spec_info:
case UnionType():
return set((
t.cls
for t in spec_info.types
))
case _:
return set((spec, ))
return set(spec)
def mk_codec_map_from_spec(
    spec: TypeSpec,
    codecs: dict[Type, TypeCodec] = default_codecs
) -> dict[Type, TypeCodec]:
    '''
    Generate a map of spec type -> supported codec

    '''
    spec: set[Type] = mk_spec_set(spec)

    spec_codecs: dict[Type, TypeCodec] = {}
    for t in spec:
        if t in spec_codecs:
            continue

        for codec_type in (int, bytes, str):
            codec = codecs[codec_type]
            if codec.is_type_compat(t):
                spec_codecs[t] = codec
                break

    return spec_codecs
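The matching above is purely duck-typed: a spec type is "compatible" with a wire codec when it exposes the codec's expected encode/decode method names. A standalone sketch (the `Token` type and the `to_bytes` encoder name are illustrative assumptions, not from the PR):

```python
class Token:
    '''Illustrative custom type following a to_*/from_* naming pair.'''
    def __init__(self, raw: bytes):
        self.raw = raw

    def to_bytes(self) -> bytes:
        return self.raw

    @classmethod
    def from_bytes(cls, raw: bytes) -> 'Token':
        return cls(raw)

def is_type_compat(t: type, encode_fn: str, decode_fn: str) -> bool:
    # match purely on attribute names, mirroring `TypeCodec.is_type_compat()`
    return hasattr(t, encode_fn) and hasattr(t, decode_fn)

assert is_type_compat(Token, 'to_bytes', 'from_bytes')
assert not is_type_compat(Token, 'to_str', 'from_str')
```

Note that builtin `int` also satisfies the `to_bytes`/`from_bytes` pair, which is why the codec map is consulted before the builtin passthrough.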
def mk_enc_hook(
    spec: TypeSpec,
    with_builtins: bool = True,
    builtins: set[Type] = default_builtins,
    codecs: dict[Type, TypeCodec] = default_codecs
) -> Callable:
    '''
    Given a type specification return a msgspec enc_hook fn

    '''
    spec_codecs = mk_codec_map_from_spec(spec)

    def enc_hook(obj: any) -> any:
        try:
            t = type(obj)

            maybe_codec = spec_codecs.get(t, None)
            if maybe_codec:
                return maybe_codec.encode(obj)

            # passthrough builtins
            if builtins and t in builtins:
                return obj

            raise NotImplementedError(
                f"Objects of type {type(obj)} are not supported:\n{obj}"
            )

        except* Exception as e:
            e.add_note(f'enc_hook: {t}, {type(obj)} {obj}')
            raise

    return enc_hook
def mk_dec_hook(
    spec: TypeSpec,
    with_builtins: bool = True,
    builtins: set[Type] = default_builtins,
    codecs: dict[Type, TypeCodec] = default_codecs
) -> Callable:
    '''
    Given a type specification return a msgspec dec_hook fn

    '''
    spec_codecs = mk_codec_map_from_spec(spec)

    def dec_hook(t: Type, obj: any) -> any:
        try:
            if t is type(obj):
                return obj

            maybe_codec = spec_codecs.get(t, None)
            if maybe_codec:
                return maybe_codec.decode(t, obj)

            # passthrough builtins
            if builtins and type(obj) in builtins:
                return obj

            raise NotImplementedError(
                f"Objects of type {t} are not supported from {obj}"
            )

        except* Exception as e:
            e.add_note(f'dec_hook: {t}, {type(obj)} {obj}')
            raise

    return dec_hook
def mk_codec_hooks(*args, **kwargs) -> tuple[Callable, Callable]:
    '''
    Given a type specification return a msgspec enc & dec hook fn pair

    '''
    return (
        mk_enc_hook(*args, **kwargs),
        mk_dec_hook(*args, **kwargs)
    )


def mk_codec_from_spec(
    spec: TypeSpec,
    with_builtins: bool = True,
    builtins: set[Type] = default_builtins,
    codecs: dict[Type, TypeCodec] = default_codecs
) -> MsgCodec:
    '''
    Given a type specification return a MsgCodec

    '''
    spec: set[Type] = mk_spec_set(spec)
    return mk_codec(
        enc_hook=mk_enc_hook(
            spec,
            with_builtins=with_builtins,
            builtins=builtins,
            codecs=codecs
        ),
        ext_types=spec
    )


def mk_msgpack_codec(
    spec: TypeSpec,
    with_builtins: bool = True,
    builtins: set[Type] = default_builtins,
    codecs: dict[Type, TypeCodec] = default_codecs
) -> tuple[msgpack.Encoder, msgpack.Decoder]:
    '''
    Get a msgpack Encoder, Decoder pair for a given type spec

    '''
    enc_hook, dec_hook = mk_codec_hooks(
        spec,
        with_builtins=with_builtins,
        builtins=builtins,
        codecs=codecs
    )

    encoder = msgpack.Encoder(enc_hook=enc_hook)
    decoder = msgpack.Decoder(spec, dec_hook=dec_hook)

    return encoder, decoder


@@ -31,6 +31,7 @@ from typing import (
     Type,
     TypeVar,
     TypeAlias,
+    # TYPE_CHECKING,
     Union,
 )
@@ -47,6 +48,7 @@ from tractor.msg import (
     pretty_struct,
 )
 from tractor.log import get_logger
+# from tractor._addr import UnwrappedAddress

 log = get_logger('tractor.msgspec')
@@ -141,9 +143,16 @@ class Aid(
     '''
     name: str
     uuid: str
-    # TODO: use built-in support for UUIDs?
-    # -[ ] `uuid.UUID` which has multi-protocol support
+    pid: int|None = None
+
+    # TODO? can/should we extend this field set?
+    # -[ ] use built-in support for UUIDs? `uuid.UUID` which has
+    #      multi-protocol support
     # https://jcristharif.com/msgspec/supported-types.html#uuid
+    #
+    # -[ ] as per the `.ipc._uds` / `._addr` comments, maybe we
+    #      should also include at least `.pid` (equiv to port for tcp)
+    #      and/or host-part always?
 class SpawnSpec(
@@ -167,8 +176,8 @@ class SpawnSpec(
     # TODO: not just sockaddr pairs?
     # -[ ] abstract into a `TransportAddr` type?
-    reg_addrs: list[tuple[str, int]]
-    bind_addrs: list[tuple[str, int]]
+    reg_addrs: list[tuple[str, str|int]]
+    bind_addrs: list[tuple[str, str|int]]|None

     # TODO: caps based RPC support in the payload?


@@ -32,3 +32,8 @@ from ._broadcast import (
 from ._beg import (
     collapse_eg as collapse_eg,
 )
+from ._ordering import (
+    order_send_channel as order_send_channel,
+    order_receive_channel as order_receive_channel
+)


@@ -70,6 +70,7 @@ async def maybe_open_nursery(
         yield nursery
     else:
         async with lib.open_nursery(**kwargs) as nursery:
-            nursery.cancel_scope.shield = shield
+            if lib == trio:
+                nursery.cancel_scope.shield = shield
             yield nursery


@ -0,0 +1,108 @@
# tractor: structured concurrent "actors".
# Copyright 2018-eternity Tyler Goodlet.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
Helpers to guarantee ordering of messages through a unordered channel
'''
from __future__ import annotations
from heapq import (
heappush,
heappop
)
import trio
import msgspec
class OrderedPayload(msgspec.Struct, frozen=True):
index: int
payload: bytes
@classmethod
def from_msg(cls, msg: bytes) -> OrderedPayload:
return msgspec.msgpack.decode(msg, type=OrderedPayload)
def encode(self) -> bytes:
return msgspec.msgpack.encode(self)
def order_send_channel(
channel: trio.abc.SendChannel[bytes],
start_index: int = 0
):
next_index = start_index
send_lock = trio.StrictFIFOLock()
channel._send = channel.send
channel._aclose = channel.aclose
async def send(msg: bytes):
nonlocal next_index
async with send_lock:
await channel._send(
OrderedPayload(
index=next_index,
payload=msg
).encode()
)
next_index += 1
async def aclose():
async with send_lock:
await channel._aclose()
channel.send = send
channel.aclose = aclose
def order_receive_channel(
channel: trio.abc.ReceiveChannel[bytes],
start_index: int = 0
):
next_index = start_index
pqueue = []
channel._receive = channel.receive
def can_pop_next() -> bool:
return (
len(pqueue) > 0
and
pqueue[0][0] == next_index
)
async def drain_to_heap():
while not can_pop_next():
msg = await channel._receive()
msg = OrderedPayload.from_msg(msg)
heappush(pqueue, (msg.index, msg.payload))
def pop_next():
nonlocal next_index
_, msg = heappop(pqueue)
next_index += 1
return msg
async def receive() -> bytes:
if can_pop_next():
return pop_next()
await drain_to_heap()
return pop_next()
channel.receive = receive
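The heap trick `order_receive_channel` uses can be sketched synchronously: tag each message with a monotonically increasing index, push out-of-order arrivals onto a min-heap, and release entries only while the heap's root holds the next expected index (the `reorder` helper here is illustrative, not part of the module):

```python
from heapq import heappush, heappop

def reorder(tagged_msgs):
    pqueue, next_index, out = [], 0, []
    for index, payload in tagged_msgs:
        heappush(pqueue, (index, payload))
        # drain every message that is now in sequence
        while pqueue and pqueue[0][0] == next_index:
            out.append(heappop(pqueue)[1])
            next_index += 1
    return out

# messages arrive shuffled but leave in index order
arrivals = [(2, 'c'), (0, 'a'), (3, 'd'), (1, 'b')]
assert reorder(arrivals) == ['a', 'b', 'c', 'd']
```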

uv.lock

@ -3,12 +3,30 @@ revision = 1
requires-python = ">=3.11" requires-python = ">=3.11"
[[package]] [[package]]
name = "attrs" name = "async-generator"
version = "24.3.0" version = "1.10"
source = { registry = "https://pypi.org/simple" } source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/48/c8/6260f8ccc11f0917360fc0da435c5c9c7504e3db174d5a12a1494887b045/attrs-24.3.0.tar.gz", hash = "sha256:8f5c07333d543103541ba7be0e2ce16eeee8130cb0b3f9238ab904ce1e85baff", size = 805984 } sdist = { url = "https://files.pythonhosted.org/packages/ce/b6/6fa6b3b598a03cba5e80f829e0dadbb49d7645f523d209b2fb7ea0bbb02a/async_generator-1.10.tar.gz", hash = "sha256:6ebb3d106c12920aaae42ccb6f787ef5eefdcdd166ea3d628fa8476abe712144", size = 29870 }
wheels = [ wheels = [
{ url = "https://files.pythonhosted.org/packages/89/aa/ab0f7891a01eeb2d2e338ae8fecbe57fcebea1a24dbb64d45801bfab481d/attrs-24.3.0-py3-none-any.whl", hash = "sha256:ac96cd038792094f438ad1f6ff80837353805ac950cd2aa0e0625ef19850c308", size = 63397 }, { url = "https://files.pythonhosted.org/packages/71/52/39d20e03abd0ac9159c162ec24b93fbcaa111e8400308f2465432495ca2b/async_generator-1.10-py3-none-any.whl", hash = "sha256:01c7bf666359b4967d2cda0000cc2e4af16a0ae098cbffcb8472fb9e8ad6585b", size = 18857 },
]
[[package]]
name = "attrs"
version = "25.3.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/5a/b0/1367933a8532ee6ff8d63537de4f1177af4bff9f3e829baf7331f595bb24/attrs-25.3.0.tar.gz", hash = "sha256:75d7cefc7fb576747b2c81b4442d4d4a1ce0900973527c011d1030fd3bf4af1b", size = 812032 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/77/06/bb80f5f86020c4551da315d78b3ab75e8228f89f0162f2c3a819e407941a/attrs-25.3.0-py3-none-any.whl", hash = "sha256:427318ce031701fea540783410126f03899a97ffc6f61596ad581ac2e40e3bc3", size = 63815 },
]
[[package]]
name = "bidict"
version = "0.23.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/9a/6e/026678aa5a830e07cd9498a05d3e7e650a4f56a42f267a53d22bcda1bdc9/bidict-0.23.1.tar.gz", hash = "sha256:03069d763bc387bbd20e7d49914e75fc4132a41937fa3405417e1a5a2d006d71", size = 29093 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/99/37/e8730c3587a65eb5645d4aba2d27aae48e8003614d6aaf15dda67f702f1f/bidict-0.23.1-py3-none-any.whl", hash = "sha256:5dae8d4d79b552a71cbabc7deb25dfe8ce710b17ff41711e13010ead2abfc3e5", size = 32764 },
] ]
[[package]] [[package]]
@ -93,44 +111,45 @@ wheels = [
[[package]] [[package]]
name = "greenlet" name = "greenlet"
version = "3.1.1" version = "3.2.0"
source = { registry = "https://pypi.org/simple" } source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/2f/ff/df5fede753cc10f6a5be0931204ea30c35fa2f2ea7a35b25bdaf4fe40e46/greenlet-3.1.1.tar.gz", hash = "sha256:4ce3ac6cdb6adf7946475d7ef31777c26d94bccc377e070a7986bd2d5c515467", size = 186022 } sdist = { url = "https://files.pythonhosted.org/packages/b0/9c/666d8c71b18d0189cf801c0e0b31c4bfc609ac823883286045b1f3ae8994/greenlet-3.2.0.tar.gz", hash = "sha256:1d2d43bd711a43db8d9b9187500e6432ddb4fafe112d082ffabca8660a9e01a7", size = 183685 }
wheels = [ wheels = [
{ url = "https://files.pythonhosted.org/packages/28/62/1c2665558618553c42922ed47a4e6d6527e2fa3516a8256c2f431c5d0441/greenlet-3.1.1-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:e4d333e558953648ca09d64f13e6d8f0523fa705f51cae3f03b5983489958c70", size = 272479 }, { url = "https://files.pythonhosted.org/packages/2d/d3/0a25528e54eca3c57524d2ef1f63283c8c6db466c785218036ab7fc2d4ff/greenlet-3.2.0-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:b99de16560097b9984409ded0032f101f9555e1ab029440fc6a8b5e76dbba7ac", size = 268620 },
{ url = "https://files.pythonhosted.org/packages/76/9d/421e2d5f07285b6e4e3a676b016ca781f63cfe4a0cd8eaecf3fd6f7a71ae/greenlet-3.1.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:09fc016b73c94e98e29af67ab7b9a879c307c6731a2c9da0db5a7d9b7edd1159", size = 640404 }, { url = "https://files.pythonhosted.org/packages/ff/40/f937eb7c1e641ca12089265c57874fcdd173c6c8aabdec3a494641d81eb9/greenlet-3.2.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a0bc5776ac2831c022e029839bf1b9d3052332dcf5f431bb88c8503e27398e31", size = 628787 },
{ url = "https://files.pythonhosted.org/packages/e5/de/6e05f5c59262a584e502dd3d261bbdd2c97ab5416cc9c0b91ea38932a901/greenlet-3.1.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d5e975ca70269d66d17dd995dafc06f1b06e8cb1ec1e9ed54c1d1e4a7c4cf26e", size = 652813 }, { url = "https://files.pythonhosted.org/packages/12/8d/f248691502cb85ce8b18d442032dbde5d3dd16ff2d15593cbee33c40f29c/greenlet-3.2.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1dcb1108449b55ff6bc0edac9616468f71db261a4571f27c47ccf3530a7f8b97", size = 640838 },
{ url = "https://files.pythonhosted.org/packages/49/93/d5f93c84241acdea15a8fd329362c2c71c79e1a507c3f142a5d67ea435ae/greenlet-3.1.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3b2813dc3de8c1ee3f924e4d4227999285fd335d1bcc0d2be6dc3f1f6a318ec1", size = 648517 }, { url = "https://files.pythonhosted.org/packages/d5/f1/2a572bf4fc667e8835ed8c4ef8b729eccd0666ed9e6db8c61c5796fd2dc9/greenlet-3.2.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:82a68a25a08f51fc8b66b113d1d9863ee123cdb0e8f1439aed9fc795cd6f85cf", size = 636760 },
{ url = "https://files.pythonhosted.org/packages/15/85/72f77fc02d00470c86a5c982b8daafdf65d38aefbbe441cebff3bf7037fc/greenlet-3.1.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e347b3bfcf985a05e8c0b7d462ba6f15b1ee1c909e2dcad795e49e91b152c383", size = 647831 }, { url = "https://files.pythonhosted.org/packages/12/d6/f9ecc8dcb17516a0f4ab91df28497303e8d2d090d509fe3e1b1a85b23e90/greenlet-3.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7fee6f518868e8206c617f4084a83ad4d7a3750b541bf04e692dfa02e52e805d", size = 636001 },
{ url = "https://files.pythonhosted.org/packages/f7/4b/1c9695aa24f808e156c8f4813f685d975ca73c000c2a5056c514c64980f6/greenlet-3.1.1-cp311-cp311-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:9e8f8c9cb53cdac7ba9793c276acd90168f416b9ce36799b9b885790f8ad6c0a", size = 602413 }, { url = "https://files.pythonhosted.org/packages/fc/b2/28ab943ff898d6aad3e0ab88fad722c892a43375fabb9789dcc29075da36/greenlet-3.2.0-cp311-cp311-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:6fad8a9ca98b37951a053d7d2d2553569b151cd8c4ede744806b94d50d7f8f73", size = 583936 },
{ url = "https://files.pythonhosted.org/packages/76/70/ad6e5b31ef330f03b12559d19fda2606a522d3849cde46b24f223d6d1619/greenlet-3.1.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:62ee94988d6b4722ce0028644418d93a52429e977d742ca2ccbe1c4f4a792511", size = 1129619 }, { url = "https://files.pythonhosted.org/packages/44/a8/dedd1517fae684c3c08ff53ab8b03e328015da4b52d2bd993279ac3a8c3d/greenlet-3.2.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:0e14541f9024a280adb9645143d6a0a51fda6f7c5695fd96cb4d542bb563442f", size = 1112901 },
{ url = "https://files.pythonhosted.org/packages/f4/fb/201e1b932e584066e0f0658b538e73c459b34d44b4bd4034f682423bc801/greenlet-3.1.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:1776fd7f989fc6b8d8c8cb8da1f6b82c5814957264d1f6cf818d475ec2bf6395", size = 1155198 }, { url = "https://files.pythonhosted.org/packages/45/23/15cf5d4bc864c3dc0dcb708bcaa81cd1a3dc2012326d32ad8a46d77a645e/greenlet-3.2.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:7f163d04f777e7bd229a50b937ecc1ae2a5b25296e6001445e5433e4f51f5191", size = 1138328 },
{ url = "https://files.pythonhosted.org/packages/12/da/b9ed5e310bb8b89661b80cbcd4db5a067903bbcd7fc854923f5ebb4144f0/greenlet-3.1.1-cp311-cp311-win_amd64.whl", hash = "sha256:48ca08c771c268a768087b408658e216133aecd835c0ded47ce955381105ba39", size = 298930 }, { url = "https://files.pythonhosted.org/packages/ba/82/c7cf91e89451a922c049ac1f0123de091260697e26e8b98d299555ad96a5/greenlet-3.2.0-cp311-cp311-win_amd64.whl", hash = "sha256:39801e633a978c3f829f21022501e7b0c3872683d7495c1850558d1a6fb95ed0", size = 295415 },
{ url = "https://files.pythonhosted.org/packages/7d/ec/bad1ac26764d26aa1353216fcbfa4670050f66d445448aafa227f8b16e80/greenlet-3.1.1-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:4afe7ea89de619adc868e087b4d2359282058479d7cfb94970adf4b55284574d", size = 274260 }, { url = "https://files.pythonhosted.org/packages/0e/8d/3c55e88ab01866fb696f68d6c94587a1b7ec8c8a9c56b1383ad05bc14811/greenlet-3.2.0-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:7d08b88ee8d506ca1f5b2a58744e934d33c6a1686dd83b81e7999dfc704a912f", size = 270391 },
{ url = "https://files.pythonhosted.org/packages/66/d4/c8c04958870f482459ab5956c2942c4ec35cac7fe245527f1039837c17a9/greenlet-3.1.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f406b22b7c9a9b4f8aa9d2ab13d6ae0ac3e85c9a809bd590ad53fed2bf70dc79", size = 649064 }, { url = "https://files.pythonhosted.org/packages/8b/6f/4a15185a386992ba4fbb55f88c1a189b75c7ce6e145b43ae4e50754d1969/greenlet-3.2.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:58ef3d637c54e2f079064ca936556c4af3989144e4154d80cfd4e2a59fc3769c", size = 637202 },
{ url = "https://files.pythonhosted.org/packages/51/41/467b12a8c7c1303d20abcca145db2be4e6cd50a951fa30af48b6ec607581/greenlet-3.1.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c3a701fe5a9695b238503ce5bbe8218e03c3bcccf7e204e455e7462d770268aa", size = 663420 }, { url = "https://files.pythonhosted.org/packages/71/f8/60214debfe3b9670bafac97bfc40e318cbddb4ff4b5cf07df119c4a56dcd/greenlet-3.2.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:33ea7e7269d6f7275ce31f593d6dcfedd97539c01f63fbdc8d84e493e20b1b2c", size = 651391 },
{ url = "https://files.pythonhosted.org/packages/27/8f/2a93cd9b1e7107d5c7b3b7816eeadcac2ebcaf6d6513df9abaf0334777f6/greenlet-3.1.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2846930c65b47d70b9d178e89c7e1a69c95c1f68ea5aa0a58646b7a96df12441", size = 658035 }, { url = "https://files.pythonhosted.org/packages/a9/44/fb5e067a728a4df73a30863973912ba6eb01f3d910caaf129ef789ca222d/greenlet-3.2.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e61d426969b68b2170a9f853cc36d5318030494576e9ec0bfe2dc2e2afa15a68", size = 646118 },
{ url = "https://files.pythonhosted.org/packages/57/5c/7c6f50cb12be092e1dccb2599be5a942c3416dbcfb76efcf54b3f8be4d8d/greenlet-3.1.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:99cfaa2110534e2cf3ba31a7abcac9d328d1d9f1b95beede58294a60348fba36", size = 660105 }, { url = "https://files.pythonhosted.org/packages/f0/3e/f329b452869d8bc07dbaa112c0175de5e666a7d15eb243781481fb59b863/greenlet-3.2.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:04e781447a4722e30b4861af728cb878d73a3df79509dc19ea498090cea5d204", size = 648079 },
{ url = "https://files.pythonhosted.org/packages/f1/66/033e58a50fd9ec9df00a8671c74f1f3a320564c6415a4ed82a1c651654ba/greenlet-3.1.1-cp312-cp312-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:1443279c19fca463fc33e65ef2a935a5b09bb90f978beab37729e1c3c6c25fe9", size = 613077 }, { url = "https://files.pythonhosted.org/packages/56/e5/813a2e8e842289579391cbd3ae6e6e6a3d2fcad8bdd89bd549a4035ab057/greenlet-3.2.0-cp312-cp312-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b2392cc41eeed4055978c6b52549ccd9effd263bb780ffd639c0e1e7e2055ab0", size = 603825 },
{ url = "https://files.pythonhosted.org/packages/19/c5/36384a06f748044d06bdd8776e231fadf92fc896bd12cb1c9f5a1bda9578/greenlet-3.1.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:b7cede291382a78f7bb5f04a529cb18e068dd29e0fb27376074b6d0317bf4dd0", size = 1135975 }, { url = "https://files.pythonhosted.org/packages/4a/11/0bad66138622d0c1463b0b87935cefd397f9f04fac325a838525a3aa4da7/greenlet-3.2.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:430cba962c85e339767235a93450a6aaffed6f9c567e73874ea2075f5aae51e1", size = 1119582 },
{ url = "https://files.pythonhosted.org/packages/38/f9/c0a0eb61bdf808d23266ecf1d63309f0e1471f284300ce6dac0ae1231881/greenlet-3.1.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:23f20bb60ae298d7d8656c6ec6db134bca379ecefadb0b19ce6f19d1f232a942", size = 1163955 }, { url = "https://files.pythonhosted.org/packages/17/26/0f8a4d222b9014af88bb8b5d921305308dd44de667c01714817dc9fb91fb/greenlet-3.2.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:5e57ff52315bfc0c5493917f328b8ba3ae0c0515d94524453c4d24e7638cbb53", size = 1147452 },
{ url = "https://files.pythonhosted.org/packages/43/21/a5d9df1d21514883333fc86584c07c2b49ba7c602e670b174bd73cfc9c7f/greenlet-3.1.1-cp312-cp312-win_amd64.whl", hash = "sha256:7124e16b4c55d417577c2077be379514321916d5790fa287c9ed6f23bd2ffd01", size = 299655 }, { url = "https://files.pythonhosted.org/packages/8a/d4/70d262492338c4939f97dca310c45b002a3af84b265720f0e9b135bc85b2/greenlet-3.2.0-cp312-cp312-win_amd64.whl", hash = "sha256:211a9721f540e454a02e62db7956263e9a28a6cf776d4b9a7213844e36426333", size = 296217 },
{ url = "https://files.pythonhosted.org/packages/f3/57/0db4940cd7bb461365ca8d6fd53e68254c9dbbcc2b452e69d0d41f10a85e/greenlet-3.1.1-cp313-cp313-macosx_11_0_universal2.whl", hash = "sha256:05175c27cb459dcfc05d026c4232f9de8913ed006d42713cb8a5137bd49375f1", size = 272990 }, { url = "https://files.pythonhosted.org/packages/c9/43/c0b655d4d7eae19282b028bcec449e5c80626ad0d8d0ca3703f9b1c29258/greenlet-3.2.0-cp313-cp313-macosx_11_0_universal2.whl", hash = "sha256:b86a3ccc865ae601f446af042707b749eebc297928ea7bd0c5f60c56525850be", size = 269131 },
{ url = "https://files.pythonhosted.org/packages/1c/ec/423d113c9f74e5e402e175b157203e9102feeb7088cee844d735b28ef963/greenlet-3.1.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:935e943ec47c4afab8965954bf49bfa639c05d4ccf9ef6e924188f762145c0ff", size = 649175 }, { url = "https://files.pythonhosted.org/packages/7c/7d/c8f51c373c7f7ac0f73d04a6fd77ab34f6f643cb41a0d186d05ba96708e7/greenlet-3.2.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:144283ad88ed77f3ebd74710dd419b55dd15d18704b0ae05935766a93f5671c5", size = 637323 },
{ url = "https://files.pythonhosted.org/packages/a9/46/ddbd2db9ff209186b7b7c621d1432e2f21714adc988703dbdd0e65155c77/greenlet-3.1.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:667a9706c970cb552ede35aee17339a18e8f2a87a51fba2ed39ceeeb1004798a", size = 663425 }, { url = "https://files.pythonhosted.org/packages/89/65/c3ee41b2e56586737d6e124b250583695628ffa6b324855b3a1267a8d1d9/greenlet-3.2.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5be69cd50994b8465c3ad1467f9e63001f76e53a89440ad4440d1b6d52591280", size = 651430 },
{ url = "https://files.pythonhosted.org/packages/bc/f9/9c82d6b2b04aa37e38e74f0c429aece5eeb02bab6e3b98e7db89b23d94c6/greenlet-3.1.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b8a678974d1f3aa55f6cc34dc480169d58f2e6d8958895d68845fa4ab566509e", size = 657736 }, { url = "https://files.pythonhosted.org/packages/f0/07/33bd7a3dcde1db7259371d026ce76be1eb653d2d892334fc79a500b3c5ee/greenlet-3.2.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:47aeadd1e8fbdef8fdceb8fb4edc0cbb398a57568d56fd68f2bc00d0d809e6b6", size = 645798 },
{ url = "https://files.pythonhosted.org/packages/d9/42/b87bc2a81e3a62c3de2b0d550bf91a86939442b7ff85abb94eec3fc0e6aa/greenlet-3.1.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:efc0f674aa41b92da8c49e0346318c6075d734994c3c4e4430b1c3f853e498e4", size = 660347 }, { url = "https://files.pythonhosted.org/packages/35/5b/33c221a6a867030b0b770513a1b78f6c30e04294131dafdc8da78906bbe6/greenlet-3.2.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:18adc14ab154ca6e53eecc9dc50ff17aeb7ba70b7e14779b26e16d71efa90038", size = 648271 },
{ url = "https://files.pythonhosted.org/packages/37/fa/71599c3fd06336cdc3eac52e6871cfebab4d9d70674a9a9e7a482c318e99/greenlet-3.1.1-cp313-cp313-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0153404a4bb921f0ff1abeb5ce8a5131da56b953eda6e14b88dc6bbc04d2049e", size = 615583 }, { url = "https://files.pythonhosted.org/packages/4d/dd/d6452248fa6093504e3b7525dc2bdc4e55a4296ec6ee74ba241a51d852e2/greenlet-3.2.0-cp313-cp313-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:e8622b33d8694ec373ad55050c3d4e49818132b44852158442e1931bb02af336", size = 606779 },
{ url = "https://files.pythonhosted.org/packages/4e/96/e9ef85de031703ee7a4483489b40cf307f93c1824a02e903106f2ea315fe/greenlet-3.1.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:275f72decf9932639c1c6dd1013a1bc266438eb32710016a1c742df5da6e60a1", size = 1133039 }, { url = "https://files.pythonhosted.org/packages/9d/24/160f04d2589bcb15b8661dcd1763437b22e01643626899a4139bf98f02af/greenlet-3.2.0-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:e8ac9a2c20fbff3d0b853e9ef705cdedb70d9276af977d1ec1cde86a87a4c821", size = 1117968 },
{ url = "https://files.pythonhosted.org/packages/87/76/b2b6362accd69f2d1889db61a18c94bc743e961e3cab344c2effaa4b4a25/greenlet-3.1.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:c4aab7f6381f38a4b42f269057aee279ab0fc7bf2e929e3d4abfae97b682a12c", size = 1160716 }, { url = "https://files.pythonhosted.org/packages/6c/ff/c6e3f3a5168fef5209cfd9498b2b5dd77a0bf29dfc686a03dcc614cf4432/greenlet-3.2.0-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:cd37273dc7ca1d5da149b58c8b3ce0711181672ba1b09969663905a765affe21", size = 1145510 },
{ url = "https://files.pythonhosted.org/packages/1f/1b/54336d876186920e185066d8c3024ad55f21d7cc3683c856127ddb7b13ce/greenlet-3.1.1-cp313-cp313-win_amd64.whl", hash = "sha256:b42703b1cf69f2aa1df7d1030b9d77d3e584a70755674d60e710f0af570f3761", size = 299490 }, { url = "https://files.pythonhosted.org/packages/dc/62/5215e374819052e542b5bde06bd7d4a171454b6938c96a2384f21cb94279/greenlet-3.2.0-cp313-cp313-win_amd64.whl", hash = "sha256:8a8940a8d301828acd8b9f3f85db23069a692ff2933358861b19936e29946b95", size = 296004 },
{ url = "https://files.pythonhosted.org/packages/5f/17/bea55bf36990e1638a2af5ba10c1640273ef20f627962cf97107f1e5d637/greenlet-3.1.1-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f1695e76146579f8c06c1509c7ce4dfe0706f49c6831a817ac04eebb2fd02011", size = 643731 }, { url = "https://files.pythonhosted.org/packages/62/6d/dc9c909cba5cbf4b0833fce69912927a8ca74791c23c47b9fd4f28092108/greenlet-3.2.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ee59db626760f1ca8da697a086454210d36a19f7abecc9922a2374c04b47735b", size = 629900 },
{ url = "https://files.pythonhosted.org/packages/78/d2/aa3d2157f9ab742a08e0fd8f77d4699f37c22adfbfeb0c610a186b5f75e0/greenlet-3.1.1-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7876452af029456b3f3549b696bb36a06db7c90747740c5302f74a9e9fa14b13", size = 649304 }, { url = "https://files.pythonhosted.org/packages/5e/a9/f3f304fbbbd604858ff3df303d7fa1d8f7f9e45a6ef74481aaf03aaac021/greenlet-3.2.0-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7154b13ef87a8b62fc05419f12d75532d7783586ad016c57b5de8a1c6feeb517", size = 635270 },
{ url = "https://files.pythonhosted.org/packages/f1/8e/d0aeffe69e53ccff5a28fa86f07ad1d2d2d6537a9506229431a2a02e2f15/greenlet-3.1.1-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4ead44c85f8ab905852d3de8d86f6f8baf77109f9da589cb4fa142bd3b57b475", size = 646537 }, { url = "https://files.pythonhosted.org/packages/34/92/4b7b4e2e23ecc723cceef9fe3898e78c8e14e106cc7ba2f276a66161da3e/greenlet-3.2.0-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:199453d64b02d0c9d139e36d29681efd0e407ed8e2c0bf89d88878d6a787c28f", size = 632534 },
{ url = "https://files.pythonhosted.org/packages/05/79/e15408220bbb989469c8871062c97c6c9136770657ba779711b90870d867/greenlet-3.1.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8320f64b777d00dd7ccdade271eaf0cad6636343293a25074cc5566160e4de7b", size = 642506 }, { url = "https://files.pythonhosted.org/packages/da/7f/91f0ecbe72c9d789fb7f400b39da9d1e87fcc2cf8746a9636479ba79ab01/greenlet-3.2.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0010e928e1901d36625f21d008618273f9dda26b516dbdecf873937d39c9dff0", size = 628826 },
{ url = "https://files.pythonhosted.org/packages/18/87/470e01a940307796f1d25f8167b551a968540fbe0551c0ebb853cb527dd6/greenlet-3.1.1-cp313-cp313t-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:6510bf84a6b643dabba74d3049ead221257603a253d0a9873f55f6a59a65f822", size = 602753 }, { url = "https://files.pythonhosted.org/packages/9f/59/e449a44ce52b13751f55376d85adc155dd311608f6d2aa5b6bd2c8d15486/greenlet-3.2.0-cp313-cp313t-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:6005f7a86de836a1dc4b8d824a2339cdd5a1ca7cb1af55ea92575401f9952f4c", size = 593697 },
{ url = "https://files.pythonhosted.org/packages/e2/72/576815ba674eddc3c25028238f74d7b8068902b3968cbe456771b166455e/greenlet-3.1.1-cp313-cp313t-musllinux_1_1_aarch64.whl", hash = "sha256:04b013dc07c96f83134b1e99888e7a79979f1a247e2a9f59697fa14b5862ed01", size = 1122731 }, { url = "https://files.pythonhosted.org/packages/bb/09/cca3392927c5c990b7a8ede64ccd0712808438d6490d63ce6b8704d6df5f/greenlet-3.2.0-cp313-cp313t-musllinux_1_1_aarch64.whl", hash = "sha256:17fd241c0d50bacb7ce8ff77a30f94a2d0ca69434ba2e0187cf95a5414aeb7e1", size = 1105762 },
{ url = "https://files.pythonhosted.org/packages/ac/38/08cc303ddddc4b3d7c628c3039a61a3aae36c241ed01393d00c2fd663473/greenlet-3.1.1-cp313-cp313t-musllinux_1_1_x86_64.whl", hash = "sha256:411f015496fec93c1c8cd4e5238da364e1da7a124bcb293f085bf2860c32c6f6", size = 1142112 }, { url = "https://files.pythonhosted.org/packages/4d/b9/3d201f819afc3b7a8cd7ebe645f1a17799603e2d62c968154518f79f4881/greenlet-3.2.0-cp313-cp313t-musllinux_1_1_x86_64.whl", hash = "sha256:7b17a26abc6a1890bf77d5d6b71c0999705386b00060d15c10b8182679ff2790", size = 1125173 },
{ url = "https://files.pythonhosted.org/packages/80/7b/773a30602234597fc2882091f8e1d1a38ea0b4419d99ca7ed82c827e2c3a/greenlet-3.2.0-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:397b6bbda06f8fe895893d96218cd6f6d855a6701dc45012ebe12262423cec8b", size = 269908 },
] ]
[[package]] [[package]]
@ -143,12 +162,24 @@ wheels = [
] ]
[[package]] [[package]]
name = "iniconfig" name = "importlib-metadata"
version = "2.0.0" version = "8.6.1"
source = { registry = "https://pypi.org/simple" } source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/d7/4b/cbd8e699e64a6f16ca3a8220661b5f83792b3017d0f79807cb8708d33913/iniconfig-2.0.0.tar.gz", hash = "sha256:2d91e135bf72d31a410b17c16da610a82cb55f6b0477d1a902134b24a455b8b3", size = 4646 } dependencies = [
{ name = "zipp" },
]
sdist = { url = "https://files.pythonhosted.org/packages/33/08/c1395a292bb23fd03bdf572a1357c5a733d3eecbab877641ceacab23db6e/importlib_metadata-8.6.1.tar.gz", hash = "sha256:310b41d755445d74569f993ccfc22838295d9fe005425094fad953d7f15c8580", size = 55767 }
wheels = [ wheels = [
{ url = "https://files.pythonhosted.org/packages/ef/a6/62565a6e1cf69e10f5727360368e451d4b7f58beeac6173dc9db836a5b46/iniconfig-2.0.0-py3-none-any.whl", hash = "sha256:b6a85871a79d2e3b22d2d1b94ac2824226a63c6b741c88f7ae975f18b6778374", size = 5892 }, { url = "https://files.pythonhosted.org/packages/79/9d/0fb148dc4d6fa4a7dd1d8378168d9b4cd8d4560a6fbf6f0121c5fc34eb68/importlib_metadata-8.6.1-py3-none-any.whl", hash = "sha256:02a89390c1e15fdfdc0d7c6b25cb3e62650d0494005c97d6f148bf5b9787525e", size = 26971 },
]
[[package]]
name = "iniconfig"
version = "2.1.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f2/97/ebf4da567aa6827c909642694d71c9fcf53e5b504f2d96afea02718862f3/iniconfig-2.1.0.tar.gz", hash = "sha256:3abbd2e30b36733fee78f9c7f7308f2d0050e88f0087fd25c2645f63c773e1c7", size = 4793 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/2c/e1/e6716421ea10d38022b952c159d5161ca1193197fb744506875fbb87ea7b/iniconfig-2.1.0-py3-none-any.whl", hash = "sha256:9deba5723312380e77435581c6bf4935c94cbfab9b1ed33ef8d238ea168eb760", size = 6050 },
] ]
[[package]] [[package]]
@ -180,6 +211,94 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/23/d8/f15b40611c2d5753d1abb0ca0da0c75348daf1252220e5dda2867bd81062/msgspec-0.19.0-cp313-cp313-win_amd64.whl", hash = "sha256:317050bc0f7739cb30d257ff09152ca309bf5a369854bbf1e57dffc310c1f20f", size = 187432 },
]
[[package]]
name = "mypy"
version = "1.15.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "mypy-extensions" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ce/43/d5e49a86afa64bd3839ea0d5b9c7103487007d728e1293f52525d6d5486a/mypy-1.15.0.tar.gz", hash = "sha256:404534629d51d3efea5c800ee7c42b72a6554d6c400e6a79eafe15d11341fd43", size = 3239717 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/03/bc/f6339726c627bd7ca1ce0fa56c9ae2d0144604a319e0e339bdadafbbb599/mypy-1.15.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:2922d42e16d6de288022e5ca321cd0618b238cfc5570e0263e5ba0a77dbef56f", size = 10662338 },
{ url = "https://files.pythonhosted.org/packages/e2/90/8dcf506ca1a09b0d17555cc00cd69aee402c203911410136cd716559efe7/mypy-1.15.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2ee2d57e01a7c35de00f4634ba1bbf015185b219e4dc5909e281016df43f5ee5", size = 9787540 },
{ url = "https://files.pythonhosted.org/packages/05/05/a10f9479681e5da09ef2f9426f650d7b550d4bafbef683b69aad1ba87457/mypy-1.15.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:973500e0774b85d9689715feeffcc980193086551110fd678ebe1f4342fb7c5e", size = 11538051 },
{ url = "https://files.pythonhosted.org/packages/e9/9a/1f7d18b30edd57441a6411fcbc0c6869448d1a4bacbaee60656ac0fc29c8/mypy-1.15.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5a95fb17c13e29d2d5195869262f8125dfdb5c134dc8d9a9d0aecf7525b10c2c", size = 12286751 },
{ url = "https://files.pythonhosted.org/packages/72/af/19ff499b6f1dafcaf56f9881f7a965ac2f474f69f6f618b5175b044299f5/mypy-1.15.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:1905f494bfd7d85a23a88c5d97840888a7bd516545fc5aaedff0267e0bb54e2f", size = 12421783 },
{ url = "https://files.pythonhosted.org/packages/96/39/11b57431a1f686c1aed54bf794870efe0f6aeca11aca281a0bd87a5ad42c/mypy-1.15.0-cp311-cp311-win_amd64.whl", hash = "sha256:c9817fa23833ff189db061e6d2eff49b2f3b6ed9856b4a0a73046e41932d744f", size = 9265618 },
{ url = "https://files.pythonhosted.org/packages/98/3a/03c74331c5eb8bd025734e04c9840532226775c47a2c39b56a0c8d4f128d/mypy-1.15.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:aea39e0583d05124836ea645f412e88a5c7d0fd77a6d694b60d9b6b2d9f184fd", size = 10793981 },
{ url = "https://files.pythonhosted.org/packages/f0/1a/41759b18f2cfd568848a37c89030aeb03534411eef981df621d8fad08a1d/mypy-1.15.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:2f2147ab812b75e5b5499b01ade1f4a81489a147c01585cda36019102538615f", size = 9749175 },
{ url = "https://files.pythonhosted.org/packages/12/7e/873481abf1ef112c582db832740f4c11b2bfa510e829d6da29b0ab8c3f9c/mypy-1.15.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ce436f4c6d218a070048ed6a44c0bbb10cd2cc5e272b29e7845f6a2f57ee4464", size = 11455675 },
{ url = "https://files.pythonhosted.org/packages/b3/d0/92ae4cde706923a2d3f2d6c39629134063ff64b9dedca9c1388363da072d/mypy-1.15.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:8023ff13985661b50a5928fc7a5ca15f3d1affb41e5f0a9952cb68ef090b31ee", size = 12410020 },
{ url = "https://files.pythonhosted.org/packages/46/8b/df49974b337cce35f828ba6fda228152d6db45fed4c86ba56ffe442434fd/mypy-1.15.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:1124a18bc11a6a62887e3e137f37f53fbae476dc36c185d549d4f837a2a6a14e", size = 12498582 },
{ url = "https://files.pythonhosted.org/packages/13/50/da5203fcf6c53044a0b699939f31075c45ae8a4cadf538a9069b165c1050/mypy-1.15.0-cp312-cp312-win_amd64.whl", hash = "sha256:171a9ca9a40cd1843abeca0e405bc1940cd9b305eaeea2dda769ba096932bb22", size = 9366614 },
{ url = "https://files.pythonhosted.org/packages/6a/9b/fd2e05d6ffff24d912f150b87db9e364fa8282045c875654ce7e32fffa66/mypy-1.15.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:93faf3fdb04768d44bf28693293f3904bbb555d076b781ad2530214ee53e3445", size = 10788592 },
{ url = "https://files.pythonhosted.org/packages/74/37/b246d711c28a03ead1fd906bbc7106659aed7c089d55fe40dd58db812628/mypy-1.15.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:811aeccadfb730024c5d3e326b2fbe9249bb7413553f15499a4050f7c30e801d", size = 9753611 },
{ url = "https://files.pythonhosted.org/packages/a6/ac/395808a92e10cfdac8003c3de9a2ab6dc7cde6c0d2a4df3df1b815ffd067/mypy-1.15.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:98b7b9b9aedb65fe628c62a6dc57f6d5088ef2dfca37903a7d9ee374d03acca5", size = 11438443 },
{ url = "https://files.pythonhosted.org/packages/d2/8b/801aa06445d2de3895f59e476f38f3f8d610ef5d6908245f07d002676cbf/mypy-1.15.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c43a7682e24b4f576d93072216bf56eeff70d9140241f9edec0c104d0c515036", size = 12402541 },
{ url = "https://files.pythonhosted.org/packages/c7/67/5a4268782eb77344cc613a4cf23540928e41f018a9a1ec4c6882baf20ab8/mypy-1.15.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:baefc32840a9f00babd83251560e0ae1573e2f9d1b067719479bfb0e987c6357", size = 12494348 },
{ url = "https://files.pythonhosted.org/packages/83/3e/57bb447f7bbbfaabf1712d96f9df142624a386d98fb026a761532526057e/mypy-1.15.0-cp313-cp313-win_amd64.whl", hash = "sha256:b9378e2c00146c44793c98b8d5a61039a048e31f429fb0eb546d93f4b000bedf", size = 9373648 },
{ url = "https://files.pythonhosted.org/packages/09/4e/a7d65c7322c510de2c409ff3828b03354a7c43f5a8ed458a7a131b41c7b9/mypy-1.15.0-py3-none-any.whl", hash = "sha256:5469affef548bd1895d86d3bf10ce2b44e33d86923c29e4d675b3e323437ea3e", size = 2221777 },
]
[[package]]
name = "mypy-extensions"
version = "1.0.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/98/a4/1ab47638b92648243faf97a5aeb6ea83059cc3624972ab6b8d2316078d3f/mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782", size = 4433 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/2a/e2/5d3f6ada4297caebe1a2add3b126fe800c96f56dbe5d1988a2cbe0b267aa/mypy_extensions-1.0.0-py3-none-any.whl", hash = "sha256:4392f6c0eb8a5668a69e23d168ffa70f0be9ccfd32b5cc2d26a34ae5b844552d", size = 4695 },
]
[[package]]
name = "numpy"
version = "2.2.5"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/dc/b2/ce4b867d8cd9c0ee84938ae1e6a6f7926ebf928c9090d036fc3c6a04f946/numpy-2.2.5.tar.gz", hash = "sha256:a9c0d994680cd991b1cb772e8b297340085466a6fe964bc9d4e80f5e2f43c291", size = 20273920 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f5/fb/e4e4c254ba40e8f0c78218f9e86304628c75b6900509b601c8433bdb5da7/numpy-2.2.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c42365005c7a6c42436a54d28c43fe0e01ca11eb2ac3cefe796c25a5f98e5e9b", size = 21256475 },
{ url = "https://files.pythonhosted.org/packages/81/32/dd1f7084f5c10b2caad778258fdaeedd7fbd8afcd2510672811e6138dfac/numpy-2.2.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:498815b96f67dc347e03b719ef49c772589fb74b8ee9ea2c37feae915ad6ebda", size = 14461474 },
{ url = "https://files.pythonhosted.org/packages/0e/65/937cdf238ef6ac54ff749c0f66d9ee2b03646034c205cea9b6c51f2f3ad1/numpy-2.2.5-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:6411f744f7f20081b1b4e7112e0f4c9c5b08f94b9f086e6f0adf3645f85d3a4d", size = 5426875 },
{ url = "https://files.pythonhosted.org/packages/25/17/814515fdd545b07306eaee552b65c765035ea302d17de1b9cb50852d2452/numpy-2.2.5-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:9de6832228f617c9ef45d948ec1cd8949c482238d68b2477e6f642c33a7b0a54", size = 6969176 },
{ url = "https://files.pythonhosted.org/packages/e5/32/a66db7a5c8b5301ec329ab36d0ecca23f5e18907f43dbd593c8ec326d57c/numpy-2.2.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:369e0d4647c17c9363244f3468f2227d557a74b6781cb62ce57cf3ef5cc7c610", size = 14374850 },
{ url = "https://files.pythonhosted.org/packages/ad/c9/1bf6ada582eebcbe8978f5feb26584cd2b39f94ededeea034ca8f84af8c8/numpy-2.2.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:262d23f383170f99cd9191a7c85b9a50970fe9069b2f8ab5d786eca8a675d60b", size = 16430306 },
{ url = "https://files.pythonhosted.org/packages/6a/f0/3f741863f29e128f4fcfdb99253cc971406b402b4584663710ee07f5f7eb/numpy-2.2.5-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:aa70fdbdc3b169d69e8c59e65c07a1c9351ceb438e627f0fdcd471015cd956be", size = 15884767 },
{ url = "https://files.pythonhosted.org/packages/98/d9/4ccd8fd6410f7bf2d312cbc98892e0e43c2fcdd1deae293aeb0a93b18071/numpy-2.2.5-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:37e32e985f03c06206582a7323ef926b4e78bdaa6915095ef08070471865b906", size = 18219515 },
{ url = "https://files.pythonhosted.org/packages/b1/56/783237243d4395c6dd741cf16eeb1a9035ee3d4310900e6b17e875d1b201/numpy-2.2.5-cp311-cp311-win32.whl", hash = "sha256:f5045039100ed58fa817a6227a356240ea1b9a1bc141018864c306c1a16d4175", size = 6607842 },
{ url = "https://files.pythonhosted.org/packages/98/89/0c93baaf0094bdaaaa0536fe61a27b1dce8a505fa262a865ec142208cfe9/numpy-2.2.5-cp311-cp311-win_amd64.whl", hash = "sha256:b13f04968b46ad705f7c8a80122a42ae8f620536ea38cf4bdd374302926424dd", size = 12949071 },
{ url = "https://files.pythonhosted.org/packages/e2/f7/1fd4ff108cd9d7ef929b8882692e23665dc9c23feecafbb9c6b80f4ec583/numpy-2.2.5-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:ee461a4eaab4f165b68780a6a1af95fb23a29932be7569b9fab666c407969051", size = 20948633 },
{ url = "https://files.pythonhosted.org/packages/12/03/d443c278348371b20d830af155ff2079acad6a9e60279fac2b41dbbb73d8/numpy-2.2.5-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:ec31367fd6a255dc8de4772bd1658c3e926d8e860a0b6e922b615e532d320ddc", size = 14176123 },
{ url = "https://files.pythonhosted.org/packages/2b/0b/5ca264641d0e7b14393313304da48b225d15d471250376f3fbdb1a2be603/numpy-2.2.5-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:47834cde750d3c9f4e52c6ca28a7361859fcaf52695c7dc3cc1a720b8922683e", size = 5163817 },
{ url = "https://files.pythonhosted.org/packages/04/b3/d522672b9e3d28e26e1613de7675b441bbd1eaca75db95680635dd158c67/numpy-2.2.5-cp312-cp312-macosx_14_0_x86_64.whl", hash = "sha256:2c1a1c6ccce4022383583a6ded7bbcda22fc635eb4eb1e0a053336425ed36dfa", size = 6698066 },
{ url = "https://files.pythonhosted.org/packages/a0/93/0f7a75c1ff02d4b76df35079676b3b2719fcdfb39abdf44c8b33f43ef37d/numpy-2.2.5-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9d75f338f5f79ee23548b03d801d28a505198297534f62416391857ea0479571", size = 14087277 },
{ url = "https://files.pythonhosted.org/packages/b0/d9/7c338b923c53d431bc837b5b787052fef9ae68a56fe91e325aac0d48226e/numpy-2.2.5-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3a801fef99668f309b88640e28d261991bfad9617c27beda4a3aec4f217ea073", size = 16135742 },
{ url = "https://files.pythonhosted.org/packages/2d/10/4dec9184a5d74ba9867c6f7d1e9f2e0fb5fe96ff2bf50bb6f342d64f2003/numpy-2.2.5-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:abe38cd8381245a7f49967a6010e77dbf3680bd3627c0fe4362dd693b404c7f8", size = 15581825 },
{ url = "https://files.pythonhosted.org/packages/80/1f/2b6fcd636e848053f5b57712a7d1880b1565eec35a637fdfd0a30d5e738d/numpy-2.2.5-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:5a0ac90e46fdb5649ab6369d1ab6104bfe5854ab19b645bf5cda0127a13034ae", size = 17899600 },
{ url = "https://files.pythonhosted.org/packages/ec/87/36801f4dc2623d76a0a3835975524a84bd2b18fe0f8835d45c8eae2f9ff2/numpy-2.2.5-cp312-cp312-win32.whl", hash = "sha256:0cd48122a6b7eab8f06404805b1bd5856200e3ed6f8a1b9a194f9d9054631beb", size = 6312626 },
{ url = "https://files.pythonhosted.org/packages/8b/09/4ffb4d6cfe7ca6707336187951992bd8a8b9142cf345d87ab858d2d7636a/numpy-2.2.5-cp312-cp312-win_amd64.whl", hash = "sha256:ced69262a8278547e63409b2653b372bf4baff0870c57efa76c5703fd6543282", size = 12645715 },
{ url = "https://files.pythonhosted.org/packages/e2/a0/0aa7f0f4509a2e07bd7a509042967c2fab635690d4f48c6c7b3afd4f448c/numpy-2.2.5-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:059b51b658f4414fff78c6d7b1b4e18283ab5fa56d270ff212d5ba0c561846f4", size = 20935102 },
{ url = "https://files.pythonhosted.org/packages/7e/e4/a6a9f4537542912ec513185396fce52cdd45bdcf3e9d921ab02a93ca5aa9/numpy-2.2.5-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:47f9ed103af0bc63182609044b0490747e03bd20a67e391192dde119bf43d52f", size = 14191709 },
{ url = "https://files.pythonhosted.org/packages/be/65/72f3186b6050bbfe9c43cb81f9df59ae63603491d36179cf7a7c8d216758/numpy-2.2.5-cp313-cp313-macosx_14_0_arm64.whl", hash = "sha256:261a1ef047751bb02f29dfe337230b5882b54521ca121fc7f62668133cb119c9", size = 5149173 },
{ url = "https://files.pythonhosted.org/packages/e5/e9/83e7a9432378dde5802651307ae5e9ea07bb72b416728202218cd4da2801/numpy-2.2.5-cp313-cp313-macosx_14_0_x86_64.whl", hash = "sha256:4520caa3807c1ceb005d125a75e715567806fed67e315cea619d5ec6e75a4191", size = 6684502 },
{ url = "https://files.pythonhosted.org/packages/ea/27/b80da6c762394c8ee516b74c1f686fcd16c8f23b14de57ba0cad7349d1d2/numpy-2.2.5-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3d14b17b9be5f9c9301f43d2e2a4886a33b53f4e6fdf9ca2f4cc60aeeee76372", size = 14084417 },
{ url = "https://files.pythonhosted.org/packages/aa/fc/ebfd32c3e124e6a1043e19c0ab0769818aa69050ce5589b63d05ff185526/numpy-2.2.5-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2ba321813a00e508d5421104464510cc962a6f791aa2fca1c97b1e65027da80d", size = 16133807 },
{ url = "https://files.pythonhosted.org/packages/bf/9b/4cc171a0acbe4666f7775cfd21d4eb6bb1d36d3a0431f48a73e9212d2278/numpy-2.2.5-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:a4cbdef3ddf777423060c6f81b5694bad2dc9675f110c4b2a60dc0181543fac7", size = 15575611 },
{ url = "https://files.pythonhosted.org/packages/a3/45/40f4135341850df48f8edcf949cf47b523c404b712774f8855a64c96ef29/numpy-2.2.5-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:54088a5a147ab71a8e7fdfd8c3601972751ded0739c6b696ad9cb0343e21ab73", size = 17895747 },
{ url = "https://files.pythonhosted.org/packages/f8/4c/b32a17a46f0ffbde8cc82df6d3daeaf4f552e346df143e1b188a701a8f09/numpy-2.2.5-cp313-cp313-win32.whl", hash = "sha256:c8b82a55ef86a2d8e81b63da85e55f5537d2157165be1cb2ce7cfa57b6aef38b", size = 6309594 },
{ url = "https://files.pythonhosted.org/packages/13/ae/72e6276feb9ef06787365b05915bfdb057d01fceb4a43cb80978e518d79b/numpy-2.2.5-cp313-cp313-win_amd64.whl", hash = "sha256:d8882a829fd779f0f43998e931c466802a77ca1ee0fe25a3abe50278616b1471", size = 12638356 },
{ url = "https://files.pythonhosted.org/packages/79/56/be8b85a9f2adb688e7ded6324e20149a03541d2b3297c3ffc1a73f46dedb/numpy-2.2.5-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:e8b025c351b9f0e8b5436cf28a07fa4ac0204d67b38f01433ac7f9b870fa38c6", size = 20963778 },
{ url = "https://files.pythonhosted.org/packages/ff/77/19c5e62d55bff507a18c3cdff82e94fe174957bad25860a991cac719d3ab/numpy-2.2.5-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:8dfa94b6a4374e7851bbb6f35e6ded2120b752b063e6acdd3157e4d2bb922eba", size = 14207279 },
{ url = "https://files.pythonhosted.org/packages/75/22/aa11f22dc11ff4ffe4e849d9b63bbe8d4ac6d5fae85ddaa67dfe43be3e76/numpy-2.2.5-cp313-cp313t-macosx_14_0_arm64.whl", hash = "sha256:97c8425d4e26437e65e1d189d22dff4a079b747ff9c2788057bfb8114ce1e133", size = 5199247 },
{ url = "https://files.pythonhosted.org/packages/4f/6c/12d5e760fc62c08eded0394f62039f5a9857f758312bf01632a81d841459/numpy-2.2.5-cp313-cp313t-macosx_14_0_x86_64.whl", hash = "sha256:352d330048c055ea6db701130abc48a21bec690a8d38f8284e00fab256dc1376", size = 6711087 },
{ url = "https://files.pythonhosted.org/packages/ef/94/ece8280cf4218b2bee5cec9567629e61e51b4be501e5c6840ceb593db945/numpy-2.2.5-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8b4c0773b6ada798f51f0f8e30c054d32304ccc6e9c5d93d46cb26f3d385ab19", size = 14059964 },
{ url = "https://files.pythonhosted.org/packages/39/41/c5377dac0514aaeec69115830a39d905b1882819c8e65d97fc60e177e19e/numpy-2.2.5-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:55f09e00d4dccd76b179c0f18a44f041e5332fd0e022886ba1c0bbf3ea4a18d0", size = 16121214 },
{ url = "https://files.pythonhosted.org/packages/db/54/3b9f89a943257bc8e187145c6bc0eb8e3d615655f7b14e9b490b053e8149/numpy-2.2.5-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:02f226baeefa68f7d579e213d0f3493496397d8f1cff5e2b222af274c86a552a", size = 15575788 },
{ url = "https://files.pythonhosted.org/packages/b1/c4/2e407e85df35b29f79945751b8f8e671057a13a376497d7fb2151ba0d290/numpy-2.2.5-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:c26843fd58f65da9491165072da2cccc372530681de481ef670dcc8e27cfb066", size = 17893672 },
{ url = "https://files.pythonhosted.org/packages/29/7e/d0b44e129d038dba453f00d0e29ebd6eaf2f06055d72b95b9947998aca14/numpy-2.2.5-cp313-cp313t-win32.whl", hash = "sha256:1a161c2c79ab30fe4501d5a2bbfe8b162490757cf90b7f05be8b80bc02f7bb8e", size = 6377102 },
{ url = "https://files.pythonhosted.org/packages/63/be/b85e4aa4bf42c6502851b971f1c326d583fcc68227385f92089cf50a7b45/numpy-2.2.5-cp313-cp313t-win_amd64.whl", hash = "sha256:d403c84991b5ad291d3809bace5e85f4bbf44a04bdc9a88ed2bb1807b3360bb8", size = 12750096 },
]
[[package]]
name = "outcome"
version = "1.3.0.post0"
@@ -194,25 +313,25 @@ wheels = [
[[package]]
name = "packaging"
version = "25.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/a1/d4/1fc4078c65507b51b96ca8f8c3ba19e6a61c8253c72794544580a7b6c24d/packaging-25.0.tar.gz", hash = "sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f", size = 165727 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/20/12/38679034af332785aac8774540895e234f4d07f7545804097de4b666afd8/packaging-25.0-py3-none-any.whl", hash = "sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484", size = 66469 },
]
[[package]]
name = "pdbp"
version = "1.7.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "pygments" },
{ name = "tabcompleter" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a5/7e/c2e6e6a27417ac9d23c1a8534c72f451463c71776cc182272cadaec78f6d/pdbp-1.7.0.tar.gz", hash = "sha256:d0a5b275720c451f5574427e35523aeb61c244f3faf622a80fe03019ef82d380", size = 25481 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/86/2f/1f0144b14553ad32a8d0afa38b832c4b117694484c32aef2d939dc96f20a/pdbp-1.7.0-py3-none-any.whl", hash = "sha256:6ad99cb4e9f2fc1a5b4ef4f2e0acdb28b18b271bf71f6c9f997b652d935caa19", size = 21614 },
]
[[package]]
@@ -238,14 +357,29 @@ wheels = [
[[package]]
name = "prompt-toolkit"
version = "3.0.51"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "wcwidth" },
]
sdist = { url = "https://files.pythonhosted.org/packages/bb/6e/9d084c929dfe9e3bfe0c6a47e31f78a25c54627d64a66e884a8bf5474f1c/prompt_toolkit-3.0.51.tar.gz", hash = "sha256:931a162e3b27fc90c86f1b48bb1fb2c528c2761475e57c9c06de13311c7b54ed", size = 428940 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ce/4f/5249960887b1fbe561d9ff265496d170b55a735b76724f10ef19f9e40716/prompt_toolkit-3.0.51-py3-none-any.whl", hash = "sha256:52742911fde84e2d423e2f9a4cf1de7d7ac4e51958f648d9540e0fb8db077b07", size = 387810 },
]
[[package]]
name = "psutil"
version = "7.0.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/2a/80/336820c1ad9286a4ded7e845b2eccfcb27851ab8ac6abece774a6ff4d3de/psutil-7.0.0.tar.gz", hash = "sha256:7be9c3eba38beccb6495ea33afd982a44074b78f28c434a1f51cc07fd315c456", size = 497003 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ed/e6/2d26234410f8b8abdbf891c9da62bee396583f713fb9f3325a4760875d22/psutil-7.0.0-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:101d71dc322e3cffd7cea0650b09b3d08b8e7c4109dd6809fe452dfd00e58b25", size = 238051 },
{ url = "https://files.pythonhosted.org/packages/04/8b/30f930733afe425e3cbfc0e1468a30a18942350c1a8816acfade80c005c4/psutil-7.0.0-cp36-abi3-macosx_11_0_arm64.whl", hash = "sha256:39db632f6bb862eeccf56660871433e111b6ea58f2caea825571951d4b6aa3da", size = 239535 },
{ url = "https://files.pythonhosted.org/packages/2a/ed/d362e84620dd22876b55389248e522338ed1bf134a5edd3b8231d7207f6d/psutil-7.0.0-cp36-abi3-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1fcee592b4c6f146991ca55919ea3d1f8926497a713ed7faaf8225e174581e91", size = 275004 },
{ url = "https://files.pythonhosted.org/packages/bf/b9/b0eb3f3cbcb734d930fdf839431606844a825b23eaf9a6ab371edac8162c/psutil-7.0.0-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b1388a4f6875d7e2aff5c4ca1cc16c545ed41dd8bb596cefea80111db353a34", size = 277986 },
{ url = "https://files.pythonhosted.org/packages/eb/a2/709e0fe2f093556c17fbafda93ac032257242cabcc7ff3369e2cb76a97aa/psutil-7.0.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5f098451abc2828f7dc6b58d44b532b22f2088f4999a937557b603ce72b1993", size = 279544 },
{ url = "https://files.pythonhosted.org/packages/50/e6/eecf58810b9d12e6427369784efe814a1eec0f492084ce8eb8f4d89d6d61/psutil-7.0.0-cp37-abi3-win32.whl", hash = "sha256:ba3fcef7523064a6c9da440fc4d6bd07da93ac726b5733c29027d7dc95b39d99", size = 241053 },
{ url = "https://files.pythonhosted.org/packages/50/1b/6921afe68c74868b4c9fa424dad3be35b095e16687989ebbb50ce4fceb7c/psutil-7.0.0-cp37-abi3-win_amd64.whl", hash = "sha256:4cf3d4eb1aa9b348dec30105c55cd9b7d4629285735a102beb4441e38db90553", size = 244885 },
]
[[package]]
@@ -349,6 +483,7 @@ name = "tractor"
version = "0.1.0a6.dev0"
source = { editable = "." }
dependencies = [
{ name = "bidict" },
{ name = "cffi" },
{ name = "colorlog" },
{ name = "msgspec" },
@@ -361,16 +496,21 @@ dependencies = [
[package.dev-dependencies]
dev = [
{ name = "greenback" },
{ name = "mypy" },
{ name = "numpy" },
{ name = "pexpect" },
{ name = "prompt-toolkit" },
{ name = "psutil" },
{ name = "pyperclip" },
{ name = "pytest" },
{ name = "stackscope" },
{ name = "trio-typing" },
{ name = "xonsh" },
]
[package.metadata]
requires-dist = [
{ name = "bidict", specifier = ">=0.23.1" },
{ name = "cffi", specifier = ">=1.17.1" },
{ name = "colorlog", specifier = ">=6.8.2,<7" },
{ name = "msgspec", specifier = ">=0.19.0" },
@@ -383,11 +523,15 @@ requires-dist = [
[package.metadata.requires-dev]
dev = [
{ name = "greenback", specifier = ">=1.2.1,<2" },
{ name = "mypy", specifier = ">=1.15.0" },
{ name = "numpy", specifier = ">=2.2.4" },
{ name = "pexpect", specifier = ">=4.9.0,<5" },
{ name = "prompt-toolkit", specifier = ">=3.0.50" },
{ name = "psutil", specifier = ">=7.0.0" },
{ name = "pyperclip", specifier = ">=1.9.0" },
{ name = "pytest", specifier = ">=8.3.5" },
{ name = "stackscope", specifier = ">=0.2.2,<0.3" },
{ name = "trio-typing", specifier = ">=0.10.0" },
{ name = "xonsh", specifier = ">=0.19.2" },
]
@@ -405,7 +549,7 @@ wheels = [
[[package]]
name = "trio"
version = "0.30.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "attrs" },
@@ -415,9 +559,35 @@ dependencies = [
{ name = "sniffio" },
{ name = "sortedcontainers" },
]
sdist = { url = "https://files.pythonhosted.org/packages/01/c1/68d582b4d3a1c1f8118e18042464bb12a7c1b75d64d75111b297687041e3/trio-0.30.0.tar.gz", hash = "sha256:0781c857c0c81f8f51e0089929a26b5bb63d57f927728a5586f7e36171f064df", size = 593776 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/69/8e/3f6dfda475ecd940e786defe6df6c500734e686c9cd0a0f8ef6821e9b2f2/trio-0.30.0-py3-none-any.whl", hash = "sha256:3bf4f06b8decf8d3cf00af85f40a89824669e2d033bb32469d34840edcfc22a5", size = 499194 },
]
[[package]]
name = "trio-typing"
version = "0.10.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "async-generator" },
{ name = "importlib-metadata" },
{ name = "mypy-extensions" },
{ name = "packaging" },
{ name = "trio" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/b5/74/a87aafa40ec3a37089148b859892cbe2eef08d132c816d58a60459be5337/trio-typing-0.10.0.tar.gz", hash = "sha256:065ee684296d52a8ab0e2374666301aec36ee5747ac0e7a61f230250f8907ac3", size = 38747 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/89/ff/9bd795273eb14fac7f6a59d16cc8c4d0948a619a1193d375437c7f50f3eb/trio_typing-0.10.0-py3-none-any.whl", hash = "sha256:6d0e7ec9d837a2fe03591031a172533fbf4a1a95baf369edebfc51d5a49f0264", size = 42224 },
]
[[package]]
name = "typing-extensions"
version = "4.13.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f6/37/23083fcd6e35492953e8d2aaaa68b860eb422b34627b13f2ce3eb6106061/typing_extensions-4.13.2.tar.gz", hash = "sha256:e6c81219bd689f51865d9e372991c540bda33a0379d5573cddb9a3a23f7caaef", size = 106967 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/8b/54/b1ae86c0973cc6f0210b53d508ca3641fb6d0c56823f288d108bc7ab3cc8/typing_extensions-4.13.2-py3-none-any.whl", hash = "sha256:a439e7c04b49fec3e5d3e2beaa21755cadbbdc391694e28ccdd36ca4a1408f8c", size = 45806 },
]
[[package]]
@@ -484,13 +654,22 @@ wheels = [
[[package]]
name = "xonsh"
version = "0.19.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/4a/5a/7d28dffedef266b3cbde5c0ba63f7f861bd5ff5c35bfa80df269f61000b4/xonsh-0.19.3.tar.gz", hash = "sha256:f3a58752b12f02bf2b17b91e88a83615115bb4883032cf8ef36e451964f29e90", size = 801379 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/fc/66/06310078bec654c792d8f3912c330efe8dbda13867916f4922b6035f3287/xonsh-0.19.3-py310-none-any.whl", hash = "sha256:e0cd36b5a9765aa6f0e5365ac349fd3cbd452cc932d92c754de323dab2a8589a", size = 642609 },
{ url = "https://files.pythonhosted.org/packages/20/c6/f4924f231a0fdc74f9382ed3e58b2fe6d25c24e3861dde0d30ebec3beecb/xonsh-0.19.3-py311-none-any.whl", hash = "sha256:319f03034a4838041d2326785c1fde3a45c709e825451aa4ff01b803ca452856", size = 642576 },
{ url = "https://files.pythonhosted.org/packages/82/52/a9de7c31546fc236950aabe22205105eeec8cf30655a522ba9f9397d9352/xonsh-0.19.3-py312-none-any.whl", hash = "sha256:6339c72f3a36cf8022fc6daffb9b97571d3a32f31ef9ff0a41b1d5185724e8d7", size = 642587 },
{ url = "https://files.pythonhosted.org/packages/8b/60/bc91e414c75d902816356ec5103adc1fa1672038085b40275a291e149945/xonsh-0.19.3-py313-none-any.whl", hash = "sha256:1b1ca8fee195aab4bef36948aaf7580c2230580b5c0dd7c34a335fb84023efc4", size = 643111 },
{ url = "https://files.pythonhosted.org/packages/b8/b4/7bbf0096e909d332e2e81d0024660dfca69017c56ce43115098e841e1454/xonsh-0.19.3-py39-none-any.whl", hash = "sha256:80e3313fb375d49f0eef2f86375224b568b3cbdd019f63a6bc037117aac1704e", size = 634814 },
]
[[package]]
name = "zipp"
version = "3.21.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/3f/50/bad581df71744867e9468ebd0bcd6505de3b275e06f202c2cb016e3ff56f/zipp-3.21.0.tar.gz", hash = "sha256:2c9958f6430a2040341a52eb608ed6dd93ef4392e02ffe219417c1b28b5dd1f4", size = 24545 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b7/1a/7e4798e9339adc931158c9d69ecc34f5e6791489d469f5e50ec15e35f458/zipp-3.21.0-py3-none-any.whl", hash = "sha256:ac1bbe05fd2991f160ebce24ffbac5f6d11d83dc90891255885223d42b3cd931", size = 9630 },
] ]