Changelog
tractor 0.1.0a2 (2021-09-07)
Features
- Add tokio-style broadcast channels as a solution for #204, discussed thoroughly in trio/#987. This gives us local task broadcast functionality using a new `BroadcastReceiver` type which can wrap `trio.ReceiveChannel` and provide fan-out copies of a stream of data to every subscribed consumer. We use this new machinery to provide a `ReceiveMsgStream.subscribe()` async context manager which can be used by actor-local consumer tasks to easily pull from a shared and dynamic IPC stream. (#229)
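The fan-out behavior described above can be illustrated with a minimal, self-contained sketch. This uses plain `asyncio` rather than tractor's actual trio-based `BroadcastReceiver`, and the `BroadcastChannel` class is a hypothetical stand-in for illustration only:

```python
import asyncio


class BroadcastChannel:
    """Minimal fan-out broadcaster: every subscriber receives a copy of each item."""

    def __init__(self) -> None:
        self._subs: list[asyncio.Queue] = []

    def subscribe(self) -> asyncio.Queue:
        # each subscriber gets its own queue, i.e. its own copy of the stream
        q: asyncio.Queue = asyncio.Queue()
        self._subs.append(q)
        return q

    async def send(self, item) -> None:
        # deliver a copy of the item to every subscribed consumer
        for q in self._subs:
            await q.put(item)


async def main() -> None:
    chan = BroadcastChannel()
    a, b = chan.subscribe(), chan.subscribe()
    for i in range(3):
        await chan.send(i)
    # both subscribers see the full stream independently
    print([a.get_nowait() for _ in range(3)])  # [0, 1, 2]
    print([b.get_nowait() for _ in range(3)])  # [0, 1, 2]


asyncio.run(main())
```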
Bugfixes
- Handle broken channel/stream faults where the root's tty lock is left acquired by some child actor who went MIA and the root ends up hanging indefinitely. (#234) There are two parts here: we no longer shield-wait on the lock, and we now always do our best to release the lock on the expected worst-case connection faults.
Deprecations and Removals
- Drop stream "shielding" support which was originally added to sidestep a cancelled call to `.receive()`. In the original api design a stream instance was returned directly from a call to `Portal.run()` and thus there was no "exit phase" to handle cancellations and errors which would trigger implicit closure. Now that we have said enter/exit semantics with `Portal.open_stream_from()` and `Context.open_stream()` we can drop this implicit (and arguably confusing) behavior. (#230)
- Drop Python 3.7 support in preparation for supporting 3.9+ syntax. (#232)
tractor 0.1.0a1 (2021-08-01)
Features
- Updated our uni-directional streaming API (#206) to require a context manager style `async with Portal.open_stream_from(target) as stream:` which explicitly determines when to stop a stream in the calling (aka portal opening) actor, much like `async_generator.aclosing()` enforcement.
- Improved the `multiprocessing` backend sub-actor reaping (#208) during actor nursery exit, particularly during cancellation scenarios that previously might result in hard-to-debug hangs.
- Added initial bi-directional streaming support in #219 with follow-up debugger improvements via #220 using the new `tractor.Context` cross-actor task syncing system. The debugger upgrades add an edge-triggered last-in-tty-lock semaphore which allows the root process for a tree to avoid clobbering children who have queued to acquire the `pdb` repl, by waiting to cancel sub-actors until the lock is known to be released and has no pending waiters.
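The enter/exit closure semantics that the context-manager streaming style above enforces can be sketched generically with `contextlib.asynccontextmanager`. This is a conceptual stand-in, not tractor's implementation; the `open_stream_from` wrapper and `counter` generator here are illustrative names only:

```python
import asyncio
from contextlib import asynccontextmanager


async def counter(n: int):
    # a stand-in for a remote task's streamed output
    for i in range(n):
        yield i


@asynccontextmanager
async def open_stream_from(agen_fn, *args):
    # hypothetical sketch of context-manager streaming: the exit phase
    # guarantees the stream is closed, even on error or cancellation,
    # much like async_generator.aclosing()
    agen = agen_fn(*args)
    try:
        yield agen
    finally:
        await agen.aclose()


async def main() -> None:
    results = []
    async with open_stream_from(counter, 3) as stream:
        async for item in stream:
            results.append(item)
    # the stream is deterministically closed on context exit
    print(results)  # [0, 1, 2]


asyncio.run(main())
```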
Experiments and WIPs
- Initial optional `msgspec` serialization support in #214 which should hopefully land by the next release.
- Improved "infect `asyncio`" cross-loop task cancellation and error propagation by vastly simplifying the cross-loop-task streaming approach. We may end up just going with a use of `anyio` in the medium term to avoid re-doing work done by their cross-event-loop portals. See the `infect_asyncio` examples for details.
Improved Documentation
- Updated our readme to include more (and better) examples (with matching multi-terminal process monitoring shell commands) as well as added many more examples to the repo set.
- Added a readme "actors under the hood" section in an effort to guard against suggestions for changing the API away from `trio`'s tasks-as-functions style.
- Moved to using the sphinx book theme, though it needs some heavy tweaking and doesn't seem to show our logo on rtd :(
Trivial/Internal Changes
- Added a new `TransportClosed` internal exception/signal (#215) for catching TCP channel gentle closes instead of silently falling through the message handler loop via an async generator `return`.
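The idea of signaling a graceful close explicitly, rather than silently returning on EOF, might look like the following minimal sketch. The receive loop and the `read_chunk` callable are hypothetical illustrations, not tractor's actual transport code:

```python
class TransportClosed(Exception):
    """Signal that the peer closed the channel gracefully (EOF)."""


def recv_msgs(read_chunk):
    # read_chunk() -> bytes; an empty result means the TCP peer
    # closed its end of the connection
    while True:
        chunk = read_chunk()
        if chunk == b"":
            # raise an explicit, catchable signal instead of silently
            # returning from the generator
            raise TransportClosed("peer closed the connection")
        yield chunk


frames = iter([b"hello", b"world", b""])
msgs = []
try:
    for msg in recv_msgs(lambda: next(frames)):
        msgs.append(msg)
except TransportClosed:
    print("channel closed gracefully")
print(msgs)  # [b'hello', b'world']
```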
Deprecations and Removals
- Dropped support for invoking sync functions (#205) in other actors/processes since you can always wrap a sync function from an async one. Users can instead consider using `trio-parallel`, a project specifically geared toward purely synchronous calls in sub-processes.
- Deprecated our `tractor.run()` entrypoint (#197); the runtime is now either started implicitly on first actor nursery use or via an explicit call to `tractor.open_root_actor()`. Full removal of `tractor.run()` will come by the beta release.
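Wrapping a sync function from an async one, as suggested above, is straightforward with the stdlib. For example, with `asyncio.to_thread` (tractor itself is `trio` based, so this only illustrates the general pattern; `blocking_add` is a made-up example function):

```python
import asyncio
import time


def blocking_add(a: int, b: int) -> int:
    time.sleep(0.01)  # simulate a blocking sync call
    return a + b


async def main() -> None:
    # run the sync function on a worker thread and await its result,
    # instead of invoking it directly as an actor target
    result = await asyncio.to_thread(blocking_add, 2, 3)
    print(result)  # 5


asyncio.run(main())
```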
tractor 0.1.0a0 (2021-02-28)
Summary
- `trio` based process spawner (using `subprocess`)
- initial multi-process debugging with `pdb++`
- windows support using both `trio` and `multiprocessing` spawners
- "portal" api for cross-process, structured concurrent, (streaming) IPC