@@ -4,6 +4,176 @@ Changelog
.. towncrier release notes start

tractor 0.1.0a4 (2021-12-17)
============================

Features
--------

- `#121 <https://github.com/goodboy/tractor/issues/121>`_: Add
  "infected ``asyncio`` mode"; a sub-system to spawn and control
  ``asyncio`` actors using ``trio``'s guest-mode.

  This gets us the following very interesting functionality:

  - ability to spawn an actor that has a process entry point of
    ``asyncio.run()`` by passing ``infect_asyncio=True`` to
    ``Portal.start_actor()`` (and friends).
  - the ``asyncio`` actor embeds ``trio`` using guest-mode and starts
    a main ``trio`` task which runs the ``tractor.Actor._async_main()``
    entry point and engages all the normal ``tractor`` runtime
    IPC/messaging machinery; for all purposes the actor is now running
    normally on a ``trio.run()``.
  - the actor can now make one-to-one task spawning requests to the
    underlying ``asyncio`` event loop using either of:

    * ``to_asyncio.run_task()`` to spawn and run an ``asyncio`` task to
      completion and block until a return value is delivered.
    * ``async with to_asyncio.open_channel_from():`` which spawns a task
      and hands it a pair of "memory channels" to allow for
      bi-directional streaming between the now SC-linked ``trio`` and
      ``asyncio`` tasks.

  The output from any call(s) to ``asyncio`` can be handled as normal in
  ``trio``/``tractor`` task operation with the caveat of the overhead due
  to guest-mode use.

  For more details see the `original PR
  <https://github.com/goodboy/tractor/pull/121>`_ and `issue
  <https://github.com/goodboy/tractor/issues/120>`_.
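
  As a rough illustration, a parent ``trio`` actor might drive an
  ``asyncio``-infected child along the lines of the sketch below. Only
  ``infect_asyncio=True`` and ``to_asyncio.run_task()`` come from this
  entry; the ``aio_sum()``/``trio_sum()`` targets, the kwarg-forwarding
  call style, and the nursery/portal boilerplate are assumptions shown
  for flavor, not verbatim API:

  .. code-block:: python

      import asyncio

      import trio
      import tractor
      from tractor import to_asyncio

      async def aio_sum(x: int, y: int) -> int:
          # plain asyncio coroutine scheduled on the guest-mode loop
          await asyncio.sleep(0.1)
          return x + y

      async def trio_sum(x: int, y: int) -> int:
          # runs as a trio task inside the infected child: spawn the
          # asyncio task and block until its return value is delivered
          # (kwarg forwarding through ``run_task()`` is assumed here)
          return await to_asyncio.run_task(aio_sum, x=x, y=y)

      async def main():
          # spawning boilerplate follows the usual tractor pattern
          async with tractor.open_nursery() as an:
              portal = await an.start_actor(
                  'aio_child',
                  enable_modules=[__name__],
                  infect_asyncio=True,  # process entry point is asyncio.run()
              )
              assert await portal.run(trio_sum, x=1, y=2) == 3
              await portal.cancel_actor()

      if __name__ == '__main__':
          trio.run(main)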

- `#257 <https://github.com/goodboy/tractor/issues/257>`_: Add
  ``trionics.maybe_open_context()``, an actor-scoped async multi-task
  context manager resource caching API.

  Adds an SC-safe caching async context manager API that only enters on
  the *first* task entry and only exits on the *last* task exit while in
  between delivering the same cached value per input key. Keys can be
  either an explicit ``key`` named arg provided by the user or a
  hashable ``kwargs`` dict (converted to a ``list[tuple]``) which is
  passed to the underlying manager function as input.
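
  A minimal usage sketch follows. The ``acm_func``/``kwargs`` parameter
  names and the yielded ``(cache_hit, value)`` pair are assumptions for
  illustration; only the keying and first-enter/last-exit semantics come
  from this entry:

  .. code-block:: python

      from contextlib import asynccontextmanager as acm

      import trio
      import tractor
      from tractor.trionics import maybe_open_context

      @acm
      async def open_connection(addr: str):
          # stand-in for some expensive, shareable resource
          print(f'connecting to {addr}')
          yield f'conn-to-{addr}'
          print(f'disconnecting from {addr}')

      async def worker(n: int):
          # param names and the yielded pair below are assumed spellings
          async with maybe_open_context(
              acm_func=open_connection,
              kwargs={'addr': 'example.com'},
          ) as (cache_hit, conn):
              # only the first task actually enters ``open_connection()``;
              # later tasks with the same kwargs key get the cached value
              # and teardown runs once the last task leaves.
              print(f'task {n}: cache_hit={cache_hit} conn={conn}')
              await trio.sleep(0.1)

      async def main():
          # the cache is actor-scoped, so run inside a (root) actor
          async with tractor.open_root_actor():
              async with trio.open_nursery() as tn:
                  for i in range(3):
                      tn.start_soon(worker, i)

      if __name__ == '__main__':
          trio.run(main)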

- `#261 <https://github.com/goodboy/tractor/issues/261>`_: Add
  cross-actor-task ``Context`` oriented error relay, a new stream
  overrun error-signal ``StreamOverrun``, and support for disabling
  ``MsgStream`` backpressure as the default before a stream is opened or
  by choice of the user.

  We added stricter semantics around ``tractor.Context.open_stream():``
  particularly to do with streams which are only opened at one end.
  Previously, if only one end opened a stream there was no way for that
  sender to know if msgs were being received until first, the feeder mem
  chan on the receiver side hit a backpressure state and then that
  condition delayed its msg loop processing task to eventually create
  backpressure on the associated IPC transport. This is non-ideal in the
  case where the receiver side never opened a stream by mistake since it
  results in silent blocking of the sender and no adherence to the
  underlying mem chan buffer size settings (which is still unsolved btw).

  To solve this we add non-backpressure style message pushing inside
  ``Actor._push_result()`` by default and only use the backpressure
  ``trio.MemorySendChannel.send()`` call **iff** the local end of the
  context has entered ``Context.open_stream():``. This way if the stream
  was never opened but the mem chan is overrun, we relay back to the
  sender a (new exception) ``StreamOverrun`` error which is raised in the
  sender's scope with a special error message about the stream never
  having been opened. Further, this behaviour (non-backpressure style
  where senders can expect an error on overruns) can now be enabled with
  ``.open_stream(backpressure=False)`` and the underlying mem chan size
  can be specified with a kwarg ``msg_buffer_size: int``; see the usage
  sketch at the end of this entry.

  Further bug fixes and enhancements in this changeset include:

  - fix a race we were ignoring where if the callee task opened a context
    it could enter ``Context.open_stream()`` before calling
    ``.started()``.
  - Disallow calling ``Context.started()`` more than once.
  - Enable ``Context`` linked tasks error relaying via the new
    ``Context._maybe_raise_from_remote_msg()`` which (for now) uses
    a simple ``trio.Nursery.start_soon()`` to raise the error via closure
    in the local scope.
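
  Usage sketch for the new kwargs: only ``backpressure``,
  ``msg_buffer_size`` and the overrun-error relay behaviour come from
  this entry; the fast-sender/slow-consumer bodies and the spawning
  boilerplate are made up for illustration:

  .. code-block:: python

      import trio
      import tractor

      @tractor.context
      async def fast_sender(ctx: tractor.Context) -> None:
          await ctx.started()
          async with ctx.open_stream() as stream:
              for i in range(1000):
                  await stream.send(i)

      async def main():
          # spawning boilerplate: the usual tractor pattern, not verbatim
          async with tractor.open_nursery() as an:
              portal = await an.start_actor(
                  'sender',
                  enable_modules=[__name__],
              )
              async with portal.open_context(fast_sender) as (ctx, _first):
                  # disabling backpressure on this end means a sender
                  # which overruns this end's (small) feeder mem chan
                  # gets a ``StreamOverrun`` relayed back instead of
                  # silently blocking on the transport.
                  async with ctx.open_stream(
                      backpressure=False,
                      msg_buffer_size=16,
                  ) as stream:
                      async for msg in stream:
                          await trio.sleep(0.01)  # deliberately slow consumer
              await portal.cancel_actor()

      if __name__ == '__main__':
          trio.run(main)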

- `#267 <https://github.com/goodboy/tractor/issues/267>`_: This
  (finally) adds fully acknowledged remote cancellation messaging
  support for both explicit ``Portal.cancel_actor()`` calls as well as
  when there are "runtime-wide" cancellations (eg. during KBI or
  general actor nursery exception handling which causes a full actor
  "crash"/termination).

  You can think of this as the most ideal case in 2-generals where the
  actor requesting the cancel of its child is able to always receive back
  the ACK to that request. This leads to a more deterministic shutdown of
  the child where the parent is able to wait for the child to fully
  respond to the request. On a localhost setup, where the parent can
  monitor the state of the child through process or other OS APIs instead
  of solely through IPC messaging, the parent can know whether or not the
  child decided to cancel with more certainty. In the case of separate
  hosts, we still rely on a simple timeout approach until such a time
  where we prefer to get "fancier".
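
  For reference, the explicit-cancel path looks roughly like the sketch
  below; ``Portal.cancel_actor()`` is the call this entry makes fully
  acknowledged, while the daemon child and the surrounding boilerplate
  are illustrative only:

  .. code-block:: python

      import trio
      import tractor

      async def main():
          async with tractor.open_nursery() as an:
              # a long lived, daemon style child with no main task
              portal = await an.start_actor('daemon_child')

              await trio.sleep(1)

              # explicit cancel request; per this change the call now
              # waits for the child's ACK so shutdown ordering is
              # deterministic (with a timeout fallback for remote hosts).
              await portal.cancel_actor()

      if __name__ == '__main__':
          trio.run(main)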
- `#271 <https://github.com/pytest-dev/pluggy/issues/271>`_: Add a per
|
|
|
|
|
actor ``debug_mode: bool`` control to our nursery.
|
|
|
|
|
|
|
|
|
|
This allows spawning actors via ``ActorNursery.start_actor()`` (and
|
|
|
|
|
other dependent methods) with a ``debug_mode=True`` flag much like
|
|
|
|
|
``tractor.open_nursery():`` such that per process crash handling
|
|
|
|
|
can be toggled for cases where a user does not need/want all child actors
|
|
|
|
|
to drop into the debugger on error. This is often useful when you have
|
|
|
|
|
actor-tasks which are expected to error often (and be re-run) but want
|
|
|
|
|
to specifically interact with some (problematic) child.
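
  A short sketch of the per-actor toggle; the actor names are made up
  and only the ``debug_mode=True`` kwarg on ``start_actor()`` is what
  this entry adds:

  .. code-block:: python

      import trio
      import tractor

      async def main():
          # note: debug mode is *not* enabled nursery-wide here
          async with tractor.open_nursery() as an:
              # only this child drops into the debugger on crash
              flaky = await an.start_actor(
                  'flaky_child',
                  debug_mode=True,
              )
              # this child crashes/terminates without entering the debugger
              stable = await an.start_actor('stable_child')

              await trio.sleep(1)
              for portal in (flaky, stable):
                  await portal.cancel_actor()

      if __name__ == '__main__':
          trio.run(main)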

Bugfixes
--------

- `#239 <https://github.com/goodboy/tractor/issues/239>`_: Fix
  keyboard interrupt handling in ``Portal.open_context()`` blocks.

  Previously this was not triggering cancellation of the remote task
  context and could result in hangs if a stream was also opened. The fix
  is to accept ``BaseException`` since it is likely any other top level
  exception other than KBI (even though not expected) should also get
  this result.
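
  The gist is the classic "catch ``BaseException``, not just
  ``Exception``" pattern on the exit path; a generic sketch of that
  pattern (with a hypothetical ``start_remote_task()`` helper, not the
  actual ``tractor`` internals):

  .. code-block:: python

      from contextlib import asynccontextmanager

      @asynccontextmanager
      async def open_remote_context(portal):
          ctx = await portal.start_remote_task()  # hypothetical setup step
          try:
              yield ctx
          except BaseException:
              # KeyboardInterrupt (and trio.Cancelled) derive from
              # BaseException, *not* Exception, so a bare
              # ``except Exception`` here would skip the remote cancel
              # and leave the far end hung.
              await ctx.cancel()
              raise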

- `#264 <https://github.com/goodboy/tractor/issues/264>`_: Fix
  ``Portal.run_in_actor()`` returning a ``None`` result.

  ``None`` was being used as the cached result flag and obviously breaks
  on a ``None`` returned from the remote target task. This would cause an
  infinite hang if user code ever called ``Portal.result()`` *before* the
  nursery exit. The simple fix is to use the *return message* as the
  initial "no-result-received-yet" flag value and, once received, the
  return value is read from the message to avoid the cache logic error.
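
  The underlying pattern is the usual "don't use ``None`` as a
  not-set-yet sentinel when ``None`` is a legal value"; a generic sketch
  of that flag style (illustrative names only, not the actual ``Portal``
  internals):

  .. code-block:: python

      from typing import Any

      _NO_RESULT = object()  # unique sentinel, can never be a real result

      class ResultCache:
          def __init__(self) -> None:
              self._result: Any = _NO_RESULT

          def set(self, value: Any) -> None:
              self._result = value

          def ready(self) -> bool:
              # a ``None`` result no longer looks like "nothing received yet"
              return self._result is not _NO_RESULT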

- `#266 <https://github.com/goodboy/tractor/issues/266>`_: Fix
  graceful cancellation of daemon actors.

  Previously, this was a bug where if the soft wait on a sub-process (the
  ``await .proc.wait()``) in the reaper task teardown was cancelled we
  would fail over to the hard reaping sequence (meant for culling off any
  potential zombies via system kill signals). The hard reap has a timeout
  of 3s (currently though in theory we could make it shorter?) before
  system signalling kicks in. This means that any daemon actor still
  running during nursery exit would get hard reaped (3s later) instead of
  cancelled via IPC message. Now we catch the ``trio.Cancelled``, call
  ``Portal.cancel_actor()`` on the daemon and expect the child to
  self-terminate after the runtime cancels and shuts down the process.
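
  The reaper-side shape roughly corresponds to the following sketch (a
  simplification under assumed names, not the actual spawning-machinery
  code):

  .. code-block:: python

      import trio

      async def soft_wait_then_cancel(proc, portal) -> None:
          '''Soft-wait on a child process; if *this* task is cancelled
          first, politely request the child cancel itself over IPC
          instead of falling straight through to the hard (signal based)
          reap.
          '''
          try:
              await proc.wait()
          except trio.Cancelled:
              # shield the IPC cancel request so it isn't itself
              # cancelled away before the child can ACK it.
              with trio.CancelScope(shield=True):
                  await portal.cancel_actor()
              raise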

- `#278 <https://github.com/goodboy/tractor/issues/278>`_: Repair
  inter-actor stream closure semantics to work correctly with
  ``tractor.trionics.BroadcastReceiver`` task fan out usage.

  A set of previously unknown bugs discovered in `#257
  <https://github.com/goodboy/tractor/pull/257>`_ let graceful stream
  closure result in hanging consumer tasks that use the broadcast APIs.
  This adds better internal closure state tracking to the broadcast
  receiver and message stream APIs and in particular ensures that when an
  underlying stream/receive-channel (a broadcast receiver is receiving
  from) is closed, all consumer tasks waiting on that underlying channel
  are woken so they can receive the ``trio.EndOfChannel`` signal and
  promptly terminate.
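
  The fan out shape this repairs looks roughly like the sketch below;
  the ``broadcast_receiver()`` constructor and the per-consumer
  ``.subscribe()`` call are assumed spellings of the
  ``BroadcastReceiver`` API named above:

  .. code-block:: python

      import trio
      from tractor.trionics import broadcast_receiver  # assumed import path

      async def consumer(name: str, bcast) -> None:
          # each consumer gets its own subscription; when the underlying
          # receive channel closes, every waiting consumer is now woken
          # (ending the ``async for``) instead of hanging forever.
          async with bcast.subscribe() as rx:  # assumed spelling
              async for value in rx:
                  print(f'{name} got {value}')
          print(f'{name} done')

      async def main():
          tx, rx = trio.open_memory_channel(16)
          bcast = broadcast_receiver(rx, 16)

          async with trio.open_nursery() as n:
              for name in ('a', 'b', 'c'):
                  n.start_soon(consumer, name, bcast)

              async with tx:
                  for i in range(5):
                      await tx.send(i)
              # closing ``tx`` propagates end-of-channel to all consumers

      trio.run(main)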

tractor 0.1.0a3 (2021-11-02)
============================

@@ -55,6 +225,13 @@ Features
for the time being. (#248)

Misc
----

- #243 add a distinct ``'CANCEL'`` log level to allow the runtime to
  emit details about cancellation machinery statuses.

tractor 0.1.0a2 (2021-09-07)
============================