Merge pull request #291 from goodboy/drop_old_nooz_files

Drop old fragments that `towncrier` somehow missed

commit d27bdbd40e
@@ -1,28 +0,0 @@
Add "infected ``asyncio`` mode"; a sub-system to spawn and control
``asyncio`` actors using ``trio``'s guest-mode.

This gets us the following very interesting functionality:

- ability to spawn an actor that has a process entry point of
  ``asyncio.run()`` by passing ``infect_asyncio=True`` to
  ``Portal.start_actor()`` (and friends).
- the ``asyncio`` actor embeds ``trio`` using guest-mode and starts
  a main ``trio`` task which runs the ``tractor.Actor._async_main()``
  entry point and engages all the normal ``tractor`` runtime IPC/messaging
  machinery; for all purposes the actor is now running normally on
  a ``trio.run()``.
- the actor can now make one-to-one task spawning requests to the
  underlying ``asyncio`` event loop using either of:

  * ``to_asyncio.run_task()`` to spawn and run an ``asyncio`` task to
    completion and block until a return value is delivered.
  * ``async with to_asyncio.open_channel_from():`` which spawns a task
    and hands it a pair of "memory channels" to allow for bi-directional
    streaming between the now SC-linked ``trio`` and ``asyncio`` tasks.

The output from any call(s) to ``asyncio`` can be handled as normal in
``trio``/``tractor`` task operation with the caveat of the overhead due
to guest-mode use.

For more details see the `original PR
<https://github.com/goodboy/tractor/pull/121>`_ and `issue
<https://github.com/goodboy/tractor/issues/120>`_.
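As a rough usage sketch (the function and actor names here are invented,
and the exact ``to_asyncio.run_task()`` signature should be checked
against the API docs; the ``infect_asyncio=True`` flag is as described
above)::

    import asyncio

    import trio
    import tractor
    from tractor import to_asyncio


    async def aio_sleep_echo(msg: str) -> str:
        # runs on the ``asyncio`` side of the infected actor
        await asyncio.sleep(0.1)
        return msg


    async def trio_entry() -> str:
        # ``trio``-side task requesting a one-shot task on the
        # actor's underlying ``asyncio`` event loop
        return await to_asyncio.run_task(aio_sleep_echo, msg='hello')


    async def main():
        async with tractor.open_nursery() as n:
            portal = await n.start_actor(
                'aio_actor',
                infect_asyncio=True,  # process entry point is ``asyncio.run()``
                enable_modules=[__name__],
            )
            print(await portal.run(trio_entry))
            await portal.cancel_actor()


    trio.run(main)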
@@ -1,6 +0,0 @@
Fix keyboard interrupt handling in ``Portal.open_context()`` blocks.

Previously this was not triggering cancellation of the remote task
context and could result in hangs if a stream was also opened. The fix
is to accept ``BaseException`` since it is likely that any top level
exception other than a KBI (even though not expected) should also get
this result.
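The kind of caller-side block this fix affects looks roughly like the
following (a sketch; ``echo_ctx`` is an invented function, while
``open_context()``/``ctx.started()`` follow tractor's context-streaming
API)::

    import tractor


    @tractor.context
    async def echo_ctx(ctx: tractor.Context) -> None:
        # remote side: signal readiness then echo forever
        await ctx.started()
        async with ctx.open_stream() as stream:
            async for msg in stream:
                await stream.send(msg)


    async def caller(portal: tractor.Portal) -> None:
        async with portal.open_context(echo_ctx) as (ctx, first):
            async with ctx.open_stream() as stream:
                # a KeyboardInterrupt (or any ``BaseException``)
                # raised in this block now cancels the remote task
                # context instead of leaving both sides hung on the
                # open stream.
                await stream.send('ping')
                print(await stream.receive())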
@@ -1,9 +0,0 @@
Add a custom 'CANCEL' log level and use it throughout the runtime.

In order to reduce log messages and also start toying with the idea of
"application layer" oriented tracing, we added this new level just above
'runtime' but just below 'info'. It is intended to be used solely for
cancellation and teardown related messages. Included are some small
overrides to the stdlib's ``logging.LoggerAdapter`` to pass through the
correct stack frame to show when one of the custom level methods is
used.
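The general stdlib pattern looks roughly like this (a simplified
sketch, not tractor's actual implementation; the numeric level value
and adapter name are illustrative only)::

    import logging

    # pick a value between the 'runtime' and 'info' levels; the
    # exact number tractor uses may differ
    CANCEL = 16
    logging.addLevelName(CANCEL, 'CANCEL')


    class CancelAdapter(logging.LoggerAdapter):
        def cancel(self, msg: str, *args, **kwargs) -> None:
            if self.isEnabledFor(CANCEL):
                # bump ``stacklevel`` so the emitted record reports
                # the *caller's* frame rather than this adapter
                # method (the exact offset may need tuning)
                kwargs.setdefault('stacklevel', 3)
                self.logger.log(CANCEL, msg, *args, **kwargs)


    log = CancelAdapter(logging.getLogger('runtime'), {})
    log.cancel('cancelling daemon actor..')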
@@ -1,9 +0,0 @@
Add ``trionics.maybe_open_context()``, an actor-scoped async multi-task
context manager resource caching API.

Adds an SC-safe caching async context manager API that only enters on
the *first* task entry and only exits on the *last* task exit while in
between delivering the same cached value per input key. Keys can be
either an explicit ``key`` named arg provided by the user or a
hashable ``kwargs`` dict (which will be converted to a ``list[tuple]``)
that is passed to the underlying manager function as input.
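A usage sketch (the ``open_client`` manager is invented; the
``acm_func`` kwarg name and the yielded ``(cache_hit, value)`` pair are
assumptions to verify against the actual API, and since the cache is
actor-scoped this presumably runs inside a live actor)::

    from contextlib import asynccontextmanager as acm

    import trio
    from tractor import trionics


    @acm
    async def open_client(host: str, port: int):
        # hypothetical expensive-to-acquire resource
        yield (host, port)


    async def worker() -> None:
        async with trionics.maybe_open_context(
            acm_func=open_client,
            kwargs={'host': 'localhost', 'port': 3476},
        ) as (cache_hit, client):
            # all tasks keyed on the same ``kwargs`` receive the
            # *same* ``client``; only the first entrant actually runs
            # ``open_client()`` and only the last exit tears it down
            await trio.sleep(0.1)


    async def main():
        async with trio.open_nursery() as n:
            for _ in range(4):
                n.start_soon(worker)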
@@ -1,8 +0,0 @@
Fix ``Portal.run_in_actor()`` returning a ``None`` result.

``None`` was being used as the cached result flag, which obviously
breaks on a ``None`` returned from the remote target task. This would
cause an infinite hang if user code ever called ``Portal.result()``
*before* the nursery exit. The simple fix is to use the *return
message* as the initial "no-result-received-yet" flag value and, once
received, the return value is read from the message to avoid the cache
logic error.
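The underlying pattern is the classic use-a-sentinel-instead-of-``None``
fix; in generic form (this is not tractor's actual code, just the idea,
with the *return message* playing the role of the sentinel in the real
fix)::

    # module-private sentinel which can never collide with a real
    # return value, unlike ``None``
    _NO_RESULT = object()


    class ResultCache:
        def __init__(self) -> None:
            self._result = _NO_RESULT

        def set(self, value) -> None:
            self._result = value

        def get(self):
            # ``None`` is now a perfectly valid cached result
            if self._result is _NO_RESULT:
                raise RuntimeError('no result received yet')
            return self._result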
@@ -1,12 +0,0 @@
Fix graceful cancellation of daemon actors.

Previously there was a bug where, if the soft wait on a sub-process
(the ``await .proc.wait()``) in the reaper task teardown was cancelled,
we would fail over to the hard reaping sequence (meant for culling off
any potential zombies via system kill signals). The hard reap has a
timeout of 3s (currently, though in theory we could make it shorter?)
before system signalling kicks in. This means that any daemon actor
still running during nursery exit would get hard reaped (3s later)
instead of cancelled via IPC message. Now we catch the
``trio.Cancelled``, call ``Portal.cancel_actor()`` on the daemon and
expect the child to self-terminate after the runtime cancels and shuts
down the process.
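In outline, the described flow looks something like this (a simplified
sketch, not the actual runtime code; ``proc`` and ``portal`` stand in
for the real spawn-machinery objects)::

    import trio


    async def soft_wait_then_cancel(proc, portal) -> None:
        try:
            # "soft" wait: let the child exit on its own
            await proc.wait()
        except trio.Cancelled:
            # cancelled while waiting (e.g. nursery exit with a
            # still-running daemon): ask the child to self-terminate
            # over IPC instead of falling through to the hard reap
            # (kill signals after a ~3s timeout). The shield lets
            # the cancel request go out even though we are being
            # cancelled.
            with trio.CancelScope(shield=True):
                await portal.cancel_actor()
            raise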
@@ -1,9 +0,0 @@
Add a per-actor ``debug_mode: bool`` control to our nursery.

This allows spawning actors via ``ActorNursery.start_actor()`` (and
other dependent methods) with a ``debug_mode=True`` flag, much like
``tractor.open_nursery()``, such that per-process crash handling
can be toggled for cases where a user does not need/want all child
actors to drop into the debugger on error. This is often useful when
you have actor-tasks which are expected to error often (and be re-run)
but you want to specifically interact with some (problematic) child.
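For example (the actor names are invented; the per-actor
``debug_mode`` kwarg is the new flag described above)::

    import tractor


    async def main():
        # tree-wide crash handling enabled as before..
        async with tractor.open_nursery(debug_mode=True) as n:
            # ..this child will drop into the debugger on error..
            await n.start_actor('fragile_service', debug_mode=True)
            # ..while this frequently-failing worker will not.
            await n.start_actor('flaky_worker', debug_mode=False)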
@@ -1,12 +0,0 @@
Repair inter-actor stream closure semantics to work correctly with
``tractor.trionics.BroadcastReceiver`` task fan-out usage.

A set of previously unknown bugs discovered in `257
<https://github.com/goodboy/tractor/pull/257>`_ let graceful stream
closure result in hanging consumer tasks that use the broadcast APIs.
This adds better internal closure state tracking to the broadcast
receiver and message stream APIs and in particular ensures that when an
underlying stream/receive-channel (which a broadcast receiver is
receiving from) is closed, all consumer tasks waiting on that
underlying channel are woken so they can receive the
``trio.EndOfChannel`` signal and promptly terminate.
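Consumer-side usage that this repair makes safe looks roughly like the
following (a sketch only; it assumes a ``stream.subscribe()``-style
broadcast API handing each task a ``BroadcastReceiver`` clone)::

    import trio


    async def consumer(stream) -> None:
        # each task gets its own broadcast-receiver clone of the
        # underlying inter-actor stream
        async with stream.subscribe() as bcast:
            while True:
                try:
                    msg = await bcast.receive()
                except trio.EndOfChannel:
                    # underlying channel closed: all waiting
                    # consumers are now woken so they can exit
                    # promptly instead of hanging
                    break
                print(f'got {msg}')


    async def fan_out(stream, nursery, ntasks: int = 3) -> None:
        # fan the same stream out to multiple consumer tasks
        for _ in range(ntasks):
            nursery.start_soon(consumer, stream)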