commit d280c26f15 (forked from goodboy/tractor)
There's no point in sending a cancel message to the remote linked task, and especially no reason to block waiting on a result from that task, if the transport layer is detected to be disconnected. We expect that the transport shouldn't go down at the layer of the message loop (reconnection logic should be handled in the transport layer itself), so if we detect the channel is not connected we don't bother requesting cancels nor waiting on a final result message. Why?

- If the connection goes down in error, the caller side has no way to know "how long" it should block waiting for a cancel ack or result, which can cause a hang that may require an additional ctrl-c from the user, especially if using the debugger or if the traceback is not seen on console.
- Obviously there's no point in waiting for messages when there's no transport to deliver them XD

Further, add some more detailed cancel logging detailing the task and actor ids.
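For concreteness, here is a minimal sketch of the guard this commit describes. It is not tractor's actual implementation: the `Channel` stub, the `cancel_remote_task` helper, and the message shape (the `'cmd'`, `'cid'`, and `'result'` keys) are hypothetical stand-ins, though tractor's real channel type does expose a similar connectedness check.

```python
import logging
from typing import Any

log = logging.getLogger(__name__)


class Channel:
    """Hypothetical stand-in for the IPC transport channel type."""

    def connected(self) -> bool:
        """Report whether the underlying transport is still up."""
        raise NotImplementedError

    async def send(self, msg: dict[str, Any]) -> None:
        raise NotImplementedError

    async def recv(self) -> dict[str, Any]:
        raise NotImplementedError


async def cancel_remote_task(
    chan: Channel,
    cid: str,  # id of the remote task to cancel (hypothetical field)
    actor_uid: tuple[str, str],  # (name, uuid) of the remote actor
) -> bool:
    """Request cancellation of remote task ``cid`` over ``chan`` and
    block for its final result, skipping both steps entirely when the
    transport has already dropped. Returns True iff a cancel was sent.
    """
    if not chan.connected():
        # Transport is down: the cancel msg can't be delivered, and
        # there is no sane bound on how long to block for an ack or
        # result, so bail early instead of risking a hang that needs
        # a user ctrl-c to break out of.
        log.warning(
            "may not cancel task %s on actor %s: transport disconnected",
            cid, actor_uid,
        )
        return False

    # more detailed cancel logging including task and actor ids
    log.debug("cancelling task %s on actor %s", cid, actor_uid)
    await chan.send({'cmd': 'cancel', 'cid': cid})

    # only block for the final result/ack while the channel stays up
    while chan.connected():
        msg = await chan.recv()
        if msg.get('cid') == cid and 'result' in msg:
            return True
    return False
```

The key design point is that the disconnect check happens before both the send and the result wait, so a dead transport short-circuits the whole cancel protocol rather than leaving the caller blocked on messages that can never arrive.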
experimental/
testing/
trionics/
__init__.py
_actor.py
_child.py
_clustering.py
_debug.py
_discovery.py
_entry.py
_exceptions.py
_forkserver_override.py
_ipc.py
_mp_fixup_main.py
_portal.py
_root.py
_spawn.py
_state.py
_streaming.py
_supervise.py
log.py
msg.py
to_asyncio.py