Commit Graph

1904 Commits (7859e743cc8e0799b1719c610d663a92afd0449d)

Author SHA1 Message Date
Tyler Goodlet c262b1a3e8 Always cancel the asyncio task? 2021-12-17 09:38:04 -05:00
Tyler Goodlet d9dac3f36c Drop old implementation cruft 2021-12-17 09:38:04 -05:00
Tyler Goodlet 325c0cdb1b Fix error propagation on asyncio streaming tasks 2021-12-17 09:38:04 -05:00
Tyler Goodlet 55e210fec6 Drop bad .close() call 2021-12-17 09:38:04 -05:00
Tyler Goodlet aa24bbc11c Proxy asyncio cancelleds as well 2021-12-17 09:38:04 -05:00
Tyler Goodlet 793bcfb7d4 Pass `infect_asyncio` flag to mp actors as well 2021-12-17 09:38:04 -05:00
Tyler Goodlet d80f8d7a39 WIP redo asyncio async gen streaming 2021-12-17 09:38:04 -05:00
Tyler Goodlet 340effae11 Add initial infected asyncio error propagation test 2021-12-17 09:38:01 -05:00
Tyler Goodlet 509ae132ec Raise any asyncio errors if in trio task on cancel 2021-12-17 09:38:01 -05:00
Tyler Goodlet 80f47dece2 Raise from asyncio error; fixes mypy 2021-12-17 09:38:01 -05:00
Tyler Goodlet 2cf87146a3 Log any asyncio error 2021-12-17 09:38:01 -05:00
Tyler Goodlet 8070b16bd0 Support asyncio actors with the trio spawner backend 2021-12-17 09:38:01 -05:00
Tyler Goodlet 1406ddc5ee Add `infect_asyncio: bool` flag to nursery methods 2021-12-17 09:37:41 -05:00
Tyler Goodlet 055788cf16 Attempt to make mypy happy.. 2021-12-17 09:19:23 -05:00
Tyler Goodlet 1825b21d2c Wow, fix all the broken async func invoking code..
Clearly this wasn't developed against a task that spawned just an async
func in `asyncio`.. Fix all that and remove a bunch of unnecessary func
layers. Add provisional support for the target receiving the `to_trio`
and `from_trio` channels and for the `@tractor.stream` marker.
2021-12-17 09:19:23 -05:00
Tyler Goodlet acd63d0c89 First draft "infected `asyncio` mode"
This should mostly maintain top level SC principles for any task spawned
using `tractor.to_asyncio.run()`. When the `asyncio` task completes make
sure to cancel the pertaining `trio` cancel scope and raise any error
that may have resulted. This interface uses `trio`'s "guest-mode" to run
the `asyncio` loop using a special entrypoint which is handed to Python
during process spawn.
2021-12-17 09:17:59 -05:00
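
The guest-mode mechanism referenced above can be sketched standalone. Below is a minimal, illustrative example (not tractor's actual entrypoint) where an already running `asyncio` loop hosts a `trio` run via `trio.lowlevel.start_guest_run()`; the `trio_main` body is a placeholder:

```python
import asyncio

import trio


async def trio_main() -> str:
    # stand-in for the actual actor-runtime entrypoint
    await trio.sleep(0.1)
    return 'hello from trio, running as a guest'


async def asyncio_main() -> None:
    loop = asyncio.get_running_loop()
    trio_done: asyncio.Future = loop.create_future()

    # start a trio run "inside" the already running asyncio loop
    trio.lowlevel.start_guest_run(
        trio_main,
        run_sync_soon_threadsafe=loop.call_soon_threadsafe,
        done_callback=trio_done.set_result,
    )

    # the done callback receives an `outcome.Outcome`; `.unwrap()`
    # returns the result or re-raises any error from the trio side.
    print((await trio_done).unwrap())


asyncio.run(asyncio_main())
```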
goodboy cdf1f8c2f7
Merge pull request #276 from goodboy/expected_ctx_cancelled
Expected ctx cancelled should not override a source error
2021-12-17 08:08:18 -05:00
Tyler Goodlet 8eff788d2d Pin to previous `trio_typing` release 2021-12-16 19:59:10 -05:00
Tyler Goodlet 916e27eedc Adjust cancelled test to expect raised overrun error 2021-12-16 19:59:10 -05:00
Tyler Goodlet 98a830ccba Drop cancel traceback capture; don't seem to need it? 2021-12-16 19:59:10 -05:00
Tyler Goodlet 8c004c1f36 Add an explicit messaging error for reporting an illegal context transaction 2021-12-16 19:59:10 -05:00
Tyler Goodlet e2139c2bf0 Don't set `Context._error` to expected `ContextCancelled`
If one side of an inter-actor context cancels the other, the cancelling
side should always expect a `ContextCancelled` message back. However, we
should not set this error on the context in that case (where the cancel
request was sent and a `ContextCancelled` msg was received back) since
it may override some other error that caused the cancellation request to
be sent out in the first place. For example, when a context opens
another context to a peer and an error occurs which causes the second
peer context to be cancelled, we still want to propagate the original error.

Fixes the issue found in https://github.com/pikers/piker/issues/244
2021-12-16 19:59:10 -05:00
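
A hypothetical sketch of the check described in this commit; the class and field names below are illustrative stand-ins, not necessarily tractor's real internals:

```python
from dataclasses import dataclass
from typing import Optional


class ContextCancelled(Exception):
    '''Stand-in for tractor's remote-cancellation error type.'''


@dataclass
class Context:
    # illustrative fields only; tractor's real `Context` differs
    _cancel_called: bool = False           # did *we* send the cancel request?
    _error: Optional[BaseException] = None

    def _maybe_set_remote_error(self, error: BaseException) -> None:
        if isinstance(error, ContextCancelled) and self._cancel_called:
            # this is just the expected ack of our own cancel request:
            # don't let it override the source error that triggered
            # that request in the first place.
            return
        self._error = error
```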
Tyler Goodlet 9650b010de Add a test for the real issue: error overriding
The underlying issue is actually that a nested `Context` which was
cancelled was overriding the local error that triggered that secondary
context's cancellation in the first place XD. This test catches that
case.

Relates to https://github.com/pikers/piker/issues/244
2021-12-16 19:59:10 -05:00
Tyler Goodlet 5d424e3703 Hide the key error tb on remote starting errors 2021-12-16 19:59:10 -05:00
Tyler Goodlet c38d0f826e Add a test where an unserializable value causes an error before `started` 2021-12-16 19:59:10 -05:00
goodboy 4001d2c3fc
Merge pull request #257 from goodboy/context_caching
Add `maybe_open_context()` an actor wide task-resource cache
2021-12-16 19:55:14 -05:00
Tyler Goodlet 953d15b67d Add nooz 2021-12-16 18:02:03 -05:00
Tyler Goodlet da5e36bf0c Revert back to avoiding key errors on cancellation 2021-12-16 18:02:03 -05:00
Tyler Goodlet 21a9c47496 Parameterize over cache keying methods: kwargs and "key" 2021-12-16 18:02:03 -05:00
Tyler Goodlet 67dc0d014c Add basic `maybe_open_context()` caching test 2021-12-16 18:02:03 -05:00
Tyler Goodlet 9b1d8bf7b0 Of course, increase the timeout for windows.. 2021-12-16 18:02:03 -05:00
Tyler Goodlet 26394dd8df Type annot fixes 2021-12-16 18:02:03 -05:00
Tyler Goodlet 11e64426f6 Wake all sleeping consumers on bcaster closure 2021-12-16 18:02:03 -05:00
Tyler Goodlet 213447008b Add draft code for waiting on all nurseries in root 2021-12-16 18:02:03 -05:00
Tyler Goodlet f617da6ff1 Add timeout around test and prints for guidance 2021-12-16 18:02:03 -05:00
Tyler Goodlet 52627a6326 Rework interface: pass func and kwargs
After more extensive testing I realized that keying on the context
manager *instance id* isn't going to work since each entering task is
going to create a unique key XD

Instead pass the manager function as `acm_func` and optionally allow
keying the resource on the passed `kwargs` (if hashable) or a
`key: str`. Further, pass the key to the entering task and avoid
a separate keying scheme for the manager versus the value it delivers.
Don't bother with checking and releasing the lock in the `finally:`
block; it should be an error if it's still locked.
2021-12-16 18:02:03 -05:00
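
A hedged usage sketch of the reworked interface; the `tractor.trionics` import path, the `(cache_hit, value)` return shape, and the root-actor wrapping are assumptions based on the commit message and later releases, not guaranteed to match this exact revision:

```python
from contextlib import asynccontextmanager as acm

import trio
import tractor


@acm
async def open_feed(url: str):
    # stand-in for an expensive, shareable resource (e.g. a data feed)
    yield {'url': url}


async def user_task(name: str):
    # tasks passing the same `acm_func` + `kwargs` should share one
    # underlying entry into `open_feed()`; later entrants get a cache hit.
    async with tractor.trionics.maybe_open_context(
        acm_func=open_feed,
        kwargs={'url': 'wss://example.org/feed'},
    ) as (cache_hit, feed):
        print(name, 'cache hit:', cache_hit, feed['url'])
        await trio.sleep(0.1)


async def main():
    # assumption: run inside a root actor so the cache has an actor
    # runtime to anchor to; may not be strictly required.
    async with tractor.open_root_actor():
        async with trio.open_nursery() as n:
            for i in range(3):
                n.start_soon(user_task, f'task-{i}')


trio.run(main)
```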
Tyler Goodlet 3826bc9972 Don't catch key errors from the yielded to scope 2021-12-16 18:02:03 -05:00
Tyler Goodlet b210278e2f Naming change `cache` -> `_Cache` 2021-12-16 18:02:03 -05:00
Tyler Goodlet 4a0252baf2 Add task-cached stream test 2021-12-16 18:02:03 -05:00
Tyler Goodlet ac22b4a875 Fix type annots in resource cacher internals 2021-12-16 18:02:03 -05:00
Tyler Goodlet 5f41dbf34f Add `maybe_open_context()` an actor wide task-resource cache 2021-12-16 18:02:03 -05:00
goodboy 2d6fbd5437
Merge pull request #278 from goodboy/end_of_channel_fixes
End of channel fixes for streams and broadcasting
2021-12-16 18:01:04 -05:00
Tyler Goodlet 325e550ff3 Add nooz 2021-12-16 17:30:18 -05:00
Tyler Goodlet b5d62909ff Pin to `mypy` 0.910
Avoids the issue noted in
https://github.com/python-trio/trio-typing/issues/50
to keep CI green.
2021-12-16 16:20:07 -05:00
Tyler Goodlet 57f2aca18c Set eoc on closure (again) 2021-12-16 16:19:15 -05:00
Tyler Goodlet 1652716574 Add timeout to streaming test 2021-12-16 16:19:09 -05:00
Tyler Goodlet f2ba961e81 Mark stream with EOC when stop message is received 2021-12-16 16:18:58 -05:00
Tyler Goodlet 79d63585b0 Add a multi-task fan out streaming test
This actually catches a lot of bugs related to stream termination and
``MsgStream.subscribe()`` usage where the underlying stream closes from
the producer side. When this passes, the broadcaster logic will have to
ensure non-lossy fan-out semantics and closure tracking.
2021-12-16 16:16:23 -05:00
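
A rough sketch of the fan-out pattern such a test exercises, loosely modeled on tractor's streaming examples; the actor/module setup details (`enable_modules`, `open_stream_from` arguments) are assumptions that may need adjusting for a given tractor version:

```python
import trio
import tractor


async def stream_data(seed: int):
    # remote task: a plain async generator streamed back to the parent
    for i in range(seed):
        yield i
        await trio.sleep(0.01)


async def main():
    async with tractor.open_nursery() as an:
        portal = await an.start_actor(
            'streamer',
            enable_modules=[__name__],
        )
        async with portal.open_stream_from(stream_data, seed=10) as stream:

            async def consume(name: str):
                # each local task gets its own broadcast receiver
                async with stream.subscribe() as bstream:
                    async for value in bstream:
                        print(name, 'got', value)

            async with trio.open_nursery() as tn:
                for i in range(3):
                    tn.start_soon(consume, f'sub-{i}')

        await portal.cancel_actor()


if __name__ == '__main__':
    trio.run(main)
```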
Tyler Goodlet 3deb1b91e6 Wake all broadcast consumers on EOC
Without this wakeup you can have tasks which re-enter `.receive()`
and get stuck waiting on the wakeup event indefinitely. Whenever
a ``trio.EndOfChannel`` arrives, we want to make sure all consumers
at least know about it and don't block. The previous behaviour was
basically a bug.

Add some state flags for tracking whether the broadcaster was either
cancelled or terminated via EOC, mostly for testing and debugging
purposes, though this info might be useful if we decide to offer
a `.statistics()`-like API in the future.
2021-12-16 16:16:14 -05:00
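
A toy, pure-`trio` illustration of the wakeup principle described above (not tractor's actual ``BroadcastReceiver``, and ignoring the non-lossy buffering concerns): on end-of-channel, every parked consumer is woken so it can observe EOC instead of blocking forever:

```python
import trio


class ToyBroadcaster:
    def __init__(self, recv: trio.MemoryReceiveChannel):
        self._recv = recv
        self._eoc = False
        self._wakeup = trio.Event()
        self._latest = None

    async def produce(self):
        try:
            async for value in self._recv:
                self._latest = value
                # wake currently parked consumers, then re-arm
                self._wakeup.set()
                self._wakeup = trio.Event()
        finally:
            self._eoc = True
            # crucial: also wake everyone on EOC, not just on new values
            self._wakeup.set()

    async def receive(self):
        if self._eoc:
            raise trio.EndOfChannel
        await self._wakeup.wait()
        if self._eoc:
            raise trio.EndOfChannel
        return self._latest


async def main():
    send, recv = trio.open_memory_channel(0)
    bcaster = ToyBroadcaster(recv)

    async def consumer(name: str):
        try:
            while True:
                print(name, 'got', await bcaster.receive())
        except trio.EndOfChannel:
            print(name, 'saw end of channel')

    async with trio.open_nursery() as n:
        n.start_soon(bcaster.produce)
        for i in range(3):
            n.start_soon(consumer, f'consumer-{i}')
        async with send:
            for i in range(3):
                await send.send(i)


trio.run(main)
```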
Tyler Goodlet 61e134dc5d Wake up consumers on end of channel as well 2021-12-16 16:15:54 -05:00