Parametrize our docs example test to include all (now fixed) examples
from the `README.rst`. The examples themselves have been corrected to
run but haven't yet been updated in the actual docs. Once #99 lands
these example scripts will be directly included in our documentation so
there will be no possibility of presenting incorrect examples to our
users! This technically fixes #108 even though the new examples aren't
going to be included directly in our docs until #99 lands.
Apply the fix from @chrizzFTD where we invoke the entry point using
module exec mode on a ``__main__.py`` and import ``test_example.main()``
from within that entry point script.
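A minimal sketch of that fix (all helper names here are hypothetical,
not tractor's actual test harness): the example code is written into a
throwaway package alongside a ``__main__.py`` which imports and calls
its ``main()``, then the package is executed with ``python -m``::

    import subprocess
    import sys
    from pathlib import Path
    from tempfile import TemporaryDirectory

    def run_example(example_code: str) -> subprocess.CompletedProcess:
        with TemporaryDirectory() as tmp:
            pkg = Path(tmp) / 'example_pkg'
            pkg.mkdir()
            (pkg / '__init__.py').write_text('')
            (pkg / 'test_example.py').write_text(example_code)
            # the entry point only imports and invokes the example's main()
            (pkg / '__main__.py').write_text(
                'from example_pkg.test_example import main\n'
                'main()\n'
            )
            # module exec mode: run the package with ``python -m``
            return subprocess.run(
                [sys.executable, '-m', 'example_pkg'],
                cwd=tmp,
                capture_output=True,
                timeout=60,
            )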
As per #98 we need tests for examples from the docs as they would be
run by a user copy-and-pasting the code. This adds a small system for
loading examples from an "examples/" directory and executing them in
a subprocess while checking the output. We can use this to also verify
end-to-end expected logging output on the std streams (e.g. logging on
stderr).
To expand this further we can parametrize the test list using the
contents of the examples directory instead of hardcoding the script
names as I've done here initially.
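Roughly (reusing the hypothetical ``run_example()`` helper from the
sketch above)::

    import glob
    import os.path
    import pytest

    @pytest.mark.parametrize(
        'script',
        glob.glob(os.path.join('examples', '*.py')),
    )
    def test_example(script):
        with open(script) as ex:
            proc = run_example(ex.read())
        # a clean exit and no tracebacks leaked to the std streams;
        # per-example expected log lines can be asserted on ``proc.stderr``
        assert proc.returncode == 0
        assert b'Traceback' not in proc.stderr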
Also, fix up the current readme examples to have the required/proper `if
__name__ == '__main__'` script guard.
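That is, every example should end with something like (assuming the
``tractor.run()`` entry point used by the readme examples)::

    # without this guard, ``spawn``/``forkserver`` children re-running
    # the module's top level on import will recursively spawn more
    # children
    if __name__ == '__main__':
        tractor.run(main)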
Add a `--spawn-backend` option which can be set to one of {'mp',
'trio_run_in_process'} which will either run the test suite using the
`multiprocessing` or `trio-run-in-process` backend respectively.
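A minimal ``conftest.py`` sketch of the option (the fixture wiring
shown is illustrative, not necessarily how the suite consumes it)::

    import pytest

    def pytest_addoption(parser):
        parser.addoption(
            '--spawn-backend',
            action='store',
            default='trio_run_in_process',
            help="one of {'mp', 'trio_run_in_process'}",
        )

    @pytest.fixture(scope='session')
    def spawn_backend(request) -> str:
        return request.config.getoption('--spawn-backend')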
Currently, trying to run both in the same session can result in hangs
seemingly due to a lack of cleanup of forkservers / resource trackers
from `multiprocessing`, which causes broken pipe errors on occasion (no
idea on the details).
For `test_cancellation.py::test_nested_multierrors`, use less nesting
when mp is used since it breaks if we push it too hard with the
whole recursive subprocess spawning thing...
It seems that mixing the two backends in the test suite results in
hangs due to lingering forkservers and resource trackers from
`multiprocessing`? Likely we'll need either 2 separate CI runs or some
way to be sure that these lingering servers are killed in between
tests.
Another step toward having a complete test for #89.
Subactor breadth still seems to cause the most havoc and is why I've
kept that value to just 2 for now.
Add a test to verify that `trio.MultiError`s are properly propagated up
a simple actor nursery tree. We don't have any exception marshalling
between processes (yet) so we can't validate much more than a simple
2-depth tree. This satisfies the final bullet in #43.
Note I've limited the number of subactors per layer to around 5 since
any more than this seems to break the `multiprocessing` forkserver;
zombie subprocesses seem to be blocking teardown somehow...
Also add a single depth fast fail test just to verify that it's the
nested spawning that triggers this forkserver bug.
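The rough shape of the propagation test (signatures here follow my
recollection of the era's ``tractor`` API; treat it as a sketch)::

    import pytest
    import trio
    import tractor

    async def assert_err():
        # every subactor simply errors out
        assert 0

    async def main():
        async with tractor.open_nursery() as nursery:
            # keep breadth small; more subactors per layer seems to
            # wedge the forkserver
            for i in range(3):
                await nursery.run_in_actor(f'errorer_{i}', assert_err)
        # nursery exit should re-raise the child failures together

    def test_multierror():
        with pytest.raises(trio.MultiError):
            tractor.run(main)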
Can't seem to get the `capfd` fixture to capture subprocess logging to
stderr even though the console report shows the log message as being
captured? Skipping the test on the forkserver method for now.
In combination with `.aclose()`-ing the async gen instance returned
from `Portal.run()` this demonstrates the python bug:
https://bugs.python.org/issue32526
I've commented out the line that triggers the bug for now; this case
provides motivation for adding our own `trio.abc.ReceiveChannel`
implementation to be used in place of the async gens returned from
`Portal.run()`, since the latter are **not** task safe.
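A minimal pure-``trio`` sketch of the failure mode (no ``tractor``
involved): one task iterating an async gen while another ``.aclose()``s
it::

    import trio

    async def stream():
        while True:
            yield 0
            await trio.sleep(0.1)

    async def main():
        agen = stream()

        async def consume():
            async for _ in agen:
                pass

        async with trio.open_nursery() as n:
            n.start_soon(consume)
            await trio.sleep(0.5)
            # raises ``RuntimeError: aclose(): asynchronous generator
            # is already running`` since ``consume()`` is parked inside
            # the gen's frame
            await agen.aclose()

    trio.run(main)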
- steal from `trio` and add a `tractor_test` decorator (see the sketch
after this list)
- use a random arbiter port to avoid conflicts with locally running
systems
- add all the (obviously) hilarious readme tests
- add a complex cancellation test which works with
`trio.move_on_after()`
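A rough sketch of the decorator, modeled on ``trio``'s own test
decorator (the ``arbiter_addr`` wiring is an assumption)::

    import random
    from functools import partial, wraps
    import tractor

    def tractor_test(fn):
        '''Run an async test func through ``tractor.run()``.'''
        @wraps(fn)
        def wrapper(*args, **kwargs):
            # random arbiter port to avoid clashing with locally
            # running systems
            port = random.randint(1024, 65535)
            return tractor.run(
                partial(fn, *args, **kwargs),
                arbiter_addr=('127.0.0.1', port),
            )
        return wrapper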
Remove all the `piker` stuff and add some further checks (sketched
below) including:
- main task result is returned correctly
- remote errors are raised locally
- remote async generator yields values locally
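The rough shape of those checks (function and actor names invented,
API per that era's ``tractor``)::

    import pytest
    import tractor

    async def add(x, y):
        return x + y

    async def boom():
        raise ValueError('remote boom')

    async def main():
        async with tractor.open_nursery() as nursery:
            # main task result is returned correctly
            portal = await nursery.run_in_actor('adder', add, x=1, y=2)
            assert await portal.result() == 3

    def test_result_returned():
        tractor.run(main)

    def test_remote_error_raised_locally():
        async def errmain():
            async with tractor.open_nursery() as nursery:
                await nursery.run_in_actor('boomer', boom)

        # remote errors come home as ``RemoteActorError``s
        with pytest.raises(tractor.RemoteActorError):
            tractor.run(errmain)

    # the async gen check simply ``async for``s over the gen returned
    # from ``Portal.run()`` (see the streaming commit above)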