
tractor: next-gen Python parallelism

tractor is a structured concurrent "actor model" built on trio and multi-processing.

We pair structured concurrency with true multi-core Python parallelism; the aim is to be the multi-processing framework you always wanted.

The first step to grokking tractor is to get the basics of trio down. A great place to start is the trio docs and this blog post.
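
If you've never used trio before, the core idea to internalize is the nursery: tasks are only ever spawned inside a scope which won't exit until they're done. Here's a minimal, plain-trio warm-up (nothing tractor specific in it) just to show the shape:

import trio


async def greet(name: str, delay: float):
    # a toy task: sleep, then report
    await trio.sleep(delay)
    print(f"hi from '{name}' after {delay}s")


async def main():
    # the nursery block only exits once every child task has finished
    async with trio.open_nursery() as nursery:
        nursery.start_soon(greet, 'fast', 0.1)
        nursery.start_soon(greet, 'slow', 0.5)

    print('all tasks done, nursery closed')


if __name__ == '__main__':
    trio.run(main)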

Features

  • It's just a trio API
  • Infinitely nestable process trees
  • Built-in APIs for inter-process streaming (see the short sketch just after this list)
  • A (first ever?) "native" multi-core debugger for Python using pdb++
  • (Soon to land) asyncio support allowing for "infected" actors where trio drives the asyncio scheduler via the astounding "guest mode"
  • (Soon to land) typed messaging protocols (ex. via msgspec)
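
To give a rough feel for the streaming bullet above, here's a compressed sketch written in the same style as the debugging example further below. It assumes (as that example appears to) that handing an async generator function to Portal.run() returns a stream you can async-iterate over, and that Portal.cancel_actor() is how you tear down a daemon actor; treat both as assumptions of this sketch, not the definitive API:

import tractor
import trio


async def stream_squares():
    # an async generator whose values are shipped back over IPC
    for i in range(5):
        yield i ** 2
        await trio.sleep(0.1)


async def main():
    async with tractor.open_nursery() as n:

        portal = await n.start_actor('streamer', enable_modules=[__name__])

        # ASSUMPTION: the object returned for an async gen func is
        # directly async-iterable, mirroring the ``stream`` returned
        # in the debugging example below
        async for value in await portal.run(stream_squares):
            print(f'got {value}')

        # ASSUMPTION: explicitly cancel the daemon actor when done
        await portal.cancel_actor()


if __name__ == '__main__':
    trio.run(main)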

Zombie safe: self-destruct a process tree

tractor (attempts to) protect you from zombies, no matter what.

"""
Run with a process monitor from a terminal using::

    $TERM -e watch -n 0.1  "pstree -a $$" \
        & python examples/parallelism/we_are_processes.py \
        && kill $!

"""
from multiprocessing import cpu_count
import os

import tractor
import trio


async def target():
    print(
        f"Yo, i'm '{tractor.current_actor().name}' "
        f"running in pid {os.getpid()}"
    )

    await trio.sleep_forever()


async def main():

    async with tractor.open_nursery() as n:

        for i in range(cpu_count()):
            await n.run_in_actor(target, name=f'worker_{i}')

        print('This process tree will self-destruct in 1 sec...')
        await trio.sleep(1)

        # you could have done this yourself
        raise Exception('Self Destructed')


if __name__ == '__main__':
    try:
        trio.run(main)
    except Exception:
        print('Zombies Contained')

If you can create zombie child processes (without using a system signal), it is a bug.

"Native" multi-process debugging

Using the magic of pdb++ and our internal IPC, we've been able to create a native-feeling debugging experience for any (sub)process in your tractor tree.

from os import getpid

import tractor
import trio


async def breakpoint_forever():
    "Indefinitely re-enter debugger in child actor."
    while True:
        yield 'yo'
        await tractor.breakpoint()


async def name_error():
    "Raise a ``NameError``"
    getattr(doggypants)


async def main():
    """Test breakpoint in a streaming actor.
    """
    async with tractor.open_nursery(
        debug_mode=True,
        loglevel='error',
    ) as n:

        p0 = await n.start_actor('bp_forever', enable_modules=[__name__])
        p1 = await n.start_actor('name_error', enable_modules=[__name__])

        # retrieve results
        stream = await p0.run(breakpoint_forever)
        await p1.run(name_error)


if __name__ == '__main__':
    trio.run(main)

You can run this with:

>>> python examples/debugging/multi_daemon_subactors.py

And, yes, there's a built-in crash handling mode B)

We're hoping to add a respawn-from-repl system soon!

Worker poolz are easy peasy

It seems the initial ask from most new users is "how do I make a worker pool thing?".

tractor is built to handle any SC (structured concurrent) process tree you can imagine; the "worker pool" pattern is a trivial special case.

We have a full re-implementation of the std-lib's concurrent.futures.ProcessPoolExecutor example for reference.

You can run it like so (from this dir) to see the process tree in real time:

$TERM -e watch -n 0.1  "pstree -a $$" \
    & python examples/parallelism/concurrent_actors_primes.py \
    && kill $!

This uses no extra threads, fancy semaphores or futures; all we need is tractor's IPC!
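
If you just want the shape of the pattern without reading that whole example, here's a stripped-down, hypothetical sketch: fan a CPU-bound function out to one actor per core with run_in_actor() and then collect each return value. Forwarding the job as a keyword argument and awaiting Portal.result() for the final value are assumptions of this sketch, not code lifted from the example above:

from multiprocessing import cpu_count

import tractor
import trio


def fib(n: int) -> int:
    # a deliberately CPU-bound "work unit"
    return n if n < 2 else fib(n - 1) + fib(n - 2)


async def worker(term: int) -> int:
    return fib(term)


async def main():
    jobs = [28 + i for i in range(cpu_count())]

    async with tractor.open_nursery() as n:

        # one actor (process) per job; each runs worker() exactly once
        portals = [
            await n.run_in_actor(worker, term=job, name=f'fib_{job}')
            for job in jobs
        ]

        # ASSUMPTION: Portal.result() awaits the actor's final return value
        results = [await p.result() for p in portals]

    for job, res in zip(jobs, results):
        print(f'fib({job}) = {res}')


if __name__ == '__main__':
    trio.run(main)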

Install

No PyPI release yet!

pip install git+git://github.com/goodboy/tractor.git

Under the hood

tractor is an attempt to pair trionic structured concurrency with distributed Python. You can think of it as trio across processes, or simply as an opinionated replacement for the stdlib's multiprocessing, but built on async programming primitives from the ground up.

tractor's nurseries let you spawn trio "actors": new Python processes which each run a trio-scheduled runtime, i.e. a call to trio.run().

Don't be scared off by this description. tractor is just trio but with nurseries for process management and cancel-able streaming IPC. If you understand how to work with trio, tractor will give you the parallelism you've been missing.

"Actors" communicate by exchanging asynchronous messages and avoid sharing state. The intention of this model is to allow for highly distributed software that, through the adherence to structured concurrency, results in systems which fail in predictable and recoverable ways.

Feel like saying hi?

This project is very much coupled to the ongoing development of trio (i.e. tractor gets most of its ideas from that brilliant community). If you want to help, have suggestions, or just want to say hi, please feel free to reach us in our matrix channel. If matrix seems too hip, we're also mostly all in the trio gitter channel!