Decentralized compute layer
 
 

# skynet

decentralized compute platform

To launch a worker:

## native install

system dependencies:
- cuda 11.8
- llvm 10
- python 3.10+
- docker (for ipfs node)
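Before installing, it can help to confirm the toolchain is visible on the host. A quick sanity check, assuming the usual command names for these tools (`nvcc`, `llvm-config-10`, `python3`, `docker`; names may differ per distro):

```bash
# print versions of the required system dependencies
nvcc --version            # CUDA toolkit, expect release 11.8
llvm-config-10 --version  # LLVM 10
python3 --version         # expect 3.10 or newer
docker --version          # needed to run the ipfs node
```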

```bash
# create and edit config from template
cp skynet.toml.example skynet.toml

# install poetry package manager
curl -sSL https://install.python-poetry.org | python3 -

# install
poetry install

# enable environment
poetry shell

# test you can run this command
skynet --help

# launch ipfs node
skynet run ipfs

# to launch worker
skynet run dgpu
```
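The ipfs node and the worker are separate long-running processes, so you typically run them in two shells. A minimal launcher sketch (hypothetical `run_worker.sh`, not part of the repo) that backgrounds the ipfs node and runs the worker in the foreground:

```bash
#!/usr/bin/env bash
# hypothetical helper, not shipped with the repo
set -euo pipefail

# start the ipfs node in the background
poetry run skynet run ipfs &
IPFS_PID=$!

# stop the ipfs node when the worker exits
trap 'kill "$IPFS_PID" 2>/dev/null || true' EXIT

# run the worker in the foreground
poetry run skynet run dgpu
```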

## dockerized install

system dependencies:
- docker with gpu enabled
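Before pulling the runtime image it is worth confirming that docker can actually see the gpus (i.e. the NVIDIA container toolkit is working). One quick check, using an example CUDA base image tag:

```bash
# should print the same gpu table as running nvidia-smi on the host
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
```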

```bash
# create and edit config from template
cp skynet.toml.example skynet.toml

# pull runtime container
docker pull guilledk/skynet:runtime-cuda

# or build it (takes a bit of time)
./build_docker.sh

# launch simple ipfs node
./launch_ipfs.sh
```
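launch_ipfs.sh runs the ipfs node under docker (docker is the only listed dependency), so it can be sanity-checked with plain docker commands; the exact container name comes from the script, so substitute it below:

```bash
# confirm the ipfs container is running and watch its logs
docker ps
docker logs -f <ipfs-container-name>   # name as shown by docker ps
```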

```bash
# run worker with all gpus
docker run \
    -it \
    --rm \
    --gpus all \
    --network host \
    --name skynet-worker \
    --mount type=bind,source="$(pwd)",target=/root/target \
    guilledk/skynet:runtime-cuda \
    skynet run dgpu
```
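For an unattended worker you may prefer to drop `-it --rm` and let docker keep the container alive across crashes and reboots; a variation of the same command with a restart policy:

```bash
# run worker detached, restarting automatically unless stopped by hand
docker run \
    -d \
    --restart unless-stopped \
    --gpus all \
    --network host \
    --name skynet-worker \
    --mount type=bind,source="$(pwd)",target=/root/target \
    guilledk/skynet:runtime-cuda \
    skynet run dgpu

# follow its output
docker logs -f skynet-worker
```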

```bash
# run worker with specific gpu
docker run \
    -it \
    --rm \
    --gpus '"device=1"' \
    --network host \
    --name skynet-worker-1 \
    --mount type=bind,source="$(pwd)",target=/root/target \
    guilledk/skynet:runtime-cuda \
    skynet run dgpu
```
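On a multi-gpu host the same command scales to one worker per device by varying the `device=` selector and the container name. A sketch, assuming every worker can share the config mounted from the current directory:

```bash
# launch one detached worker per gpu: skynet-worker-0, skynet-worker-1, ...
NUM_GPUS=$(nvidia-smi --list-gpus | wc -l)

for i in $(seq 0 $((NUM_GPUS - 1))); do
    docker run \
        -d \
        --gpus "\"device=$i\"" \
        --network host \
        --name "skynet-worker-$i" \
        --mount type=bind,source="$(pwd)",target=/root/target \
        guilledk/skynet:runtime-cuda \
        skynet run dgpu
done
```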