Compare commits

...

60 Commits

Author SHA1 Message Date
Guillermo Rodriguez e88792c9d6
Fix docker paths 2024-11-02 14:52:22 -03:00
Guillermo Rodriguez 8a415b450f
Merge pull request #26 from guilledk/worker_upgrade_reloaded
Worker upgrade reloaded
2023-10-13 17:22:22 -03:00
Guillermo Rodriguez 8ddbf65d9f
Update pinner to new apis 2023-10-12 10:20:19 -03:00
Guillermo Rodriguez d9df50ad63
Minor readme tweaks 2023-10-09 08:52:04 -03:00
Guillermo Rodriguez 1222e11c16
Move docker related scripts to docker dir 2023-10-09 08:50:36 -03:00
Guillermo Rodriguez 3f780d6103
Add logo to readme 2023-10-09 08:44:33 -03:00
Guillermo Rodriguez 0a6d52ffaf
Fix missing quart dep 2023-10-09 07:50:39 -03:00
Guillermo Rodriguez f106c557f5
Fix readme 2023-10-09 07:43:57 -03:00
Guillermo Rodriguez 409df99a2c
Add frontend container & run instructions 2023-10-09 07:39:23 -03:00
Guillermo Rodriguez 20ee6c0317
Add configurable explorer and ipfs links 2023-10-08 20:12:07 -03:00
Guillermo Rodriguez edd6ccc3e1
Add worker benchmark api 2023-10-08 19:37:25 -03:00
Guillermo Rodriguez 3d2069d151
Simplify pipeline_for function and add the infra needed for different io/types than png 2023-10-08 18:00:18 -03:00
Guillermo Rodriguez ee1fdcc557
Go back to using gather on tg ipfs result getting 2023-10-08 16:36:29 -03:00
Guillermo Rodriguez 359e491d1f
Fix enqueue cli for img2img also fix worker img2img input get bug 2023-10-08 12:19:46 -03:00
Guillermo Rodriguez 50ae61c7b2
Add default value for autoconf 2023-10-08 11:06:06 -03:00
Guillermo Rodriguez cadd723191
Cancel other image task when one already finished on tg frontend ipfs image gather 2023-10-08 10:27:25 -03:00
Guillermo Rodriguez 16df97d731
Improve tg frontend ipfs results gathering parallelism 2023-10-08 10:13:55 -03:00
Guillermo Rodriguez d749dc4f57
Improve worker ipfs input data parallelism 2023-10-08 09:54:31 -03:00
Guillermo Rodriguez aa1d52dba0
Add autoconfiguration feature for telegram frontend 2023-10-08 09:26:43 -03:00
Guillermo Rodriguez d3b5d56187
Add new data gathering mechanic on worker and mp tractor backend 2023-10-07 21:28:52 -03:00
Guillermo Rodriguez e802689523
Add new/legacy ipfs image mechanic on input image gathering 2023-10-07 14:55:30 -03:00
Guillermo Rodriguez 1780f1a360
Update example --env no longer needed on docker 2023-10-07 12:42:05 -03:00
Guillermo Rodriguez cc4a4b5189
Fix cli entrypoints to use new config, improve competitor cancel logic and add default docker image to py311 image 2023-10-07 12:32:00 -03:00
Guillermo Rodriguez 5437af4d05
Add new config to .gitignore 2023-10-07 11:12:45 -03:00
Guillermo Rodriguez 5b6e18e1ef
Fix import bug and only enable unet compilation on high end cards 2023-10-07 11:12:15 -03:00
Guillermo Rodriguez 7cd539a944
Make new non_compete optional, also ipfs_gateway 2023-10-07 11:11:47 -03:00
Guillermo Rodriguez b7b267a71b
Don't reference python version on docker instructions 2023-10-07 11:04:57 -03:00
Guillermo Rodriguez 342dd9ac1c
Add whitelist & blacklist 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez ad1a9ef9ea
Add anyio error to failable 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez b372f50130
Create separate docker images for python 3.10 and 3.11 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez 9ef2442123
Switch config to toml 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez 93203ab533
Only check if should cancel inference every two steps, also pipe to cuda if cpu offloading is off 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez 9fa5a01c34
Fix image getting logic 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez a9b05b7ee7
Add try to make gateway conf optional on telegram client 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez de8c7595db
Fix some wrong config load keys on telegram entrypoint 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez 10044c6d12
Add new ipfs links to telegram bot 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez c6e58c36d8
Make non compete list come from a file named .non-compete 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez 01c78b5d20
Make gpu work cancellable using trio threading apis, also make docker always reinstall package for easier development 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez 47d9f59dbe
Start setting HF env vars from config 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez d7ccbe7023
Add --name to docker worker launch command 2023-10-07 11:01:40 -03:00
Guillermo Rodriguez 08854562ef
woops make xformers part of optional cuda group 2023-10-07 11:01:40 -03:00
Guillermo Rodriguez 537670d1f3
Fix mini bug on docker entry point 2023-10-07 11:01:40 -03:00
Guillermo Rodriguez 24fae4c451
Bump version number 2023-10-07 11:01:40 -03:00
Guillermo Rodriguez 3622c8ea11
Add venv to dockerignore
Improve readme
Improve dockerization as ipfs cli exec runs not needed anymore
Fix pyproject toml for gpu workers
Add more sections on example config
Drop and simplify many cli commands, try to use config.ini for everything now
Use more dynamic imports on cli to speed up startup
Improve model pipelines to allow low mem cards to run big models
Add upscaler download to `skynet download` cmd
2023-10-07 11:01:40 -03:00
Guillermo Rodriguez 454545d096
Switch to using poetry package manager 2023-10-07 11:01:40 -03:00
Guillermo Rodriguez 82458bb4c8
Merge pull request #23 from guilledk/full_http_ipfs
Async IPFS apis, drop docker on worker & other nice sprites ☠
2023-10-03 13:00:45 -03:00
Guillermo Rodriguez 504d6cd730
Add new ipfs options to cli frontend
Add async ipfs for discord
2023-09-28 21:43:30 -03:00
Guillermo Rodriguez fe4574c5dc
Remove old docker stuff and upgrade telegram frontend to use ipfs async apis 2023-09-28 21:23:04 -03:00
Guillermo Rodriguez 1b13cf25cc
Add test for new ipfs async apis, fix cli entrypoints endpoint loading to new format 2023-09-24 15:23:25 -03:00
Guillermo Rodriguez 58f208afa2
Update config example 2023-09-24 13:18:05 -03:00
Guillermo Rodriguez 01cbc736a0
Create fully async ipfs client, and stop using docker on worker 2023-09-24 13:12:49 -03:00
Zoltan 7f50952088
Merge pull request #19 from guilledk/general_frontend_fixes
General frontend fixes
2023-08-23 11:57:18 -04:00
Konstantine Tsafatinos 75268decc4 fix default model and increments for discord ui 2023-08-23 11:53:41 -04:00
Guillermo Rodriguez 35f8276e4e
Fixed default cmdline testnet urls 2023-07-28 12:05:02 -03:00
Guillermo Rodriguez ffcf9dc905
Merge pull request #5 from guilledk/decentralize
First fully decentralized `skynet` prototype
2023-07-28 11:16:51 -03:00
Guillermo Rodriguez c201b78bf0
Fix help text, increase ipfs get image timeout, fix work_request increment generated bug pointed out by zoltan 2023-07-28 11:13:59 -03:00
Guillermo Rodriguez 713884e192
Provide both xl models 2023-07-27 13:25:00 -03:00
Guillermo Rodriguez 440bb015cd
Fix stablexl pipeline 2023-07-27 12:19:09 -03:00
Guillermo Rodriguez 4082adf184
Update stablexl 0.9 to 1.0 2023-07-26 16:44:46 -03:00
Guillermo Rodriguez 89c413a612
Bump version number, also telegram max image limit and disable in private for now 2023-07-22 16:53:00 -03:00
61 changed files with 5307 additions and 5438 deletions

.dockerignore

@@ -7,3 +7,4 @@ outputs
 *.egg-info
 **/*.key
 **/*.cert
+.venv

.gitignore

@@ -1,4 +1,4 @@
-skynet.ini
+skynet.toml
 .python-version
 hf_home
 outputs
@@ -9,5 +9,5 @@ secrets
 **/*.cert
 docs
 ipfs-docker-data
-ipfs-docker-staging
+ipfs-staging
 weights

README.md

@@ -1,30 +1,104 @@
 # skynet
-### decentralized compute platform
+<div align="center">
+<img src="https://explorer.skygpu.net/v2/explore/assets/logo.png" width=512 height=512>
+</div>
+
+## decentralized compute platform
+
+### native install
+
+system dependencies:
+- `cuda` 11.8
+- `llvm` 10
+- `python` 3.10+
+- `docker` (for ipfs node)
+
+To launch a worker:
 ```
 # create and edit config from template
-cp skynet.ini.example skynet.ini
-# create python virtual envoirment 3.10+
-python3 -m venv venv
-# enable envoirment
-source venv/bin/activate
-# install requirements
-pip install -r requirements.txt
-pip install -r requirements.cuda.0.txt
-pip install -r requirements.cuda.1.txt
-pip install -r requirements.cuda.2.txt
-# install skynet
-pip install -e .
+cp skynet.toml.example skynet.toml
+# install poetry package manager
+curl -sSL https://install.python-poetry.org | python3 -
+# install
+poetry install
+# enable environment
+poetry shell
 # test you can run this command
 skynet --help
+# launch ipfs node
+skynet run ipfs
 # to launch worker
 skynet run dgpu
 ```
+
+### dockerized install
+
+## frontend
+
+system dependencies:
+- `docker`
+
+```
+# create and edit config from template
+cp skynet.toml.example skynet.toml
+# pull runtime container
+docker pull guilledk/skynet:runtime-frontend
+# run telegram bot
+docker run \
+    -it \
+    --rm \
+    --network host \
+    --name skynet-telegram \
+    --mount type=bind,source="$(pwd)",target=/root/target \
+    guilledk/skynet:runtime-frontend \
+    skynet run telegram --db-pass PASSWORD --db-user USER --db-host HOST
+```
+
+## worker
+
+system dependencies:
+- `docker` with gpu enabled
+
+```
+# create and edit config from template
+cp skynet.toml.example skynet.toml
+# pull runtime container
+docker pull guilledk/skynet:runtime-cuda
+# or build it (takes a bit of time)
+./build_docker.sh
+# launch simple ipfs node
+./launch_ipfs.sh
+# run worker with all gpus
+docker run \
+    -it \
+    --rm \
+    --gpus all \
+    --network host \
+    --name skynet-worker \
+    --mount type=bind,source="$(pwd)",target=/root/target \
+    guilledk/skynet:runtime-cuda \
+    skynet run dgpu
+# run worker with specific gpu
+docker run \
+    -it \
+    --rm \
+    --gpus '"device=1"' \
+    --network host \
+    --name skynet-worker-1 \
+    --mount type=bind,source="$(pwd)",target=/root/target \
+    guilledk/skynet:runtime-cuda \
+    skynet run dgpu
+```

build_docker.sh (deleted)

@@ -1,7 +0,0 @@
docker build \
-t skynet:runtime-cuda \
-f docker/Dockerfile.runtime+cuda .
docker build \
-t skynet:runtime \
-f docker/Dockerfile.runtime .

docker/Dockerfile.runtime

@@ -1,16 +1,25 @@
-from python:3.10.0
+from python:3.11
 env DEBIAN_FRONTEND=noninteractive
 
+run apt-get update && apt-get install -y \
+    git
+
+run curl -sSL https://install.python-poetry.org | python3 -
+env PATH "/root/.local/bin:$PATH"
+
+copy . /skynet
 workdir /skynet
-copy requirements.txt requirements.txt
-copy pytest.ini ./
-copy setup.py ./
-copy skynet ./skynet
-run pip install \
-    -e . \
-    -r requirements.txt
-copy tests ./
+env POETRY_VIRTUALENVS_PATH /skynet/.venv
+run poetry install
+
+workdir /root/target
+
+copy docker/entrypoint.sh /entrypoint.sh
+entrypoint ["/entrypoint.sh"]
+cmd ["skynet", "--help"]

docker/Dockerfile.runtime+cuda (deleted)

@@ -1,29 +0,0 @@
from nvidia/cuda:11.7.0-devel-ubuntu20.04
from python:3.11
env DEBIAN_FRONTEND=noninteractive
run apt-get update && \
apt-get install -y ffmpeg libsm6 libxext6
workdir /skynet
copy requirements.cuda* ./
run pip install -U pip ninja
run pip install -v -r requirements.cuda.0.txt
run pip install -v -r requirements.cuda.1.txt
run pip install -v -r requirements.cuda.2.txt
copy requirements.txt requirements.txt
copy pytest.ini pytest.ini
copy setup.py setup.py
copy skynet skynet
run pip install -e . -r requirements.txt
env PYTORCH_CUDA_ALLOC_CONF max_split_size_mb:128
env NVIDIA_VISIBLE_DEVICES=all
env HF_HOME /hf_home
copy tests tests

docker/Dockerfile.runtime+cuda-py310

@@ -0,0 +1,46 @@
from nvidia/cuda:11.8.0-devel-ubuntu20.04
from python:3.10
env DEBIAN_FRONTEND=noninteractive
run apt-get update && apt-get install -y \
git \
clang \
cmake \
ffmpeg \
libsm6 \
libxext6 \
ninja-build
env CC /usr/bin/clang
env CXX /usr/bin/clang++
# install llvm10 as required by llvm-lite
run git clone https://github.com/llvm/llvm-project.git -b llvmorg-10.0.1
workdir /llvm-project
# this adds a commit from 12.0.0 that fixes build on newer compilers
run git cherry-pick -n b498303066a63a203d24f739b2d2e0e56dca70d1
run cmake -S llvm -B build -G Ninja -DCMAKE_BUILD_TYPE=Release
run ninja -C build install # -j8
run curl -sSL https://install.python-poetry.org | python3 -
env PATH "/root/.local/bin:$PATH"
copy . /skynet
workdir /skynet
env POETRY_VIRTUALENVS_PATH /skynet/.venv
run poetry install --with=cuda -v
workdir /root/target
env PYTORCH_CUDA_ALLOC_CONF max_split_size_mb:128
env NVIDIA_VISIBLE_DEVICES=all
copy docker/entrypoint.sh /entrypoint.sh
entrypoint ["/entrypoint.sh"]
cmd ["skynet", "--help"]

docker/Dockerfile.runtime+cuda-py311

@@ -0,0 +1,46 @@
from nvidia/cuda:11.8.0-devel-ubuntu20.04
from python:3.11
env DEBIAN_FRONTEND=noninteractive
run apt-get update && apt-get install -y \
git \
clang \
cmake \
ffmpeg \
libsm6 \
libxext6 \
ninja-build
env CC /usr/bin/clang
env CXX /usr/bin/clang++
# install llvm10 as required by llvm-lite
run git clone https://github.com/llvm/llvm-project.git -b llvmorg-10.0.1
workdir /llvm-project
# this adds a commit from 12.0.0 that fixes build on newer compilers
run git cherry-pick -n b498303066a63a203d24f739b2d2e0e56dca70d1
run cmake -S llvm -B build -G Ninja -DCMAKE_BUILD_TYPE=Release
run ninja -C build install # -j8
run curl -sSL https://install.python-poetry.org | python3 -
env PATH "/root/.local/bin:$PATH"
copy . /skynet
workdir /skynet
env POETRY_VIRTUALENVS_PATH /skynet/.venv
run poetry install --with=cuda -v
workdir /root/target
env PYTORCH_CUDA_ALLOC_CONF max_split_size_mb:128
env NVIDIA_VISIBLE_DEVICES=all
copy docker/entrypoint.sh /entrypoint.sh
entrypoint ["/entrypoint.sh"]
cmd ["skynet", "--help"]

docker/Dockerfile.runtime+frontend

@@ -0,0 +1,25 @@
from python:3.11
env DEBIAN_FRONTEND=noninteractive
run apt-get update && apt-get install -y \
git
run curl -sSL https://install.python-poetry.org | python3 -
env PATH "/root/.local/bin:$PATH"
copy . /skynet
workdir /skynet
env POETRY_VIRTUALENVS_PATH /skynet/.venv
run poetry install --with=frontend -v
workdir /root/target
copy docker/entrypoint.sh /entrypoint.sh
entrypoint ["/entrypoint.sh"]
cmd ["skynet", "--help"]

docker/build_docker.sh

@@ -0,0 +1,20 @@
docker build \
-t guilledk/skynet:runtime \
-f docker/Dockerfile.runtime .
docker build \
-t guilledk/skynet:runtime-frontend \
-f docker/Dockerfile.runtime+frontend .
docker build \
-t guilledk/skynet:runtime-cuda-py311 \
-f docker/Dockerfile.runtime+cuda-py311 .
docker build \
-t guilledk/skynet:runtime-cuda \
-f docker/Dockerfile.runtime+cuda-py311 .
docker build \
-t guilledk/skynet:runtime-cuda-py310 \
-f docker/Dockerfile.runtime+cuda-py310 .

docker/entrypoint.sh

@@ -0,0 +1,8 @@
#!/bin/sh
export VIRTUAL_ENV='/skynet/.venv'
poetry env use $VIRTUAL_ENV/bin/python
poetry install
exec poetry run "$@"


@@ -1,22 +0,0 @@
FROM ubuntu:22.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y wget
# install eosio tools
RUN wget https://github.com/AntelopeIO/leap/releases/download/v4.0.1/leap_4.0.1-ubuntu22.04_amd64.deb
RUN apt-get install -y ./leap_4.0.1-ubuntu22.04_amd64.deb
RUN mkdir -p /root/nodeos
WORKDIR /root/nodeos
COPY config.ini config.ini
COPY contracts contracts
COPY genesis genesis
EXPOSE 42000
EXPOSE 29876
EXPOSE 39999
CMD sleep 9999999999


@@ -1,52 +0,0 @@
agent-name = Telos Skynet Testnet
wasm-runtime = eos-vm-jit
eos-vm-oc-compile-threads = 4
eos-vm-oc-enable = true
chain-state-db-size-mb = 65536
enable-account-queries = true
http-server-address = 0.0.0.0:42000
access-control-allow-origin = *
contracts-console = true
http-validate-host = false
p2p-listen-endpoint = 0.0.0.0:29876
p2p-server-address = 0.0.0.0:29876
verbose-http-errors = true
state-history-endpoint = 0.0.0.0:39999
trace-history = true
chain-state-history = true
trace-history-debug-mode = true
state-history-dir = state-history
sync-fetch-span = 1600
max-clients = 250
signature-provider = EOS5fLreY5Zq5owBhmNJTgQaLqQ4ufzXSTpStQakEyfxNFuUEgNs1=KEY:5JnvSc6pewpHHuUHwvbJopsew6AKwiGnexwDRc2Pj2tbdw6iML9
disable-subjective-billing = true
max-transaction-time = 500
read-only-read-window-time-us = 600000
abi-serializer-max-time-ms = 2000000
p2p-max-nodes-per-host = 1
connection-cleanup-period = 30
allowed-connection = any
http-max-response-time-ms = 100000
max-body-size = 10000000
enable-stale-production = true
plugin = eosio::http_plugin
plugin = eosio::chain_plugin
plugin = eosio::chain_api_plugin
plugin = eosio::net_api_plugin
plugin = eosio::net_plugin
plugin = eosio::producer_plugin
plugin = eosio::producer_api_plugin
plugin = eosio::state_history_plugin


@@ -1,360 +0,0 @@
{
"____comment": "This file was generated with eosio-abigen. DO NOT EDIT Thu Apr 14 07:49:43 2022",
"version": "eosio::abi/1.1",
"structs": [
{
"name": "action",
"base": "",
"fields": [
{
"name": "account",
"type": "name"
},
{
"name": "name",
"type": "name"
},
{
"name": "authorization",
"type": "permission_level[]"
},
{
"name": "data",
"type": "bytes"
}
]
},
{
"name": "approval",
"base": "",
"fields": [
{
"name": "level",
"type": "permission_level"
},
{
"name": "time",
"type": "time_point"
}
]
},
{
"name": "approvals_info",
"base": "",
"fields": [
{
"name": "version",
"type": "uint8"
},
{
"name": "proposal_name",
"type": "name"
},
{
"name": "requested_approvals",
"type": "approval[]"
},
{
"name": "provided_approvals",
"type": "approval[]"
}
]
},
{
"name": "approve",
"base": "",
"fields": [
{
"name": "proposer",
"type": "name"
},
{
"name": "proposal_name",
"type": "name"
},
{
"name": "level",
"type": "permission_level"
},
{
"name": "proposal_hash",
"type": "checksum256$"
}
]
},
{
"name": "cancel",
"base": "",
"fields": [
{
"name": "proposer",
"type": "name"
},
{
"name": "proposal_name",
"type": "name"
},
{
"name": "canceler",
"type": "name"
}
]
},
{
"name": "exec",
"base": "",
"fields": [
{
"name": "proposer",
"type": "name"
},
{
"name": "proposal_name",
"type": "name"
},
{
"name": "executer",
"type": "name"
}
]
},
{
"name": "extension",
"base": "",
"fields": [
{
"name": "type",
"type": "uint16"
},
{
"name": "data",
"type": "bytes"
}
]
},
{
"name": "invalidate",
"base": "",
"fields": [
{
"name": "account",
"type": "name"
}
]
},
{
"name": "invalidation",
"base": "",
"fields": [
{
"name": "account",
"type": "name"
},
{
"name": "last_invalidation_time",
"type": "time_point"
}
]
},
{
"name": "old_approvals_info",
"base": "",
"fields": [
{
"name": "proposal_name",
"type": "name"
},
{
"name": "requested_approvals",
"type": "permission_level[]"
},
{
"name": "provided_approvals",
"type": "permission_level[]"
}
]
},
{
"name": "permission_level",
"base": "",
"fields": [
{
"name": "actor",
"type": "name"
},
{
"name": "permission",
"type": "name"
}
]
},
{
"name": "proposal",
"base": "",
"fields": [
{
"name": "proposal_name",
"type": "name"
},
{
"name": "packed_transaction",
"type": "bytes"
}
]
},
{
"name": "propose",
"base": "",
"fields": [
{
"name": "proposer",
"type": "name"
},
{
"name": "proposal_name",
"type": "name"
},
{
"name": "requested",
"type": "permission_level[]"
},
{
"name": "trx",
"type": "transaction"
}
]
},
{
"name": "transaction",
"base": "transaction_header",
"fields": [
{
"name": "context_free_actions",
"type": "action[]"
},
{
"name": "actions",
"type": "action[]"
},
{
"name": "transaction_extensions",
"type": "extension[]"
}
]
},
{
"name": "transaction_header",
"base": "",
"fields": [
{
"name": "expiration",
"type": "time_point_sec"
},
{
"name": "ref_block_num",
"type": "uint16"
},
{
"name": "ref_block_prefix",
"type": "uint32"
},
{
"name": "max_net_usage_words",
"type": "varuint32"
},
{
"name": "max_cpu_usage_ms",
"type": "uint8"
},
{
"name": "delay_sec",
"type": "varuint32"
}
]
},
{
"name": "unapprove",
"base": "",
"fields": [
{
"name": "proposer",
"type": "name"
},
{
"name": "proposal_name",
"type": "name"
},
{
"name": "level",
"type": "permission_level"
}
]
}
],
"types": [],
"actions": [
{
"name": "approve",
"type": "approve",
"ricardian_contract": ""
},
{
"name": "cancel",
"type": "cancel",
"ricardian_contract": ""
},
{
"name": "exec",
"type": "exec",
"ricardian_contract": ""
},
{
"name": "invalidate",
"type": "invalidate",
"ricardian_contract": ""
},
{
"name": "propose",
"type": "propose",
"ricardian_contract": ""
},
{
"name": "unapprove",
"type": "unapprove",
"ricardian_contract": ""
}
],
"tables": [
{
"name": "approvals",
"type": "old_approvals_info",
"index_type": "i64",
"key_names": [],
"key_types": []
},
{
"name": "approvals2",
"type": "approvals_info",
"index_type": "i64",
"key_names": [],
"key_types": []
},
{
"name": "invals",
"type": "invalidation",
"index_type": "i64",
"key_names": [],
"key_types": []
},
{
"name": "proposal",
"type": "proposal",
"index_type": "i64",
"key_names": [],
"key_types": []
}
],
"ricardian_clauses": [],
"variants": [],
"abi_extensions": []
}

File diff suppressed because one or more lines are too long


@@ -1,185 +0,0 @@
{
"____comment": "This file was generated with eosio-abigen. DO NOT EDIT ",
"version": "eosio::abi/1.1",
"types": [],
"structs": [
{
"name": "account",
"base": "",
"fields": [
{
"name": "balance",
"type": "asset"
}
]
},
{
"name": "close",
"base": "",
"fields": [
{
"name": "owner",
"type": "name"
},
{
"name": "symbol",
"type": "symbol"
}
]
},
{
"name": "create",
"base": "",
"fields": [
{
"name": "issuer",
"type": "name"
},
{
"name": "maximum_supply",
"type": "asset"
}
]
},
{
"name": "currency_stats",
"base": "",
"fields": [
{
"name": "supply",
"type": "asset"
},
{
"name": "max_supply",
"type": "asset"
},
{
"name": "issuer",
"type": "name"
}
]
},
{
"name": "issue",
"base": "",
"fields": [
{
"name": "to",
"type": "name"
},
{
"name": "quantity",
"type": "asset"
},
{
"name": "memo",
"type": "string"
}
]
},
{
"name": "open",
"base": "",
"fields": [
{
"name": "owner",
"type": "name"
},
{
"name": "symbol",
"type": "symbol"
},
{
"name": "ram_payer",
"type": "name"
}
]
},
{
"name": "retire",
"base": "",
"fields": [
{
"name": "quantity",
"type": "asset"
},
{
"name": "memo",
"type": "string"
}
]
},
{
"name": "transfer",
"base": "",
"fields": [
{
"name": "from",
"type": "name"
},
{
"name": "to",
"type": "name"
},
{
"name": "quantity",
"type": "asset"
},
{
"name": "memo",
"type": "string"
}
]
}
],
"actions": [
{
"name": "close",
"type": "close",
"ricardian_contract": "---\nspec_version: \"0.2.0\"\ntitle: Close Token Balance\nsummary: 'Close {{nowrap owner}}s zero quantity balance'\nicon: http://127.0.0.1/ricardian_assets/eosio.contracts/icons/token.png#207ff68b0406eaa56618b08bda81d6a0954543f36adc328ab3065f31a5c5d654\n---\n\n{{owner}} agrees to close their zero quantity balance for the {{symbol_to_symbol_code symbol}} token.\n\nRAM will be refunded to the RAM payer of the {{symbol_to_symbol_code symbol}} token balance for {{owner}}."
},
{
"name": "create",
"type": "create",
"ricardian_contract": "---\nspec_version: \"0.2.0\"\ntitle: Create New Token\nsummary: 'Create a new token'\nicon: http://127.0.0.1/ricardian_assets/eosio.contracts/icons/token.png#207ff68b0406eaa56618b08bda81d6a0954543f36adc328ab3065f31a5c5d654\n---\n\n{{$action.account}} agrees to create a new token with symbol {{asset_to_symbol_code maximum_supply}} to be managed by {{issuer}}.\n\nThis action will not result any any tokens being issued into circulation.\n\n{{issuer}} will be allowed to issue tokens into circulation, up to a maximum supply of {{maximum_supply}}.\n\nRAM will deducted from {{$action.account}}s resources to create the necessary records."
},
{
"name": "issue",
"type": "issue",
"ricardian_contract": "---\nspec_version: \"0.2.0\"\ntitle: Issue Tokens into Circulation\nsummary: 'Issue {{nowrap quantity}} into circulation and transfer into {{nowrap to}}s account'\nicon: http://127.0.0.1/ricardian_assets/eosio.contracts/icons/token.png#207ff68b0406eaa56618b08bda81d6a0954543f36adc328ab3065f31a5c5d654\n---\n\nThe token manager agrees to issue {{quantity}} into circulation, and transfer it into {{to}}s account.\n\n{{#if memo}}There is a memo attached to the transfer stating:\n{{memo}}\n{{/if}}\n\nIf {{to}} does not have a balance for {{asset_to_symbol_code quantity}}, or the token manager does not have a balance for {{asset_to_symbol_code quantity}}, the token manager will be designated as the RAM payer of the {{asset_to_symbol_code quantity}} token balance for {{to}}. As a result, RAM will be deducted from the token managers resources to create the necessary records.\n\nThis action does not allow the total quantity to exceed the max allowed supply of the token."
},
{
"name": "open",
"type": "open",
"ricardian_contract": "---\nspec_version: \"0.2.0\"\ntitle: Open Token Balance\nsummary: 'Open a zero quantity balance for {{nowrap owner}}'\nicon: http://127.0.0.1/ricardian_assets/eosio.contracts/icons/token.png#207ff68b0406eaa56618b08bda81d6a0954543f36adc328ab3065f31a5c5d654\n---\n\n{{ram_payer}} agrees to establish a zero quantity balance for {{owner}} for the {{symbol_to_symbol_code symbol}} token.\n\nIf {{owner}} does not have a balance for {{symbol_to_symbol_code symbol}}, {{ram_payer}} will be designated as the RAM payer of the {{symbol_to_symbol_code symbol}} token balance for {{owner}}. As a result, RAM will be deducted from {{ram_payer}}s resources to create the necessary records."
},
{
"name": "retire",
"type": "retire",
"ricardian_contract": "---\nspec_version: \"0.2.0\"\ntitle: Remove Tokens from Circulation\nsummary: 'Remove {{nowrap quantity}} from circulation'\nicon: http://127.0.0.1/ricardian_assets/eosio.contracts/icons/token.png#207ff68b0406eaa56618b08bda81d6a0954543f36adc328ab3065f31a5c5d654\n---\n\nThe token manager agrees to remove {{quantity}} from circulation, taken from their own account.\n\n{{#if memo}} There is a memo attached to the action stating:\n{{memo}}\n{{/if}}"
},
{
"name": "transfer",
"type": "transfer",
"ricardian_contract": "---\nspec_version: \"0.2.0\"\ntitle: Transfer Tokens\nsummary: 'Send {{nowrap quantity}} from {{nowrap from}} to {{nowrap to}}'\nicon: http://127.0.0.1/ricardian_assets/eosio.contracts/icons/transfer.png#5dfad0df72772ee1ccc155e670c1d124f5c5122f1d5027565df38b418042d1dd\n---\n\n{{from}} agrees to send {{quantity}} to {{to}}.\n\n{{#if memo}}There is a memo attached to the transfer stating:\n{{memo}}\n{{/if}}\n\nIf {{from}} is not already the RAM payer of their {{asset_to_symbol_code quantity}} token balance, {{from}} will be designated as such. As a result, RAM will be deducted from {{from}}s resources to refund the original RAM payer.\n\nIf {{to}} does not have a balance for {{asset_to_symbol_code quantity}}, {{from}} will be designated as the RAM payer of the {{asset_to_symbol_code quantity}} token balance for {{to}}. As a result, RAM will be deducted from {{from}}s resources to create the necessary records."
}
],
"tables": [
{
"name": "accounts",
"type": "account",
"index_type": "i64",
"key_names": [],
"key_types": []
},
{
"name": "stat",
"type": "currency_stats",
"index_type": "i64",
"key_names": [],
"key_types": []
}
],
"ricardian_clauses": [],
"variants": []
}


@@ -1,130 +0,0 @@
{
"____comment": "This file was generated with eosio-abigen. DO NOT EDIT Thu Apr 14 07:49:40 2022",
"version": "eosio::abi/1.1",
"structs": [
{
"name": "action",
"base": "",
"fields": [
{
"name": "account",
"type": "name"
},
{
"name": "name",
"type": "name"
},
{
"name": "authorization",
"type": "permission_level[]"
},
{
"name": "data",
"type": "bytes"
}
]
},
{
"name": "exec",
"base": "",
"fields": [
{
"name": "executer",
"type": "name"
},
{
"name": "trx",
"type": "transaction"
}
]
},
{
"name": "extension",
"base": "",
"fields": [
{
"name": "type",
"type": "uint16"
},
{
"name": "data",
"type": "bytes"
}
]
},
{
"name": "permission_level",
"base": "",
"fields": [
{
"name": "actor",
"type": "name"
},
{
"name": "permission",
"type": "name"
}
]
},
{
"name": "transaction",
"base": "transaction_header",
"fields": [
{
"name": "context_free_actions",
"type": "action[]"
},
{
"name": "actions",
"type": "action[]"
},
{
"name": "transaction_extensions",
"type": "extension[]"
}
]
},
{
"name": "transaction_header",
"base": "",
"fields": [
{
"name": "expiration",
"type": "time_point_sec"
},
{
"name": "ref_block_num",
"type": "uint16"
},
{
"name": "ref_block_prefix",
"type": "uint32"
},
{
"name": "max_net_usage_words",
"type": "varuint32"
},
{
"name": "max_cpu_usage_ms",
"type": "uint8"
},
{
"name": "delay_sec",
"type": "varuint32"
}
]
}
],
"types": [],
"actions": [
{
"name": "exec",
"type": "exec",
"ricardian_contract": ""
}
],
"tables": [],
"ricardian_clauses": [],
"variants": [],
"abi_extensions": []
}

File diff suppressed because it is too large


@@ -1,25 +0,0 @@
{
"initial_timestamp": "2023-05-22T00:00:00.000",
"initial_key": "EOS5fLreY5Zq5owBhmNJTgQaLqQ4ufzXSTpStQakEyfxNFuUEgNs1",
"initial_configuration": {
"max_block_net_usage": 1048576,
"target_block_net_usage_pct": 1000,
"max_transaction_net_usage": 1048575,
"base_per_transaction_net_usage": 12,
"net_usage_leeway": 500,
"context_free_discount_net_usage_num": 20,
"context_free_discount_net_usage_den": 100,
"max_block_cpu_usage": 200000,
"target_block_cpu_usage_pct": 1000,
"max_transaction_cpu_usage": 150000,
"min_transaction_cpu_usage": 100,
"max_transaction_lifetime": 3600,
"deferred_trx_expiration_window": 600,
"max_transaction_delay": 3888000,
"max_inline_action_size": 4096,
"max_inline_action_depth": 4,
"max_authority_depth": 6
}
}


@@ -0,0 +1,5 @@
docker push guilledk/skynet:runtime
docker push guilledk/skynet:runtime-frontend
docker push guilledk/skynet:runtime-cuda
docker push guilledk/skynet:runtime-cuda-py311
docker push guilledk/skynet:runtime-cuda-py310

launch_ipfs.sh 100755

@@ -0,0 +1,36 @@
#!/bin/bash
name='skynet-ipfs'
peers=("$@")
data_dir="$(pwd)/ipfs-docker-data"
data_target='/data/ipfs'
# Create data directory if it doesn't exist
mkdir -p "$data_dir"
# Run the container
docker run -d \
--name "$name" \
-p 8080:8080/tcp \
-p 4001:4001/tcp \
-p 127.0.0.1:5001:5001/tcp \
--mount type=bind,source="$data_dir",target="$data_target" \
--rm \
ipfs/go-ipfs:latest
# Change ownership
docker exec "$name" chown 1000:1000 -R "$data_target"
# Wait for Daemon to be ready
while read -r log; do
echo "$log"
if [[ "$log" == *"Daemon is ready"* ]]; then
break
fi
done < <(docker logs -f "$name")
# Connect to peers
for peer in "${peers[@]}"; do
docker exec "$name" ipfs swarm connect "$peer" || echo "Error connecting to peer: $peer"
done

poetry.lock generated 100644

File diff suppressed because it is too large

poetry.toml 100644

@@ -0,0 +1,2 @@
[virtualenvs]
in-project = true

pyproject.toml 100644

@@ -0,0 +1,67 @@
[tool.poetry]
name = 'skynet'
version = '0.1a12'
description = 'Decentralized compute platform'
authors = ['Guillermo Rodriguez <guillermo@telos.net>']
license = 'AGPL'
readme = 'README.md'
[tool.poetry.dependencies]
python = '>=3.10,<3.12'
pytz = '^2023.3.post1'
trio = '^0.22.2'
asks = '^3.0.0'
Pillow = '^10.0.1'
docker = '^6.1.3'
py-leap = {git = 'https://github.com/guilledk/py-leap.git', rev = 'v0.1a14'}
toml = '^0.10.2'
[tool.poetry.group.frontend]
optional = true
[tool.poetry.group.frontend.dependencies]
triopg = {version = '^0.6.0'}
aiohttp = {version = '^3.8.5'}
psycopg2-binary = {version = '^2.9.7'}
pyTelegramBotAPI = {version = '^4.14.0'}
'discord.py' = {version = '^2.3.2'}
[tool.poetry.group.dev]
optional = true
[tool.poetry.group.dev.dependencies]
pdbpp = {version = '^0.10.3'}
pytest = {version = '^7.4.2'}
[tool.poetry.group.cuda]
optional = true
[tool.poetry.group.cuda.dependencies]
torch = {version = '2.0.1+cu118', source = 'torch'}
scipy = {version = '^1.11.2'}
numba = {version = '0.57.0'}
quart = {version = '^0.19.3'}
triton = {version = '2.0.0', source = 'torch'}
basicsr = {version = '^1.4.2'}
xformers = {version = '^0.0.22'}
hypercorn = {version = '^0.14.4'}
diffusers = {version = '^0.21.2'}
realesrgan = {version = '^0.3.0'}
quart-trio = {version = '^0.11.0'}
torchvision = {version = '0.15.2+cu118', source = 'torch'}
accelerate = {version = '^0.23.0'}
transformers = {version = '^4.33.2'}
huggingface-hub = {version = '^0.17.3'}
invisible-watermark = {version = '^0.2.0'}
[[tool.poetry.source]]
name = 'torch'
url = 'https://download.pytorch.org/whl/cu118'
priority = 'explicit'
[build-system]
requires = ['poetry-core', 'cython']
build-backend = 'poetry.core.masonry.api'
[tool.poetry.scripts]
skynet = 'skynet.cli:skynet'
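For context, the `[tool.poetry.scripts]` entry above replaces the setuptools `console_scripts` from the deleted setup.py shown later in this diff; installing the package puts a `skynet` executable on PATH that invokes the click group, roughly equivalent to this sketch:

```
# sketch of what the `skynet` console script resolves to
from skynet.cli import skynet

if __name__ == '__main__':
    skynet()  # top-level click group defined in skynet/cli.py
```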

pytest.ini

@@ -1,3 +1,4 @@
 [pytest]
 log_cli = True
 log_level = info
+trio_mode = True
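`trio_mode = True` is the pytest-trio ini option (the plugin itself is presumably pulled in with the test tooling); with it enabled, plain `async def` tests run under trio without a per-test marker. A minimal sketch:

```
# sketch: collected and run under trio with no explicit marker
import trio

async def test_noop():
    await trio.sleep(0)
```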

requirements.cuda.0.txt (deleted)

@@ -1,9 +0,0 @@
scipy
triton
accelerate
transformers
huggingface_hub
diffusers[torch]>=0.18.0
invisible-watermark
torch==1.13.0+cu117
--extra-index-url https://download.pytorch.org/whl/cu117

requirements.cuda.1.txt (deleted)

@@ -1 +0,0 @@
git+https://github.com/facebookresearch/xformers.git@main#egg=xformers

requirements.cuda.2.txt (deleted)

@@ -1,2 +0,0 @@
basicsr
realesrgan

requirements.txt (deleted)

@@ -1,15 +0,0 @@
pytz
trio
asks
numpy
pdbpp
Pillow
triopg
pytest
docker
aiohttp
psycopg2-binary
pyTelegramBotAPI
discord.py
py-leap@git+https://github.com/guilledk/py-leap.git@v0.1a14

setup.py (deleted)

@@ -1,21 +0,0 @@
from setuptools import setup, find_packages
from skynet.constants import VERSION
setup(
name='skynet',
version=VERSION,
description='Decentralized compute platform',
author='Guillermo Rodriguez',
author_email='guillermo@telos.net',
packages=find_packages(),
entry_points={
'console_scripts': [
'skynet = skynet.cli:skynet',
'txt2img = skynet.cli:txt2img',
'img2img = skynet.cli:img2img',
'upscale = skynet.cli:upscale'
]
},
install_requires=['click']
)

skynet.ini.example (deleted)

@@ -1,35 +0,0 @@
[skynet.dgpu]
account = testworkerX
permission = active
key = 5Xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
node_url = https://skynet.ancap.tech
hyperion_url = https://skynet.ancap.tech
ipfs_url = /ip4/169.197.140.154/tcp/4001/p2p/12D3KooWKWogLFNEcNNMKnzU7Snrnuj84RZdMBg3sLiQSQc51oEv
hf_home = hf_home
hf_token = hf_XxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXx
auto_withdraw = True
[skynet.telegram]
account = telegram
permission = active
key = 5Xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
node_url = https://skynet.ancap.tech
hyperion_url = https://skynet.ancap.tech
ipfs_url = /ip4/169.197.140.154/tcp/4001/p2p/12D3KooWKWogLFNEcNNMKnzU7Snrnuj84RZdMBg3sLiQSQc51oEv
token = XXXXXXXXXX:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[skynet.discord]
account = discord
permission = active
key = 5Xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
node_url = https://skynet.ancap.tech
hyperion_url = https://skynet.ancap.tech
ipfs_url = /ip4/169.197.140.154/tcp/4001/p2p/12D3KooWKWogLFNEcNNMKnzU7Snrnuj84RZdMBg3sLiQSQc51oEv
token = XXXXXXXXXX:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

skynet.toml.example

@@ -0,0 +1,47 @@
# config sections are optional, depending on which services
# you wish to run
[skynet.dgpu]
account = 'testworkerX'
permission = 'active'
key = '5Xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
node_url = 'https://testnet.skygpu.net'
hyperion_url = 'https://testnet.skygpu.net'
ipfs_gateway_url = '/ip4/169.197.140.154/tcp/4001/p2p/12D3KooWKWogLFNEcNNMKnzU7Snrnuj84RZdMBg3sLiQSQc51oEv'
ipfs_url = 'http://127.0.0.1:5001'
hf_home = 'hf_home'
hf_token = 'hf_XxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXx'
auto_withdraw = true
non_compete = []
api_bind = '127.0.0.1:42690'
[skynet.telegram]
account = 'telegram'
permission = 'active'
key = '5Xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
node_url = 'https://testnet.skygpu.net'
hyperion_url = 'https://testnet.skygpu.net'
ipfs_gateway_url = '/ip4/169.197.140.154/tcp/4001/p2p/12D3KooWKWogLFNEcNNMKnzU7Snrnuj84RZdMBg3sLiQSQc51oEv'
ipfs_url = 'http://127.0.0.1:5001'
token = 'XXXXXXXXXX:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
[skynet.discord]
account = 'discord'
permission = 'active'
key = '5Xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
node_url = 'https://testnet.skygpu.net'
hyperion_url = 'https://testnet.skygpu.net'
ipfs_gateway_url = '/ip4/169.197.140.154/tcp/4001/p2p/12D3KooWKWogLFNEcNNMKnzU7Snrnuj84RZdMBg3sLiQSQc51oEv'
ipfs_url = 'http://127.0.0.1:5001'
token = 'XXXXXXXXXX:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
[skynet.pinner]
hyperion_url = 'https://testnet.skygpu.net'
ipfs_url = 'http://127.0.0.1:5001'
[skynet.user]
account = 'testuser'
permission = 'active'
key = '5Xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
node_url = 'https://testnet.skygpu.net'
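The example file is plain TOML, so the `toml` dependency declared in pyproject.toml can read it directly. A minimal sketch, assuming the file has been copied to `skynet.toml` as the README shows:

```
# sketch: reading the example config by hand
import toml

config = toml.load('skynet.toml')
print(config['skynet']['dgpu']['account'])   # 'testworkerX'
print(config['skynet']['user']['node_url'])  # 'https://testnet.skygpu.net'
```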

skynet/cli.py

@@ -6,25 +6,12 @@ import random
 from functools import partial
 
-import trio
-import asks
 import click
-import asyncio
-import requests
-
-from leap.cleos import CLEOS
-from leap.sugar import collect_stdout, Name, asset_from_str
-from leap.hyperion import HyperionAPI
-
-from skynet.ipfs import IPFSHTTP
+from leap.sugar import Name, asset_from_str
 
-from .db import open_new_database
 from .config import *
-from .nodeos import open_cleos, open_nodeos
 from .constants import *
-from .frontend.telegram import SkynetTelegramFrontend
-from .frontend.discord import SkynetDiscordFrontend
 
 @click.group()
@@ -44,7 +31,11 @@ def skynet(*args, **kwargs):
 @click.option('--seed', '-S', default=None)
 def txt2img(*args, **kwargs):
     from . import utils
-    _, hf_token, _, _ = init_env_from_config()
+    config = load_skynet_toml()
+    hf_token = load_key(config, 'skynet.dgpu.hf_token')
+    hf_home = load_key(config, 'skynet.dgpu.hf_home')
+    set_hf_vars(hf_token, hf_home)
     utils.txt2img(hf_token, **kwargs)
@@ -59,7 +50,10 @@ def txt2img(*args, **kwargs):
 @click.option('--seed', '-S', default=None)
 def img2img(model, prompt, input, output, strength, guidance, steps, seed):
     from . import utils
-    _, hf_token, _, _ = init_env_from_config()
+    config = load_skynet_toml()
+    hf_token = load_key(config, 'skynet.dgpu.hf_token')
+    hf_home = load_key(config, 'skynet.dgpu.hf_home')
+    set_hf_vars(hf_token, hf_home)
     utils.img2img(
         hf_token,
         model=model,
@@ -87,48 +81,55 @@ def upscale(input, output, model):
 @skynet.command()
 def download():
     from . import utils
-    _, hf_token, _, _ = init_env_from_config()
-    utils.download_all_models(hf_token)
+    config = load_skynet_toml()
+    hf_token = load_key(config, 'skynet.dgpu.hf_token')
+    hf_home = load_key(config, 'skynet.dgpu.hf_home')
+    set_hf_vars(hf_token, hf_home)
+    utils.download_all_models(hf_token, hf_home)
 
 @skynet.command()
-@click.option(
-    '--account', '-A', default=None)
-@click.option(
-    '--permission', '-P', default=None)
-@click.option(
-    '--key', '-k', default=None)
-@click.option(
-    '--node-url', '-n', default='https://skynet.ancap.tech')
 @click.option(
     '--reward', '-r', default='20.0000 GPU')
 @click.option('--jobs', '-j', default=1)
-@click.option('--model', '-m', default='prompthero/openjourney')
+@click.option('--model', '-m', default='stabilityai/stable-diffusion-xl-base-1.0')
 @click.option(
     '--prompt', '-p', default='a red old tractor in a sunny wheat field')
 @click.option('--output', '-o', default='output.png')
-@click.option('--width', '-w', default=512)
-@click.option('--height', '-h', default=512)
+@click.option('--width', '-w', default=1024)
+@click.option('--height', '-h', default=1024)
 @click.option('--guidance', '-g', default=10)
 @click.option('--step', '-s', default=26)
 @click.option('--seed', '-S', default=None)
 @click.option('--upscaler', '-U', default='x4')
 @click.option('--binary_data', '-b', default='')
+@click.option('--strength', '-Z', default=None)
 def enqueue(
-    account: str,
-    permission: str,
-    key: str | None,
-    node_url: str,
     reward: str,
     jobs: int,
     **kwargs
 ):
-    key, account, permission = load_account_info(
-        'user', key, account, permission)
-    node_url, _, _ = load_endpoint_info(
-        'user', node_url, None, None)
+    import trio
+    from leap.cleos import CLEOS
+
+    config = load_skynet_toml()
+
+    key = load_key(config, 'skynet.user.key')
+    account = load_key(config, 'skynet.user.account')
+    permission = load_key(config, 'skynet.user.permission')
+    node_url = load_key(config, 'skynet.user.node_url')
+
+    cleos = CLEOS(None, None, url=node_url, remote=node_url)
+
+    binary = kwargs['binary_data']
+    if not kwargs['strength']:
+        if binary:
+            raise ValueError('strength -Z param required if binary data passed')
+
+        del kwargs['strength']
+
+    else:
+        kwargs['strength'] = float(kwargs['strength'])
 
-    with open_cleos(node_url, key=key) as cleos:
     async def enqueue_n_jobs():
         for i in range(jobs):
             if not kwargs['seed']:
@@ -138,7 +139,6 @@ def enqueue(
                 'method': 'diffuse',
                 'params': kwargs
             })
-            binary = kwargs['binary_data']
 
             res = await cleos.a_push_action(
                 'telos.gpu',
@@ -153,31 +153,23 @@ def enqueue(
                 account, key, permission,
             )
             print(res)
     trio.run(enqueue_n_jobs)
 
 @skynet.command()
 @click.option('--loglevel', '-l', default='INFO', help='Logging level')
-@click.option(
-    '--account', '-A', default='telos.gpu')
-@click.option(
-    '--permission', '-P', default='active')
-@click.option(
-    '--key', '-k', default=None)
-@click.option(
-    '--node-url', '-n', default='https://skynet.ancap.tech')
 def clean(
     loglevel: str,
-    account: str,
-    permission: str,
-    key: str | None,
-    node_url: str,
 ):
-    key, account, permission = load_account_info(
-        'user', key, account, permission)
-    node_url, _, _ = load_endpoint_info(
-        'user', node_url, None, None)
+    import trio
+    from leap.cleos import CLEOS
+
+    config = load_skynet_toml()
+    key = load_key(config, 'skynet.user.key')
+    account = load_key(config, 'skynet.user.account')
+    permission = load_key(config, 'skynet.user.permission')
+    node_url = load_key(config, 'skynet.user.node_url')
 
     logging.basicConfig(level=loglevel)
     cleos = CLEOS(None, None, url=node_url, remote=node_url)
@@ -192,11 +184,10 @@ def clean(
     )
 
 @skynet.command()
-@click.option(
-    '--node-url', '-n', default='https://skynet.ancap.tech')
-def queue(node_url: str):
-    node_url, _, _ = load_endpoint_info(
-        'user', node_url, None, None)
+def queue():
+    import requests
+    config = load_skynet_toml()
+    node_url = load_key(config, 'skynet.user.node_url')
     resp = requests.post(
         f'{node_url}/v1/chain/get_table_rows',
         json={
@@ -209,12 +200,11 @@ def queue(node_url: str):
     print(json.dumps(resp.json(), indent=4))
 
 @skynet.command()
-@click.option(
-    '--node-url', '-n', default='https://skynet.ancap.tech')
 @click.argument('request-id')
-def status(node_url: str, request_id: int):
-    node_url, _, _ = load_endpoint_info(
-        'user', node_url, None, None)
+def status(request_id: int):
+    import requests
+    config = load_skynet_toml()
+    node_url = load_key(config, 'skynet.user.node_url')
     resp = requests.post(
         f'{node_url}/v1/chain/get_table_rows',
         json={
@@ -227,106 +217,86 @@ def status(node_url: str, request_id: int):
     print(json.dumps(resp.json(), indent=4))
 
 @skynet.command()
-@click.option(
-    '--account', '-a', default='telegram')
-@click.option(
-    '--permission', '-p', default='active')
-@click.option(
-    '--key', '-k', default=None)
-@click.option(
-    '--node-url', '-n', default='https://skynet.ancap.tech')
 @click.argument('request-id')
-def dequeue(
-    account: str,
-    permission: str,
-    key: str | None,
-    node_url: str,
-    request_id: int
-):
-    key, account, permission = load_account_info(
-        'user', key, account, permission)
-    node_url, _, _ = load_endpoint_info(
-        'user', node_url, None, None)
-
-    with open_cleos(node_url, key=key) as cleos:
-        res = trio.run(cleos.a_push_action,
+def dequeue(request_id: int):
+    import trio
+    from leap.cleos import CLEOS
+
+    config = load_skynet_toml()
+    key = load_key(config, 'skynet.user.key')
+    account = load_key(config, 'skynet.user.account')
+    permission = load_key(config, 'skynet.user.permission')
+    node_url = load_key(config, 'skynet.user.node_url')
+
+    cleos = CLEOS(None, None, url=node_url, remote=node_url)
+    res = trio.run(
+        partial(
+            cleos.a_push_action,
             'telos.gpu',
             'dequeue',
             {
                 'user': Name(account),
                 'request_id': int(request_id),
             },
-            account, key, permission,
+            account, key, permission=permission
+        )
     )
     print(res)
 
 @skynet.command()
-@click.option(
-    '--account', '-a', default='telos.gpu')
-@click.option(
-    '--permission', '-p', default='active')
-@click.option(
-    '--key', '-k', default=None)
-@click.option(
-    '--node-url', '-n', default='https://skynet.ancap.tech')
 @click.option(
     '--token-contract', '-c', default='eosio.token')
 @click.option(
     '--token-symbol', '-S', default='4,GPU')
 def config(
-    account: str,
-    permission: str,
-    key: str | None,
-    node_url: str,
     token_contract: str,
     token_symbol: str
 ):
-    key, account, permission = load_account_info(
-        'user', key, account, permission)
-    node_url, _, _ = load_endpoint_info(
-        'user', node_url, None, None)
-
-    with open_cleos(node_url, key=key) as cleos:
-        res = trio.run(cleos.a_push_action,
+    import trio
+    from leap.cleos import CLEOS
+
+    config = load_skynet_toml()
+    key = load_key(config, 'skynet.user.key')
+    account = load_key(config, 'skynet.user.account')
+    permission = load_key(config, 'skynet.user.permission')
+    node_url = load_key(config, 'skynet.user.node_url')
+
+    cleos = CLEOS(None, None, url=node_url, remote=node_url)
+    res = trio.run(
+        partial(
+            cleos.a_push_action,
             'telos.gpu',
             'config',
             {
                 'token_contract': token_contract,
                 'token_symbol': token_symbol,
             },
-            account, key, permission,
+            account, key, permission=permission
+        )
     )
     print(res)
 
 @skynet.command()
-@click.option(
-    '--account', '-a', default='telegram')
-@click.option(
-    '--permission', '-p', default='active')
-@click.option(
-    '--key', '-k', default=None)
-@click.option(
-    '--node-url', '-n', default='https://skynet.ancap.tech')
 @click.argument('quantity')
-def deposit(
-    account: str,
-    permission: str,
-    key: str | None,
-    node_url: str,
-    quantity: str
-):
-    key, account, permission = load_account_info(
-        'user', key, account, permission)
-    node_url, _, _ = load_endpoint_info(
-        'user', node_url, None, None)
-
-    with open_cleos(node_url, key=key) as cleos:
-        res = trio.run(cleos.a_push_action,
-            'eosio.token',
+def deposit(quantity: str):
+    import trio
+    from leap.cleos import CLEOS
+
+    config = load_skynet_toml()
+    key = load_key(config, 'skynet.user.key')
+    account = load_key(config, 'skynet.user.account')
+    permission = load_key(config, 'skynet.user.permission')
+    node_url = load_key(config, 'skynet.user.node_url')
+
+    cleos = CLEOS(None, None, url=node_url, remote=node_url)
+    res = trio.run(
+        partial(
+            cleos.a_push_action,
+            'telos.gpu',
             'transfer',
             {
                 'sender': Name(account),
@@ -334,7 +304,8 @@ def deposit(
                 'amount': asset_from_str(quantity),
                 'memo': f'{account} transferred {quantity} to telos.gpu'
             },
-            account, key, permission,
+            account, key, permission=permission
+        )
     )
     print(res)
@@ -345,6 +316,8 @@ def run(*args, **kwargs):
 @run.command()
 def db():
+    from .db import open_new_database
+
     logging.basicConfig(level=logging.INFO)
     with open_new_database(cleanup=False) as db_params:
         container, passwd, host = db_params
@@ -352,6 +325,8 @@ def db():
 @run.command()
 def nodeos():
+    from .nodeos import open_nodeos
+
     logging.basicConfig(filename='skynet-nodeos.log', level=logging.INFO)
     with open_nodeos(cleanup=False):
         ...
@@ -359,36 +334,29 @@ def nodeos():
 @run.command()
 @click.option('--loglevel', '-l', default='INFO', help='Logging level')
 @click.option(
-    '--config-path', '-c', default='skynet.ini')
+    '--config-path', '-c', default=DEFAULT_CONFIG_PATH)
 def dgpu(
     loglevel: str,
     config_path: str
 ):
+    import trio
     from .dgpu import open_dgpu_node
 
     logging.basicConfig(level=loglevel)
 
-    config = load_skynet_ini(file_path=config_path)
+    config = load_skynet_toml(file_path=config_path)
+    hf_token = load_key(config, 'skynet.dgpu.hf_token')
+    hf_home = load_key(config, 'skynet.dgpu.hf_home')
+    set_hf_vars(hf_token, hf_home)
 
-    assert 'skynet.dgpu' in config
+    assert 'skynet' in config
+    assert 'dgpu' in config['skynet']
 
-    trio.run(open_dgpu_node, config['skynet.dgpu'])
+    trio.run(open_dgpu_node, config['skynet']['dgpu'])
 
 @run.command()
 @click.option('--loglevel', '-l', default='INFO', help='logging level')
-@click.option(
-    '--account', '-a', default='telegram')
-@click.option(
-    '--permission', '-p', default='active')
-@click.option(
-    '--key', '-k', default=None)
-@click.option(
-    '--hyperion-url', '-y', default=f'https://{DEFAULT_DOMAIN}')
-@click.option(
-    '--node-url', '-n', default=f'https://{DEFAULT_DOMAIN}')
-@click.option(
-    '--ipfs-url', '-i', default=DEFAULT_IPFS_REMOTE)
 @click.option(
     '--db-host', '-h', default='localhost:5432')
 @click.option(
@@ -397,25 +365,43 @@ def dgpu(
     '--db-pass', '-u', default='password')
 def telegram(
     loglevel: str,
-    account: str,
-    permission: str,
-    key: str | None,
-    hyperion_url: str,
-    ipfs_url: str,
-    node_url: str,
     db_host: str,
     db_user: str,
     db_pass: str
 ):
+    import asyncio
+    from .frontend.telegram import SkynetTelegramFrontend
+
     logging.basicConfig(level=loglevel)
 
-    _, _, tg_token, _ = init_env_from_config()
+    config = load_skynet_toml()
+    tg_token = load_key(config, 'skynet.telegram.tg_token')
 
-    key, account, permission = load_account_info(
-        'telegram', key, account, permission)
+    key = load_key(config, 'skynet.telegram.key')
+    account = load_key(config, 'skynet.telegram.account')
+    permission = load_key(config, 'skynet.telegram.permission')
+    node_url = load_key(config, 'skynet.telegram.node_url')
+    hyperion_url = load_key(config, 'skynet.telegram.hyperion_url')
 
-    node_url, _, ipfs_url = load_endpoint_info(
-        'telegram', node_url, None, None)
+    try:
+        ipfs_gateway_url = load_key(config, 'skynet.telegram.ipfs_gateway_url')
+    except ConfigParsingError:
+        ipfs_gateway_url = None
+
+    ipfs_url = load_key(config, 'skynet.telegram.ipfs_url')
+
+    try:
+        explorer_domain = load_key(config, 'skynet.telegram.explorer_domain')
+    except ConfigParsingError:
+        explorer_domain = DEFAULT_EXPLORER_DOMAIN
+
+    try:
+        ipfs_domain = load_key(config, 'skynet.telegram.ipfs_domain')
+    except ConfigParsingError:
+        ipfs_domain = DEFAULT_IPFS_DOMAIN
 
     async def _async_main():
         frontend = SkynetTelegramFrontend(
@@ -425,8 +411,11 @@ def telegram(
             node_url,
             hyperion_url,
             db_host, db_user, db_pass,
-            remote_ipfs_node=ipfs_url,
-            key=key
+            ipfs_url,
+            remote_ipfs_node=ipfs_gateway_url,
+            key=key,
+            explorer_domain=explorer_domain,
+            ipfs_domain=ipfs_domain
         )
 
         async with frontend.open():
@@ -438,18 +427,6 @@ def telegram(
 @run.command()
 @click.option('--loglevel', '-l', default='INFO', help='logging level')
-@click.option(
-    '--account', '-a', default='discord')
-@click.option(
-    '--permission', '-p', default='active')
-@click.option(
-    '--key', '-k', default=None)
-@click.option(
-    '--hyperion-url', '-y', default=f'https://{DEFAULT_DOMAIN}')
-@click.option(
-    '--node-url', '-n', default=f'https://{DEFAULT_DOMAIN}')
-@click.option(
-    '--ipfs-url', '-i', default=DEFAULT_IPFS_REMOTE)
 @click.option(
     '--db-host', '-h', default='localhost:5432')
 @click.option(
@@ -458,25 +435,38 @@ def telegram(
     '--db-pass', '-u', default='password')
 def discord(
     loglevel: str,
-    account: str,
-    permission: str,
-    key: str | None,
-    hyperion_url: str,
-    ipfs_url: str,
-    node_url: str,
     db_host: str,
     db_user: str,
     db_pass: str
 ):
+    import asyncio
+    from .frontend.discord import SkynetDiscordFrontend
+
     logging.basicConfig(level=loglevel)
 
-    _, _, _, dc_token = init_env_from_config()
+    config = load_skynet_toml()
+    dc_token = load_key(config, 'skynet.discord.dc_token')
 
-    key, account, permission = load_account_info(
-        'discord', key, account, permission)
+    key = load_key(config, 'skynet.discord.key')
+    account = load_key(config, 'skynet.discord.account')
+    permission = load_key(config, 'skynet.discord.permission')
+    node_url = load_key(config, 'skynet.discord.node_url')
+    hyperion_url = load_key(config, 'skynet.discord.hyperion_url')
 
-    node_url, _, ipfs_url = load_endpoint_info(
-        'discord', node_url, None, None)
+    ipfs_gateway_url = load_key(config, 'skynet.discord.ipfs_gateway_url')
+    ipfs_url = load_key(config, 'skynet.discord.ipfs_url')
+
+    try:
+        explorer_domain = load_key(config, 'skynet.discord.explorer_domain')
+    except ConfigParsingError:
+        explorer_domain = DEFAULT_EXPLORER_DOMAIN
+
+    try:
+        ipfs_domain = load_key(config, 'skynet.discord.ipfs_domain')
+    except ConfigParsingError:
+        ipfs_domain = DEFAULT_IPFS_DOMAIN
 
     async def _async_main():
         frontend = SkynetDiscordFrontend(
@@ -486,8 +476,11 @@ def discord(
             node_url,
             hyperion_url,
             db_host, db_user, db_pass,
-            remote_ipfs_node=ipfs_url,
-            key=key
+            ipfs_url,
+            remote_ipfs_node=ipfs_gateway_url,
+            key=key,
+            explorer_domain=explorer_domain,
+            ipfs_domain=ipfs_domain
         )
 
         async with frontend.open():
@@ -499,24 +492,28 @@ def discord(
 @run.command()
 @click.option('--loglevel', '-l', default='INFO', help='logging level')
 @click.option('--name', '-n', default='skynet-ipfs', help='container name')
-def ipfs(loglevel, name):
+@click.option('--peer', '-p', default=(), help='connect to peer', multiple=True, type=str)
+def ipfs(loglevel, name, peer):
     from skynet.ipfs.docker import open_ipfs_node
 
     logging.basicConfig(level=loglevel)
-    with open_ipfs_node(name=name):
+    with open_ipfs_node(name=name, peers=peer):
         ...
 
 @run.command()
 @click.option('--loglevel', '-l', default='INFO', help='logging level')
-@click.option(
-    '--ipfs-rpc', '-i', default='http://127.0.0.1:5001')
-@click.option(
-    '--hyperion-url', '-y', default='http://127.0.0.1:42001')
-def pinner(loglevel, ipfs_rpc, hyperion_url):
+def pinner(loglevel):
+    import trio
+    from leap.hyperion import HyperionAPI
+    from .ipfs import AsyncIPFSHTTP
     from .ipfs.pinner import SkynetPinner
 
+    config = load_skynet_toml()
+    hyperion_url = load_key(config, 'skynet.pinner.hyperion_url')
+    ipfs_url = load_key(config, 'skynet.pinner.ipfs_url')
+
     logging.basicConfig(level=loglevel)
-    ipfs_node = IPFSHTTP(ipfs_rpc)
+    ipfs_node = AsyncIPFSHTTP(ipfs_url)
     hyperion = HyperionAPI(hyperion_url)
 
     pinner = SkynetPinner(hyperion, ipfs_node)
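One detail of the refactor above: calls changed from `trio.run(cleos.a_push_action, ...)` to `trio.run(partial(cleos.a_push_action, ...))` because `trio.run` only forwards positional arguments, and the new code passes `permission=permission` by keyword. A self-contained sketch of the idiom (the `push_action` coroutine here is hypothetical):

```
# sketch: forwarding keyword arguments through trio.run via functools.partial
from functools import partial
import trio

async def push_action(contract, action, data, permission='active'):
    # hypothetical stand-in for cleos.a_push_action
    print(contract, action, data, permission)

trio.run(partial(
    push_action,
    'telos.gpu', 'dequeue', {'request_id': 1},
    permission='active',
))
```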

skynet/config.py

@@ -1,113 +1,33 @@
#!/usr/bin/python

import os
-import json
+import toml
from pathlib import Path
-from configparser import ConfigParser
-from re import sub

from .constants import DEFAULT_CONFIG_PATH


-def load_skynet_ini(
-    file_path=DEFAULT_CONFIG_PATH
-):
-    config = ConfigParser()
-    config.read(file_path)
+class ConfigParsingError(BaseException):
+    ...
+
+
+def load_skynet_toml(file_path=DEFAULT_CONFIG_PATH) -> dict:
+    config = toml.load(file_path)
+    return config
+
+
+def load_key(config: dict, key: str) -> str:
+    for skey in key.split('.'):
+        if skey not in config:
+            conf_keys = [k for k in config]
+            raise ConfigParsingError(f'key \"{skey}\" not in {conf_keys}')
+
+        config = config[skey]

    return config


-def init_env_from_config(
-    hf_token: str | None = None,
-    hf_home: str | None = None,
-    tg_token: str | None = None,
-    dc_token: str | None = None,
-    file_path=DEFAULT_CONFIG_PATH
-):
-    config = load_skynet_ini(file_path=file_path)
-
-    if 'HF_TOKEN' in os.environ:
-        hf_token = os.environ['HF_TOKEN']
-    elif 'skynet.dgpu' in config:
-        sub_config = config['skynet.dgpu']
-        if 'hf_token' in sub_config:
-            hf_token = sub_config['hf_token']
+def set_hf_vars(hf_token: str, hf_home: str):
    os.environ['HF_TOKEN'] = hf_token
-
-    if 'HF_HOME' in os.environ:
-        hf_home = os.environ['HF_HOME']
-    elif 'skynet.dgpu' in config:
-        sub_config = config['skynet.dgpu']
-        if 'hf_home' in sub_config:
-            hf_home = sub_config['hf_home']
    os.environ['HF_HOME'] = hf_home
-
-    if 'TG_TOKEN' in os.environ:
-        tg_token = os.environ['TG_TOKEN']
-    elif 'skynet.telegram' in config:
-        sub_config = config['skynet.telegram']
-        if 'token' in sub_config:
-            tg_token = sub_config['token']
-
-    if 'DC_TOKEN' in os.environ:
-        dc_token = os.environ['DC_TOKEN']
-    elif 'skynet.discord' in config:
-        sub_config = config['skynet.discord']
-        if 'token' in sub_config:
-            dc_token = sub_config['token']
-
-    return hf_home, hf_token, tg_token, dc_token
-
-
-def load_account_info(
-    _type: str,
-    key: str | None = None,
-    account: str | None = None,
-    permission: str | None = None,
-    file_path=DEFAULT_CONFIG_PATH
-):
-    config = load_skynet_ini(file_path=file_path)
-
-    type_key = f'skynet.{_type}'
-
-    if type_key in config:
-        sub_config = config[type_key]
-        if not key and 'key' in sub_config:
-            key = sub_config['key']
-
-        if not account and 'account' in sub_config:
-            account = sub_config['account']
-
-        if not permission and 'permission' in sub_config:
-            permission = sub_config['permission']
-
-    return key, account, permission
-
-
-def load_endpoint_info(
-    _type: str,
-    node_url: str | None = None,
-    hyperion_url: str | None = None,
-    ipfs_url: str | None = None,
-    file_path=DEFAULT_CONFIG_PATH
-):
-    config = load_skynet_ini(file_path=file_path)
-
-    type_key = f'skynet.{_type}'
-
-    if type_key in config:
-        sub_config = config[type_key]
-        if not node_url and 'node_url' in sub_config:
-            node_url = sub_config['node_url']
-
-        if not hyperion_url and 'hyperion_url' in sub_config:
-            hyperion_url = sub_config['hyperion_url']
-
-        if not ipfs_url and 'ipfs_url' in sub_config:
-            ipfs_url = sub_config['ipfs_url']
-
-    return node_url, hyperion_url, ipfs_url
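
The TOML loader above replaces three special-purpose INI helpers with one generic dotted-key lookup. A minimal sketch of how it is driven; the section and key names mirror the load_key calls in the CLI, while the values are placeholders:

import toml

SAMPLE = '''
[skynet.pinner]
hyperion_url = "http://127.0.0.1:42001"
ipfs_url = "http://127.0.0.1:5001"
'''

config = toml.loads(SAMPLE)

# load_key walks the dotted path through nested tables
assert load_key(config, 'skynet.pinner.ipfs_url') == 'http://127.0.0.1:5001'

# a missing key raises ConfigParsingError naming the keys that do exist
try:
    load_key(config, 'skynet.pinner.token')
except ConfigParsingError:
    pass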

View File

@@ -1,21 +1,24 @@
#!/usr/bin/python

-VERSION = '0.1a10'
+VERSION = '0.1a12'

DOCKER_RUNTIME_CUDA = 'skynet:runtime-cuda'

MODELS = {
-    'prompthero/openjourney': { 'short': 'midj'},
-    'runwayml/stable-diffusion-v1-5': { 'short': 'stable'},
-    'stabilityai/stable-diffusion-2-1-base': { 'short': 'stable2'},
-    'snowkidy/stable-diffusion-xl-base-0.9': { 'short': 'stablexl'},
-    'Linaqruf/anything-v3.0': { 'short': 'hdanime'},
-    'hakurei/waifu-diffusion': { 'short': 'waifu'},
-    'nitrosocke/Ghibli-Diffusion': { 'short': 'ghibli'},
-    'dallinmackay/Van-Gogh-diffusion': { 'short': 'van-gogh'},
-    'lambdalabs/sd-pokemon-diffusers': { 'short': 'pokemon'},
-    'Envvi/Inkpunk-Diffusion': { 'short': 'ink'},
-    'nousr/robo-diffusion': { 'short': 'robot'}
+    'prompthero/openjourney': {'short': 'midj', 'mem': 6},
+    'runwayml/stable-diffusion-v1-5': {'short': 'stable', 'mem': 6},
+    'stabilityai/stable-diffusion-2-1-base': {'short': 'stable2', 'mem': 6},
+    'snowkidy/stable-diffusion-xl-base-0.9': {'short': 'stablexl0.9', 'mem': 8.3},
+    'Linaqruf/anything-v3.0': {'short': 'hdanime', 'mem': 6},
+    'hakurei/waifu-diffusion': {'short': 'waifu', 'mem': 6},
+    'nitrosocke/Ghibli-Diffusion': {'short': 'ghibli', 'mem': 6},
+    'dallinmackay/Van-Gogh-diffusion': {'short': 'van-gogh', 'mem': 6},
+    'lambdalabs/sd-pokemon-diffusers': {'short': 'pokemon', 'mem': 6},
+    'Envvi/Inkpunk-Diffusion': {'short': 'ink', 'mem': 6},
+    'nousr/robo-diffusion': {'short': 'robot', 'mem': 6},
+
+    # default is always last
+    'stabilityai/stable-diffusion-xl-base-1.0': {'short': 'stablexl', 'mem': 8.3},
}
SHORT_NAMES = [
@@ -36,7 +39,7 @@ commands work on a user per user basis!
config is individual to each user!

/txt2img TEXT - request an image based on a prompt
-/img2img <attach_image> TEXT - request an image base on an image and a promtp
+/img2img <attach_image> TEXT - request an image base on an image and a prompt

/redo - redo last command (only works for txt2img for now!)
@@ -53,14 +56,17 @@ config is individual to each user!
{N.join(SHORT_NAMES)}

/config step NUMBER - set amount of iterations
-/config seed NUMBER - set the seed, deterministic results!
-/config size WIDTH HEIGHT - set size in pixels
+/config seed [auto|NUMBER] - set the seed, deterministic results!
+/config width NUMBER - set horizontal size in pixels
+/config height NUMBER - set vertical size in pixels
+/config upscaler [off/x4] - enable/disable x4 size upscaler
/config guidance NUMBER - prompt text importance
+/config strength NUMBER - importance of the input image for img2img
'''
UNKNOWN_CMD_TEXT = 'Unknown command! Try sending \"/help\"'

-DONATION_INFO = '0xf95335682DF281FFaB7E104EB87B69625d9622B6\ngoal: 25/650usd'
+DONATION_INFO = '0xf95335682DF281FFaB7E104EB87B69625d9622B6\ngoal: 0.0465/1.0000 ETH'
COOL_WORDS = [ COOL_WORDS = [
'cyberpunk', 'cyberpunk',
@@ -119,10 +125,19 @@ ing more may produce a slightly different picture, but not necessarily better \
quality.
''',
    'guidance': '''
The guidance scale is a parameter that controls how much the image generation\
process follows the text prompt. The higher the value, the more image sticks\
-to a given text input.
+to a given text input. Value range 0 to 20. Recomended range: 4.5-7.5.
+''',
+    'strength': '''
+Noise is added to the image you use as an init image for img2img, and then the\
+diffusion process continues according to the prompt. The amount of noise added\
+depends on the \"Strength of img2img\" parameter, which ranges from 0 to 1,\
+where 0 adds no noise at all and you will get the exact image you added, and\
+1 completely replaces the image with noise and almost acts as if you used\
+normal txt2img instead of img2img.
'''
}
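
The strength help text above maps directly onto the diffusers img2img API. A standalone sketch, not code from this repo, showing where guidance and strength land:

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# minimal sketch: load one of the models from the MODELS table above
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5',
    torch_dtype=torch.float16,
).to('cuda')

init = Image.open('input.png').convert('RGB')

out = pipe(
    prompt='a cyberpunk city at dusk',
    image=init,
    strength=0.5,        # 0 = return the init image, 1 = behave like txt2img
    guidance_scale=7.5,  # recommended 4.5-7.5 per the help text above
    num_inference_steps=28,
).images[0]
out.save('output.png')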
@@ -139,21 +154,20 @@ MAX_HEIGHT = 1024
MAX_GUIDANCE = 20

DEFAULT_SEED = None
-DEFAULT_WIDTH = 512
-DEFAULT_HEIGHT = 512
+DEFAULT_WIDTH = 1024
+DEFAULT_HEIGHT = 1024
DEFAULT_GUIDANCE = 7.5
DEFAULT_STRENGTH = 0.5
-DEFAULT_STEP = 35
+DEFAULT_STEP = 28
DEFAULT_CREDITS = 10
-DEFAULT_MODEL = list(MODELS.keys())[0]
+DEFAULT_MODEL = list(MODELS.keys())[-1]
DEFAULT_ROLE = 'pleb'
DEFAULT_UPSCALER = None

-DEFAULT_CONFIG_PATH = 'skynet.ini'
+DEFAULT_CONFIG_PATH = 'skynet.toml'

DEFAULT_INITAL_MODELS = [
-    'prompthero/openjourney',
-    'runwayml/stable-diffusion-v1-5'
+    'stabilityai/stable-diffusion-xl-base-1.0'
]

DATE_FORMAT = '%B the %dth %Y, %H:%M:%S'
@@ -169,6 +183,13 @@ CONFIG_ATTRS = [
    'upscaler'
]

-DEFAULT_DOMAIN = 'skygpu.net'
+DEFAULT_EXPLORER_DOMAIN = 'explorer.skygpu.net'
+DEFAULT_IPFS_DOMAIN = 'ipfs.skygpu.net'

DEFAULT_IPFS_REMOTE = '/ip4/169.197.140.154/tcp/4001/p2p/12D3KooWKWogLFNEcNNMKnzU7Snrnuj84RZdMBg3sLiQSQc51oEv'
+DEFAULT_IPFS_LOCAL = 'http://127.0.0.1:5001'
+
+TG_MAX_WIDTH = 1280
+TG_MAX_HEIGHT = 1280
+
+DEFAULT_SINGLE_CARD_MAP = 'cuda:0'

View File

@@ -43,6 +43,7 @@ CREATE TABLE IF NOT EXISTS skynet.user_config(
    guidance DECIMAL NOT NULL,
    strength DECIMAL NOT NULL,
    upscaler VARCHAR(128),
+    autoconf BOOLEAN DEFAULT TRUE,
    CONSTRAINT fk_config
        FOREIGN KEY(id)
            REFERENCES skynet.user(id)
@@ -165,6 +166,15 @@ async def open_database_connection(
        else:
            await conn.execute(DB_INIT_SQL)

+        col_check = await conn.fetch(f'''
+            select column_name
+            from information_schema.columns
+            where table_name = 'user_config' and column_name = 'autoconf';
+        ''')
+
+        if not col_check:
+            await conn.execute('alter table skynet.user_config add column autoconf boolean default true;')

    async def _db_call(method: str, *args, **kwargs):
        method = getattr(db, method)

View File

@@ -2,6 +2,9 @@

import trio

+from hypercorn.config import Config
+from hypercorn.trio import serve
+
from skynet.dgpu.compute import SkynetMM
from skynet.dgpu.daemon import SkynetDGPUDaemon
from skynet.dgpu.network import SkynetGPUConnector
@@ -10,7 +13,18 @@ from skynet.dgpu.network import SkynetGPUConnector
async def open_dgpu_node(config: dict):
    conn = SkynetGPUConnector(config)
    mm = SkynetMM(config)
+    daemon = SkynetDGPUDaemon(mm, conn, config)

-    async with conn.open() as conn:
-        await (SkynetDGPUDaemon(mm, conn, config)
-            .serve_forever())
+    api = None
+    if 'api_bind' in config:
+        api_conf = Config()
+        api_conf.bind = [config['api_bind']]
+        api = await daemon.generate_api()
+
+    async with trio.open_nursery() as n:
+        n.start_soon(daemon.snap_updater_task)
+        if api:
+            n.start_soon(serve, api, api_conf)
+        await daemon.serve_forever()
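
The worker now optionally serves a small Quart app from inside the same trio nursery that runs the daemon. The same pattern in isolation; the bind address here is illustrative:

import trio
from quart_trio import QuartTrio as Quart
from hypercorn.config import Config
from hypercorn.trio import serve

app = Quart(__name__)

@app.route('/')
async def health():
    return {'ok': True}

async def main():
    conf = Config()
    conf.bind = ['0.0.0.0:9090']  # illustrative bind address
    async with trio.open_nursery() as n:
        # the http server runs concurrently with any other daemon tasks
        n.start_soon(serve, app, conf)
        await trio.sleep_forever()

trio.run(main)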

View File

@@ -3,31 +3,43 @@
# Skynet Memory Manager

import gc
-from hashlib import sha256
-import json
import logging
+from hashlib import sha256
+import zipfile
+
+from PIL import Image
from diffusers import DiffusionPipeline

+import trio
import torch

from skynet.constants import DEFAULT_INITAL_MODELS, MODELS
-from skynet.dgpu.errors import DGPUComputeError
-from skynet.utils import convert_from_bytes_and_crop, convert_from_cv2_to_image, convert_from_image_to_cv2, convert_from_img_to_bytes, init_upscaler, pipeline_for
+from skynet.dgpu.errors import DGPUComputeError, DGPUInferenceCancelled
+from skynet.utils import crop_image, convert_from_cv2_to_image, convert_from_image_to_cv2, convert_from_img_to_bytes, init_upscaler, pipeline_for


def prepare_params_for_diffuse(
    params: dict,
-    binary: bytes | None = None
+    input_type: str,
+    binary = None
):
-    image = None
-    if binary:
-        image = convert_from_bytes_and_crop(binary, 512, 512)
-
    _params = {}
-    if image:
-        _params['image'] = image
-        _params['strength'] = float(params['strength'])
+    if binary != None:
+        match input_type:
+            case 'png':
+                image = crop_image(
+                    binary, params['width'], params['height'])
+
+                _params['image'] = image
+                _params['strength'] = float(params['strength'])
+
+            case 'none':
+                ...
+
+            case _:
+                raise DGPUComputeError(f'Unknown input_type {input_type}')

    else:
        _params['width'] = int(params['width'])
        _params['height'] = int(params['height'])
@@ -51,6 +63,10 @@ class SkynetMM:
            if 'initial_models' in config else DEFAULT_INITAL_MODELS
        )

+        self.cache_dir = None
+        if 'hf_home' in config:
+            self.cache_dir = config['hf_home']
+
        self._models = {}
        for model in self.initial_models:
            self.load_model(model, False, force=True)
@@ -75,7 +91,9 @@ class SkynetMM:
    ):
        logging.info(f'loading model {model_name}...')
        if force or len(self._models.keys()) == 0:
-            pipe = pipeline_for(model_name, image=image)
+            pipe = pipeline_for(
+                model_name, image=image, cache_dir=self.cache_dir)

            self._models[model_name] = {
                'pipe': pipe,
                'generated': 0,
@@ -97,7 +115,8 @@ class SkynetMM:
            gc.collect()
            torch.cuda.empty_cache()

-            pipe = pipeline_for(model_name, image=image)
+            pipe = pipeline_for(
+                model_name, image=image, cache_dir=self.cache_dir)

            self._models[model_name] = {
                'pipe': pipe,
@@ -122,38 +141,61 @@ class SkynetMM:
    def compute_one(
        self,
+        request_id: int,
        method: str,
        params: dict,
+        input_type: str = 'png',
        binary: bytes | None = None
    ):
+        def maybe_cancel_work(step, *args, **kwargs):
+            if self._should_cancel:
+                should_raise = trio.from_thread.run(self._should_cancel, request_id)
+                if should_raise:
+                    logging.warn(f'cancelling work at step {step}')
+                    raise DGPUInferenceCancelled()
+
+        maybe_cancel_work(0)
+
+        output_type = 'png'
+        if 'output_type' in params:
+            output_type = params['output_type']
+
+        output = None
+        output_hash = None
        try:
            match method:
                case 'diffuse':
-                    image = None
-
-                    arguments = prepare_params_for_diffuse(params, binary)
+                    arguments = prepare_params_for_diffuse(
+                        params, input_type, binary=binary)
                    prompt, guidance, step, seed, upscaler, extra_params = arguments
                    model = self.get_model(params['model'], 'image' in extra_params)

-                    image = model(
+                    output = model(
                        prompt,
                        guidance_scale=guidance,
                        num_inference_steps=step,
                        generator=seed,
+                        callback=maybe_cancel_work,
+                        callback_steps=1,
                        **extra_params
                    ).images[0]

-                    if upscaler == 'x4':
-                        input_img = image.convert('RGB')
-                        up_img, _ = self.upscaler.enhance(
-                            convert_from_image_to_cv2(input_img), outscale=4)
-
-                        image = convert_from_cv2_to_image(up_img)
-
-                    img_raw = convert_from_img_to_bytes(image)
-                    img_sha = sha256(img_raw).hexdigest()
-
-                    return img_sha, img_raw
+                    output_binary = b''
+                    match output_type:
+                        case 'png':
+                            if upscaler == 'x4':
+                                input_img = output.convert('RGB')
+                                up_img, _ = self.upscaler.enhance(
+                                    convert_from_image_to_cv2(input_img), outscale=4)
+
+                                output = convert_from_cv2_to_image(up_img)
+
+                            output_binary = convert_from_img_to_bytes(output)
+
+                        case _:
+                            raise DGPUComputeError(f'Unsupported output type: {output_type}')
+
+                    output_hash = sha256(output_binary).hexdigest()

                case _:
                    raise DGPUComputeError('Unsupported compute method')
@@ -164,3 +206,5 @@ class SkynetMM:
        finally:
            torch.cuda.empty_cache()
+
+        return output_hash, output
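
compute_one executes on a worker thread (see the 'sync-on-thread' backend in the daemon below), so the per-step callback uses trio.from_thread.run to hop back into the trio event loop and ask whether a competitor already answered the request. The mechanism in isolation, with illustrative names:

import trio

async def should_cancel(job_id: int) -> bool:
    # illustrative async check; the daemon's version inspects the queue snapshot
    return False

def blocking_work(job_id: int):
    for step in range(28):
        # hop from this worker thread back into the trio loop
        if trio.from_thread.run(should_cancel, job_id):
            raise RuntimeError('cancelled')
        # ... one denoising step ...

async def main():
    # to_thread.run_sync provides the portal that from_thread.run uses
    await trio.to_thread.run_sync(blocking_work, 1)

trio.run(main)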

View File

@@ -1,17 +1,35 @@
#!/usr/bin/python

import json
+import random
import logging
+import time
import traceback

from hashlib import sha256
+from datetime import datetime
+from functools import partial

import trio

+from quart import jsonify
+from quart_trio import QuartTrio as Quart
+
+from skynet.constants import MODELS, VERSION
+from skynet.dgpu.errors import *
from skynet.dgpu.compute import SkynetMM
from skynet.dgpu.network import SkynetGPUConnector


+def convert_reward_to_int(reward_str):
+    int_part, decimal_part = (
+        reward_str.split('.')[0],
+        reward_str.split('.')[1].split(' ')[0]
+    )
+    return int(int_part + decimal_part)
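
Assuming asset-style reward strings such as '1.0000 GPU', the helper simply drops the decimal point so amounts can be compared as plain integers:

convert_reward_to_int('1.0000 GPU')   # -> 10000
convert_reward_to_int('0.0500 GPU')   # -> 500

Because every reward carries the same number of decimal places, sorting the queue by this integer ranks better-paying requests first.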
class SkynetDGPUDaemon: class SkynetDGPUDaemon:
def __init__( def __init__(
@@ -27,27 +45,120 @@ class SkynetDGPUDaemon:
            if 'auto_withdraw' in config else False
        )
self.account = config['account']
self.non_compete = set()
if 'non_compete' in config:
self.non_compete = set(config['non_compete'])
self.model_whitelist = set()
if 'model_whitelist' in config:
self.model_whitelist = set(config['model_whitelist'])
self.model_blacklist = set()
if 'model_blacklist' in config:
self.model_blacklist = set(config['model_blacklist'])
self.backend = 'sync-on-thread'
if 'backend' in config:
self.backend = config['backend']
self._snap = {
'queue': [],
'requests': {},
'my_results': []
}
self._benchmark = []
self._last_benchmark = None
self._last_generation_ts = None
def _get_benchmark_speed(self) -> float:
if not self._last_benchmark:
return 0
start = self._last_benchmark[0]
end = self._last_benchmark[-1]
elapsed = end - start
its = len(self._last_benchmark)
speed = its / elapsed
logging.info(f'{elapsed} s total its: {its}, at {speed} it/s ')
return speed
async def should_cancel_work(self, request_id: int):
self._benchmark.append(time.time())
competitors = set([
status['worker']
for status in self._snap['requests'][request_id]
if status['worker'] != self.account
])
return bool(self.non_compete & competitors)
async def snap_updater_task(self):
while True:
self._snap = await self.conn.get_full_queue_snapshot()
await trio.sleep(1)
async def generate_api(self):
app = Quart(__name__)
@app.route('/')
async def health():
return jsonify(
account=self.account,
version=VERSION,
last_generation_ts=self._last_generation_ts,
last_generation_speed=self._get_benchmark_speed()
)
return app
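
With api_bind set, this health route exposes the worker's account, version, and benchmark speed. A probe sketch; the address is whatever 'api_bind' names in the worker config:

import asks

async def check_worker(url: str = 'http://localhost:9090'):
    # illustrative address; matches the worker's 'api_bind' config value
    resp = await asks.get(url)
    info = resp.json()
    # expected keys per generate_api() above:
    # account, version, last_generation_ts, last_generation_speed
    return info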
    async def serve_forever(self):
        try:
            while True:
                if self.auto_withdraw:
                    await self.conn.maybe_withdraw_all()

-                queue = await self.conn.get_work_requests_last_hour()
+                queue = self._snap['queue']
+
+                random.shuffle(queue)
+                queue = sorted(
+                    queue,
+                    key=lambda req: convert_reward_to_int(req['reward']),
+                    reverse=True
+                )

                for req in queue:
                    rid = req['id']

-                    my_results = [res['id'] for res in (await self.conn.find_my_results())]
-                    if rid not in my_results:
-                        statuses = await self.conn.get_status_by_request_id(rid)
-
-                        if len(statuses) == 0:
-                            # parse request
-                            body = json.loads(req['body'])
-
-                            binary = await self.conn.get_input_data(req['binary_data'])
+                    # parse request
+                    body = json.loads(req['body'])
+                    model = body['params']['model']
+
+                    # if model not known
+                    if model not in MODELS:
+                        logging.warning(f'Unknown model {model}')
+                        continue
+
+                    # if whitelist enabled and model not in it continue
+                    if (len(self.model_whitelist) > 0 and
+                        not model in self.model_whitelist):
+                        continue
+
+                    # if blacklist contains model skip
+                    if model in self.model_blacklist:
+                        continue
+
+                    my_results = [res['id'] for res in self._snap['my_results']]
+                    if rid not in my_results and rid in self._snap['requests']:
+                        statuses = self._snap['requests'][rid]
+
+                        if len(statuses) == 0:
+                            binary, input_type = await self.conn.get_input_data(req['binary_data'])

                            hash_str = (
                                str(req['nonce'])
@@ -70,17 +181,40 @@ class SkynetDGPUDaemon:
                        else:
                            try:
-                                img_sha, img_raw = self.mm.compute_one(
-                                    body['method'], body['params'], binary=binary)
-
-                                ipfs_hash = self.conn.publish_on_ipfs(img_raw)
-
-                                await self.conn.submit_work(rid, request_hash, img_sha, ipfs_hash)
-                                break
+                                output_type = 'png'
+                                if 'output_type' in body['params']:
+                                    output_type = body['params']['output_type']
+
+                                output = None
+                                output_hash = None
+                                match self.backend:
+                                    case 'sync-on-thread':
+                                        self.mm._should_cancel = self.should_cancel_work
+                                        output_hash, output = await trio.to_thread.run_sync(
+                                            partial(
+                                                self.mm.compute_one,
+                                                rid,
+                                                body['method'], body['params'],
+                                                input_type=input_type,
+                                                binary=binary
+                                            )
+                                        )
+
+                                    case _:
+                                        raise DGPUComputeError(f'Unsupported backend {self.backend}')
+
+                                self._last_generation_ts = datetime.now().isoformat()
+                                self._last_benchmark = self._benchmark
+                                self._benchmark = []
+
+                                ipfs_hash = await self.conn.publish_on_ipfs(output, typ=output_type)
+
+                                await self.conn.submit_work(rid, request_hash, output_hash, ipfs_hash)

                            except BaseException as e:
                                traceback.print_exc()
                                await self.conn.cancel_work(rid, str(e))

+                            finally:
                                break

                        else:

View File

@@ -3,3 +3,6 @@

class DGPUComputeError(BaseException):
    ...

+
+class DGPUInferenceCancelled(BaseException):
+    ...

View File

@@ -1,24 +1,28 @@
#!/usr/bin/python

-from functools import partial
import io
import json
import time
import logging
+
+from pathlib import Path
+from functools import partial

import asks
-from PIL import Image
-
-from contextlib import ExitStack
+import trio
+import anyio
+
+from PIL import Image, UnidentifiedImageError
+
from contextlib import asynccontextmanager as acm

from leap.cleos import CLEOS
from leap.sugar import Checksum256, Name, asset_from_str

-from skynet.constants import DEFAULT_DOMAIN
+from skynet.constants import DEFAULT_IPFS_DOMAIN

+from skynet.ipfs import AsyncIPFSHTTP, get_ipfs_file
from skynet.dgpu.errors import DGPUComputeError
-from skynet.ipfs import get_ipfs_file
-from skynet.ipfs.docker import open_ipfs_node
+
+
+REQUEST_UPDATE_TIME = 3
async def failable(fn: partial, ret_fail=None):
@@ -26,8 +30,11 @@ async def failable(fn: partial, ret_fail=None):
        return await fn()

    except (
+        OSError,
+        json.JSONDecodeError,
        asks.errors.RequestTimeout,
-        json.JSONDecodeError
+        asks.errors.BadHttpResponse,
+        anyio.BrokenResourceError
    ):
        return ret_fail
@@ -38,28 +45,25 @@ class SkynetGPUConnector:
        self.account = Name(config['account'])
        self.permission = config['permission']
        self.key = config['key']

        self.node_url = config['node_url']
        self.hyperion_url = config['hyperion_url']
-        self.ipfs_url = config['ipfs_url']

        self.cleos = CLEOS(
            None, None, self.node_url, remote=self.node_url)

-        self._exit_stack = ExitStack()
-
-    def connect(self):
-        self.ipfs_node = self._exit_stack.enter_context(
-            open_ipfs_node())
-
-    def disconnect(self):
-        self._exit_stack.close()
-
-    @acm
-    async def open(self):
-        self.connect()
-        yield self
-        self.disconnect()
+        self.ipfs_gateway_url = None
+        if 'ipfs_gateway_url' in config:
+            self.ipfs_gateway_url = config['ipfs_gateway_url']
+        self.ipfs_url = config['ipfs_url']
+
+        self.ipfs_client = AsyncIPFSHTTP(self.ipfs_url)
+
+        self.ipfs_domain = DEFAULT_IPFS_DOMAIN
+        if 'ipfs_domain' in config:
+            self.ipfs_domain = config['ipfs_domain']
+
+        self._wip_requests = {}

    # blockchain helpers
@@ -110,6 +114,36 @@ class SkynetGPUConnector:
        else:
            return None
async def get_competitors_for_req(self, request_id: int) -> set:
competitors = [
status['worker']
for status in
(await self.get_status_by_request_id(request_id))
if status['worker'] != self.account
]
logging.info(f'competitors: {competitors}')
return set(competitors)
async def get_full_queue_snapshot(self):
snap = {
'requests': {},
'my_results': []
}
snap['queue'] = await self.get_work_requests_last_hour()
async def _run_and_save(d, key: str, fn, *args, **kwargs):
d[key] = await fn(*args, **kwargs)
async with trio.open_nursery() as n:
n.start_soon(_run_and_save, snap, 'my_results', self.find_my_results)
for req in snap['queue']:
n.start_soon(
_run_and_save, snap['requests'], req['id'], self.get_status_by_request_id, req['id'])
return snap
async def begin_work(self, request_id: int): async def begin_work(self, request_id: int):
logging.info('begin_work') logging.info('begin_work')
return await failable( return await failable(
@@ -205,29 +239,74 @@ class SkynetGPUConnector:
        )

    # IPFS helpers

-    def publish_on_ipfs(self, raw_img: bytes):
+    async def publish_on_ipfs(self, raw, typ: str = 'png'):
+        Path('ipfs-staging').mkdir(exist_ok=True)
        logging.info('publish_on_ipfs')
-        img = Image.open(io.BytesIO(raw_img))
-        img.save(f'ipfs-docker-staging/image.png')

-        # check peer connections, reconnect to skynet gateway if not
-        peers = self.ipfs_node.check_connect()
-        if self.ipfs_url not in peers:
-            self.ipfs_node.connect(self.ipfs_url)
+        target_file = ''
+        match typ:
+            case 'png':
+                raw: Image
+                target_file = 'ipfs-staging/image.png'
+                raw.save(target_file)
+
+            case _:
+                raise ValueError(f'Unsupported output type: {typ}')
+
+        if self.ipfs_gateway_url:
+            # check peer connections, reconnect to skynet gateway if not
+            gateway_id = Path(self.ipfs_gateway_url).name
+            peers = await self.ipfs_client.peers()
+            if gateway_id not in [p['Peer'] for p in peers]:
+                await self.ipfs_client.connect(self.ipfs_gateway_url)

-        ipfs_hash = self.ipfs_node.add('image.png')
-        self.ipfs_node.pin(ipfs_hash)
+        file_info = await self.ipfs_client.add(Path(target_file))
+        file_cid = file_info['Hash']

-        return ipfs_hash
+        await self.ipfs_client.pin(file_cid)
+
+        return file_cid
-    async def get_input_data(self, ipfs_hash: str) -> bytes:
+    async def get_input_data(self, ipfs_hash: str) -> tuple[bytes, str]:
+        input_type = 'none'
+
        if ipfs_hash == '':
-            return b''
+            return b'', input_type

-        resp = await get_ipfs_file(f'https://ipfs.{DEFAULT_DOMAIN}/ipfs/{ipfs_hash}/image.png')
-        if not resp:
+        results = {}
+        ipfs_link = f'https://{self.ipfs_domain}/ipfs/{ipfs_hash}'
+        ipfs_link_legacy = ipfs_link + '/image.png'
+
+        async with trio.open_nursery() as n:
+            async def get_and_set_results(link: str):
+                res = await get_ipfs_file(link, timeout=1)
+                logging.info(f'got response from {link}')
+                if not res or res.status_code != 200:
+                    logging.warning(f'couldn\'t get ipfs binary data at {link}!')
+
+                else:
+                    try:
+                        # attempt to decode as image
+                        results[link] = Image.open(io.BytesIO(res.raw))
+                        input_type = 'png'
+                        n.cancel_scope.cancel()
+
+                    except UnidentifiedImageError:
+                        logging.warning(f'couldn\'t get ipfs binary data at {link}!')
+
+            n.start_soon(
+                get_and_set_results, ipfs_link)
+            n.start_soon(
+                get_and_set_results, ipfs_link_legacy)
+
+        input_data = None
+        if ipfs_link_legacy in results:
+            input_data = results[ipfs_link_legacy]
+
+        if ipfs_link in results:
+            input_data = results[ipfs_link]
+
+        if input_data == None:
            raise DGPUComputeError('Couldn\'t gather input data from ipfs')

-        return resp.raw
+        return input_data, input_type
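
The worker races the new-style CID link against the legacy /image.png path; the first successful image decode cancels the nursery's scope so the slower fetch is abandoned. A stripped-down sketch of that pattern, with illustrative URLs:

import trio

async def race(urls: list[str]) -> str | None:
    winner = None

    async with trio.open_nursery() as n:
        async def fetch(url: str):
            nonlocal winner
            await trio.sleep(0.1)  # stand-in for the http round trip
            winner = url
            n.cancel_scope.cancel()  # first success cancels the siblings

        for url in urls:
            n.start_soon(fetch, url)

    return winner

print(trio.run(race, ['https://a.example/x', 'https://b.example/x']))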

View File

@@ -1,5 +1,7 @@
#!/usr/bin/python

+import random
+
from ..constants import *
@@ -15,10 +17,14 @@ class ConfigUnknownAlgorithm(BaseException):
class ConfigUnknownUpscaler(BaseException):
    ...

+class ConfigUnknownAutoConfSetting(BaseException):
+    ...
+
class ConfigSizeDivisionByEight(BaseException):
    ...

def validate_user_config_request(req: str):
    params = req.split(' ')
@@ -78,6 +84,18 @@ def validate_user_config_request(req: str):
                    raise ConfigUnknownUpscaler(
                        f'\"{val}\" is not a valid upscaler')

+            case 'autoconf':
+                val = params[2]
+                if val == 'on':
+                    val = True
+
+                elif val == 'off':
+                    val = False
+
+                else:
+                    raise ConfigUnknownAutoConfSetting(
+                        f'\"{val}\" not a valid setting for autoconf')
+
            case _:
                raise ConfigUnknownAttribute(
                    f'\"{attr}\" not a configurable parameter')
@@ -92,3 +110,22 @@ def validate_user_config_request(req: str):
    except ValueError:
        raise ValueError(f'\"{val}\" is not a number silly')

+
+def perform_auto_conf(config: dict) -> dict:
+    model = config['model']
+    prefered_size_w = 512
+    prefered_size_h = 512
+
+    if 'xl' in model:
+        prefered_size_w = 1024
+        prefered_size_h = 1024
+
+    else:
+        prefered_size_w = 512
+        prefered_size_h = 512
+
+    config['step'] = random.randint(20, 35)
+    config['width'] = prefered_size_w
+    config['height'] = prefered_size_h
+
+    return config
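
In use, the frontends pass a user's stored config row through this helper whenever autoconf is enabled. A quick worked example with an illustrative row:

# illustrative user_config row, not real data
user_config = {
    'model': 'stabilityai/stable-diffusion-xl-base-1.0',
    'step': 28, 'width': 512, 'height': 512,
}

tuned = perform_auto_conf(user_config)
# 'xl' appears in the model name, so width/height become 1024x1024
# and step is re-rolled to a random value in [20, 35]
assert tuned['width'] == 1024 and tuned['height'] == 1024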

View File

@@ -19,9 +19,8 @@ from leap.hyperion import HyperionAPI
import discord
import io

-from skynet.db import open_new_database, open_database_connection
-from skynet.ipfs import get_ipfs_file
-from skynet.ipfs.docker import open_ipfs_node
+from skynet.db import open_database_connection
+from skynet.ipfs import get_ipfs_file, AsyncIPFSHTTP
from skynet.constants import *

from . import *
@@ -44,8 +43,11 @@ class SkynetDiscordFrontend:
        db_host: str,
        db_user: str,
        db_pass: str,
+        ipfs_url: str,
        remote_ipfs_node: str,
-        key: str
+        key: str,
+        explorer_domain: str,
+        ipfs_domain: str
    ):
        # self.token = token
        self.account = account
@@ -55,23 +57,23 @@ class SkynetDiscordFrontend:
        self.db_host = db_host
        self.db_user = db_user
        self.db_pass = db_pass
+        self.ipfs_url = ipfs_url
        self.remote_ipfs_node = remote_ipfs_node
        self.key = key
+        self.explorer_domain = explorer_domain
+        self.ipfs_domain = ipfs_domain

        self.bot = DiscordBot(self)
        self.cleos = CLEOS(None, None, url=node_url, remote=node_url)
        self.hyperion = HyperionAPI(hyperion_url)
+        self.ipfs_node = AsyncIPFSHTTP(ipfs_node)

        self._exit_stack = ExitStack()
        self._async_exit_stack = AsyncExitStack()

    async def start(self):
-        self.ipfs_node = self._exit_stack.enter_context(
-            open_ipfs_node())
-        self.ipfs_node.connect(self.remote_ipfs_node)
-        logging.info(
-            f'connected to remote ipfs node: {self.remote_ipfs_node}')
+        if self.remote_ipfs_node:
+            await self.ipfs_node.connect(self.remote_ipfs_node)

        self.db_call = await self._async_exit_stack.enter_async_context(
            open_database_connection(
@@ -121,7 +123,7 @@ class SkynetDiscordFrontend:
        ctx: discord.ext.commands.context.Context | discord.Message,
        file_id: str | None = None,
        binary_data: str = ''
-    ):
+    ) -> bool:
        send = ctx.channel.send

        if params['seed'] == None:
@@ -168,10 +170,10 @@ class SkynetDiscordFrontend:
            await self.bot.channel.send(
                status_msg,
                'skynet has suffered an internal error trying to fill this request')
-            return
+            return False

        enqueue_tx_id = res['transaction_id']
-        enqueue_tx_link = f'[**Your request on Skynet Explorer**](https://explorer.{DEFAULT_DOMAIN}/v2/explore/transaction/{enqueue_tx_id})'
+        enqueue_tx_link = f'[**Your request on Skynet Explorer**](https://{self.explorer_domain}/v2/explore/transaction/{enqueue_tx_id})'

        msg_text += f'**broadcasted!** \n{enqueue_tx_link}\n[{timestamp_pretty()}] *workers are processing request...* '

        embed = discord.Embed(
@@ -230,7 +232,7 @@ class SkynetDiscordFrontend:
            color=discord.Color.blue())

            await message.edit(embed=embed)
-            return
+            return False

        tx_link = f'[**Your result on Skynet Explorer**](https://explorer.{DEFAULT_DOMAIN}/v2/explore/transaction/{tx_hash})'
@@ -243,8 +245,48 @@ class SkynetDiscordFrontend:
        await message.edit(embed=embed)

        # attempt to get the image and send it
-        ipfs_link = f'https://ipfs.{DEFAULT_DOMAIN}/ipfs/{ipfs_hash}/image.png'
-        resp = await get_ipfs_file(ipfs_link)
+        results = {}
+        ipfs_link = f'https://{self.ipfs_domain}/ipfs/{ipfs_hash}'
+        ipfs_link_legacy = ipfs_link + '/image.png'
+
+        async def get_and_set_results(link: str):
+            res = await get_ipfs_file(link)
+            logging.info(f'got response from {link}')
+            if not res or res.status_code != 200:
+                logging.warning(f'couldn\'t get ipfs binary data at {link}!')
+
+            else:
+                try:
+                    with Image.open(io.BytesIO(res.raw)) as image:
+                        tmp_buf = io.BytesIO()
+                        image.save(tmp_buf, format='PNG')
+                        png_img = tmp_buf.getvalue()
+
+                    results[link] = png_img
+
+                except UnidentifiedImageError:
+                    logging.warning(f'couldn\'t get ipfs binary data at {link}!')
+
+        tasks = [
+            get_and_set_results(ipfs_link),
+            get_and_set_results(ipfs_link_legacy)
+        ]
+        await asyncio.gather(*tasks)
+
+        png_img = None
+        if ipfs_link_legacy in results:
+            png_img = results[ipfs_link_legacy]
+
+        if ipfs_link in results:
+            png_img = results[ipfs_link]
+
+        if not png_img:
+            await self.update_status_message(
+                status_msg,
+                caption,
+                reply_markup=build_redo_menu(),
+                parse_mode='HTML'
+            )
+            return True

        # reword this function, may not need caption
        caption, embed = generate_reply_caption(
@@ -265,3 +307,5 @@ class SkynetDiscordFrontend:
        else: # txt2img
            embed.set_image(url=ipfs_link)
            await send(embed=embed, view=SkynetView(self))
+
+        return True

View File

@@ -115,9 +115,9 @@ def create_handler_context(frontend: 'SkynetDiscordFrontend'):
        await db_call(
            'update_user_stats', user.id, 'txt2img', last_prompt=prompt)

-        ec = await work_request(user, status_msg, 'txt2img', params, ctx)
+        success = await work_request(user, status_msg, 'txt2img', params, ctx)

-        if ec == None:
+        if success:
            await db_call('increment_generated', user.id)

    @bot.command(name='redo', help='Redo last request')
@@ -153,13 +153,13 @@ def create_handler_context(frontend: 'SkynetDiscordFrontend'):
            **user_config
        }

-        ec = await work_request(
+        success = await work_request(
            user, status_msg, 'redo', params, ctx,
            file_id=file_id,
            binary_data=binary
        )

-        if ec == None:
+        if success:
            await db_call('increment_generated', user.id)

    @bot.command(name='img2img', help='Responds with an image')
@@ -217,10 +217,12 @@ def create_handler_context(frontend: 'SkynetDiscordFrontend'):
                image.thumbnail((512, 512))
                logging.warning(f'resized it to {image.size}')

-        image.save(f'ipfs-docker-staging/image.png', format='PNG')
+        image_loc = 'ipfs-staging/image.png'
+        image.save(image_loc, format='PNG')

-        ipfs_hash = ipfs_node.add('image.png')
-        ipfs_node.pin(ipfs_hash)
+        ipfs_info = await ipfs_node.add(image_loc)
+        ipfs_hash = ipfs_info['Hash']
+        await ipfs_node.pin(ipfs_hash)

        logging.info(f'published input image {ipfs_hash} on ipfs')
@@ -243,13 +245,13 @@ def create_handler_context(frontend: 'SkynetDiscordFrontend'):
            last_binary=ipfs_hash
        )

-        ec = await work_request(
+        success = await work_request(
            user, status_msg, 'img2img', params, ctx,
            file_id=file_id,
            binary_data=ipfs_hash
        )

-        if ec == None:
+        if success:
            await db_call('increment_generated', user.id)

View File

@@ -63,9 +63,9 @@ class Txt2ImgButton(discord.ui.Button):
        await db_call(
            'update_user_stats', user.id, 'txt2img', last_prompt=prompt)

-        ec = await work_request(user, status_msg, 'txt2img', params, msg)
+        success = await work_request(user, status_msg, 'txt2img', params, msg)

-        if ec == None:
+        if success:
            await db_call('increment_generated', user.id)
@@ -145,13 +145,13 @@ class Img2ImgButton(discord.ui.Button):
            last_binary=ipfs_hash
        )

-        ec = await work_request(
+        success = await work_request(
            user, status_msg, 'img2img', params, msg,
            file_id=file_id,
            binary_data=ipfs_hash
        )

-        if ec == None:
+        if success:
            await db_call('increment_generated', user.id)
@@ -195,13 +195,13 @@ class RedoButton(discord.ui.Button):
            'prompt': prompt,
            **user_config
        }

-        ec = await work_request(
+        success = await work_request(
            user, status_msg, 'redo', params, interaction,
            file_id=file_id,
            binary_data=binary
        )

-        if ec == None:
+        if success:
            await db_call('increment_generated', user.id)

View File

@@ -81,11 +81,12 @@ def generate_reply_caption(
    params: dict,
    tx_hash: str,
    worker: str,
-    reward: str
+    reward: str,
+    explorer_domain: str
):
    explorer_link = discord.Embed(
        title='[SKYNET Transaction Explorer]',
-        url=f'https://explorer.{DEFAULT_DOMAIN}/v2/explore/transaction/{tx_hash}',
+        url=f'https://{explorer_domain}/v2/explore/transaction/{tx_hash}',
        color=discord.Color.blue())

    meta_info = prepare_metainfo_caption(user, worker, reward, params, explorer_link)

View File

@@ -1,26 +1,27 @@
#!/usr/bin/python

-from json import JSONDecodeError
+import io
import random
import logging
import asyncio

+from PIL import Image, UnidentifiedImageError
+
+from json import JSONDecodeError
from decimal import Decimal
from hashlib import sha256
from datetime import datetime
-from contextlib import ExitStack, AsyncExitStack
+from contextlib import AsyncExitStack
from contextlib import asynccontextmanager as acm

from leap.cleos import CLEOS
from leap.sugar import Name, asset_from_str, collect_stdout
from leap.hyperion import HyperionAPI

from telebot.types import InputMediaPhoto
from telebot.async_telebot import AsyncTeleBot

-from skynet.db import open_new_database, open_database_connection
-from skynet.ipfs import get_ipfs_file
-from skynet.ipfs.docker import open_ipfs_node
+from skynet.db import open_database_connection
+from skynet.ipfs import get_ipfs_file, AsyncIPFSHTTP
from skynet.constants import *

from . import *
from . import * from . import *
@@ -41,8 +42,11 @@ class SkynetTelegramFrontend:
        db_host: str,
        db_user: str,
        db_pass: str,
-        remote_ipfs_node: str,
-        key: str
+        ipfs_node: str,
+        remote_ipfs_node: str | None,
+        key: str,
+        explorer_domain: str,
+        ipfs_domain: str
    ):
        self.token = token
        self.account = account
@@ -54,21 +58,19 @@ class SkynetTelegramFrontend:
        self.db_pass = db_pass
        self.remote_ipfs_node = remote_ipfs_node
        self.key = key
+        self.explorer_domain = explorer_domain
+        self.ipfs_domain = ipfs_domain

        self.bot = AsyncTeleBot(token, exception_handler=SKYExceptionHandler)
        self.cleos = CLEOS(None, None, url=node_url, remote=node_url)
        self.hyperion = HyperionAPI(hyperion_url)
+        self.ipfs_node = AsyncIPFSHTTP(ipfs_node)

-        self._exit_stack = ExitStack()
        self._async_exit_stack = AsyncExitStack()

    async def start(self):
-        self.ipfs_node = self._exit_stack.enter_context(
-            open_ipfs_node())
-        self.ipfs_node.connect(self.remote_ipfs_node)
-        logging.info(
-            f'connected to remote ipfs node: {self.remote_ipfs_node}')
+        if self.remote_ipfs_node:
+            await self.ipfs_node.connect(self.remote_ipfs_node)

        self.db_call = await self._async_exit_stack.enter_async_context(
            open_database_connection(
@@ -78,7 +80,6 @@ class SkynetTelegramFrontend:

    async def stop(self):
        await self._async_exit_stack.aclose()
-        self._exit_stack.close()

    @acm
    async def open(self):
@@ -116,7 +117,7 @@ class SkynetTelegramFrontend:
        params: dict,
        file_id: str | None = None,
        binary_data: str = ''
-    ):
+    ) -> bool:
        if params['seed'] == None:
            params['seed'] = random.randint(0, 0xFFFFFFFF)
@@ -159,12 +160,12 @@ class SkynetTelegramFrontend:
            await self.update_status_message(
                status_msg,
                'skynet has suffered an internal error trying to fill this request')
-            return
+            return False

        enqueue_tx_id = res['transaction_id']
        enqueue_tx_link = hlink(
            'Your request on Skynet Explorer',
-            f'https://explorer.{DEFAULT_DOMAIN}/v2/explore/transaction/{enqueue_tx_id}'
+            f'https://{self.explorer_domain}/v2/explore/transaction/{enqueue_tx_id}'
        )

        await self.append_status_message(
@@ -221,11 +222,11 @@ class SkynetTelegramFrontend:
                f'\n[{timestamp_pretty()}] <b>timeout processing request</b>',
                parse_mode='HTML'
            )
-            return
+            return False

        tx_link = hlink(
            'Your result on Skynet Explorer',
-            f'https://explorer.{DEFAULT_DOMAIN}/v2/explore/transaction/{tx_hash}'
+            f'https://{self.explorer_domain}/v2/explore/transaction/{tx_hash}'
        )

        await self.append_status_message(
await self.append_status_message( await self.append_status_message(
@@ -236,23 +237,60 @@ class SkynetTelegramFrontend:
            parse_mode='HTML'
        )

-        # attempt to get the image and send it
-        ipfs_link = f'https://ipfs.{DEFAULT_DOMAIN}/ipfs/{ipfs_hash}/image.png'
-        resp = await get_ipfs_file(ipfs_link)
-
        caption = generate_reply_caption(
-            user, params, tx_hash, worker, reward)
+            user, params, tx_hash, worker, reward, self.explorer_domain)

-        if not resp or resp.status_code != 200:
-            logging.error(f'couldn\'t get ipfs hosted image at {ipfs_link}!')
+        # attempt to get the image and send it
+        results = {}
+        ipfs_link = f'https://{self.ipfs_domain}/ipfs/{ipfs_hash}'
+        ipfs_link_legacy = ipfs_link + '/image.png'
+
+        async def get_and_set_results(link: str):
+            res = await get_ipfs_file(link)
+            logging.info(f'got response from {link}')
+            if not res or res.status_code != 200:
+                logging.warning(f'couldn\'t get ipfs binary data at {link}!')
+
+            else:
+                try:
+                    with Image.open(io.BytesIO(res.raw)) as image:
+                        w, h = image.size
+
+                        if w > TG_MAX_WIDTH or h > TG_MAX_HEIGHT:
+                            logging.warning(f'result is of size {image.size}')
+                            image.thumbnail((TG_MAX_WIDTH, TG_MAX_HEIGHT))
+
+                        tmp_buf = io.BytesIO()
+                        image.save(tmp_buf, format='PNG')
+                        png_img = tmp_buf.getvalue()
+
+                    results[link] = png_img
+
+                except UnidentifiedImageError:
+                    logging.warning(f'couldn\'t get ipfs binary data at {link}!')
+
+        tasks = [
+            get_and_set_results(ipfs_link),
+            get_and_set_results(ipfs_link_legacy)
+        ]
+        await asyncio.gather(*tasks)
+
+        png_img = None
+        if ipfs_link_legacy in results:
+            png_img = results[ipfs_link_legacy]
+
+        if ipfs_link in results:
+            png_img = results[ipfs_link]
+
+        if not png_img:
            await self.update_status_message(
                status_msg,
                caption,
                reply_markup=build_redo_menu(),
                parse_mode='HTML'
            )
+            return True

-        else:
-            logging.info(f'success! sending generated image')
-            await self.bot.delete_message(
-                chat_id=status_msg.chat.id, message_id=status_msg.id)
+        logging.info(f'success! sending generated image')
+        await self.bot.delete_message(
+            chat_id=status_msg.chat.id, message_id=status_msg.id)
@@ -262,7 +300,7 @@ class SkynetTelegramFrontend:
                media=[
                    InputMediaPhoto(file_id),
                    InputMediaPhoto(
-                        resp.raw,
+                        png_img,
                        caption=caption,
                        parse_mode='HTML'
                    )
@@ -273,7 +311,9 @@ class SkynetTelegramFrontend:
            await self.bot.send_photo(
                status_msg.chat.id,
                caption=caption,
-                photo=resp.raw,
+                photo=png_img,
                reply_markup=build_redo_menu(),
                parse_mode='HTML'
            )
+
+        return True

View File

@@ -9,7 +9,7 @@ from datetime import datetime, timedelta
from PIL import Image
from telebot.types import CallbackQuery, Message

-from skynet.frontend import validate_user_config_request
+from skynet.frontend import validate_user_config_request, perform_auto_conf
from skynet.constants import *
@@ -118,6 +118,9 @@ def create_handler_context(frontend: 'SkynetTelegramFrontend'):
        user = message.from_user
        chat = message.chat

+        if chat.type == 'private':
+            return
+
        reply_id = None
        if chat.type == 'group' and chat.id == GROUP_ID:
            reply_id = message.message_id
@@ -146,6 +149,9 @@ def create_handler_context(frontend: 'SkynetTelegramFrontend'):
        user_config = {**user_row}
        del user_config['id']

+        if user_config['autoconf']:
+            user_config = perform_auto_conf(user_config)
+
        params = {
            'prompt': prompt,
            **user_config
@@ -154,9 +160,9 @@ def create_handler_context(frontend: 'SkynetTelegramFrontend'):
        await db_call(
            'update_user_stats', user.id, 'txt2img', last_prompt=prompt)

-        ec = await work_request(user, status_msg, 'txt2img', params)
+        success = await work_request(user, status_msg, 'txt2img', params)

-        if ec == 0:
+        if success:
            await db_call('increment_generated', user.id)
@@ -174,6 +180,9 @@ def create_handler_context(frontend: 'SkynetTelegramFrontend'):
        user = message.from_user
        chat = message.chat

+        if chat.type == 'private':
+            return
+
        reply_id = None
        if chat.type == 'group' and chat.id == GROUP_ID:
            reply_id = message.message_id
@@ -203,26 +212,31 @@ def create_handler_context(frontend: 'SkynetTelegramFrontend'):
        file_path = (await bot.get_file(file_id)).file_path
        image_raw = await bot.download_file(file_path)

+        user_config = {**user_row}
+        del user_config['id']
+        if user_config['autoconf']:
+            user_config = perform_auto_conf(user_config)
+
        with Image.open(io.BytesIO(image_raw)) as image:
            w, h = image.size

-            if w > 512 or h > 512:
+            if w > user_config['width'] or h > user_config['height']:
                logging.warning(f'user sent img of size {image.size}')
-                image.thumbnail((512, 512))
+                image.thumbnail(
+                    (user_config['width'], user_config['height']))
                logging.warning(f'resized it to {image.size}')

-        image.save(f'ipfs-docker-staging/image.png', format='PNG')
+        image_loc = 'ipfs-staging/image.png'
+        image.save(image_loc, format='PNG')

-        ipfs_hash = ipfs_node.add('image.png')
-        ipfs_node.pin(ipfs_hash)
+        ipfs_info = await ipfs_node.add(image_loc)
+        ipfs_hash = ipfs_info['Hash']
+        await ipfs_node.pin(ipfs_hash)

        logging.info(f'published input image {ipfs_hash} on ipfs')

        logging.info(f'mid: {message.id}')

-        user_config = {**user_row}
-        del user_config['id']
-
        params = {
            'prompt': prompt,
            **user_config
@@ -237,13 +251,13 @@ def create_handler_context(frontend: 'SkynetTelegramFrontend'):
            last_binary=ipfs_hash
        )

-        ec = await work_request(
+        success = await work_request(
            user, status_msg, 'img2img', params,
            file_id=file_id,
            binary_data=ipfs_hash
        )

-        if ec == 0:
+        if success:
            await db_call('increment_generated', user.id)
@@ -263,6 +277,9 @@ def create_handler_context(frontend: 'SkynetTelegramFrontend'):
        user = message.from_user
        chat = message.chat

+        if chat.type == 'private':
+            return
+
        init_msg = 'started processing redo request...'
        if is_query:
            status_msg = await bot.send_message(chat.id, init_msg)
@@ -292,18 +309,23 @@ def create_handler_context(frontend: 'SkynetTelegramFrontend'):
            'new_user_request', user.id, message.id, status_msg.id, status=init_msg)

        user_config = {**user_row}
        del user_config['id']
+        if user_config['autoconf']:
+            user_config = perform_auto_conf(user_config)

        params = {
            'prompt': prompt,
            **user_config
        }

-        await work_request(
+        success = await work_request(
            user, status_msg, 'redo', params,
            file_id=file_id,
            binary_data=binary
        )

+        if success:
+            await db_call('increment_generated', user.id)
# "proxy" handlers just request routers # "proxy" handlers just request routers

View File

@@ -67,11 +67,12 @@ def generate_reply_caption(
    params: dict,
    tx_hash: str,
    worker: str,
-    reward: str
+    reward: str,
+    explorer_domain: str
):
    explorer_link = hlink(
        'SKYNET Transaction Explorer',
-        f'https://explorer.{DEFAULT_DOMAIN}/v2/explore/transaction/{tx_hash}'
+        f'https://explorer.{explorer_domain}/v2/explore/transaction/{tx_hash}'
    )

    meta_info = prepare_metainfo_caption(tguser, worker, reward, params)

View File

@@ -1,33 +1,64 @@
#!/usr/bin/python

import logging
+from pathlib import Path

import asks
-import requests


-class IPFSHTTP:
+class IPFSClientException(BaseException):
+    ...
+
+
+class AsyncIPFSHTTP:

    def __init__(self, endpoint: str):
        self.endpoint = endpoint

-    def pin(self, cid: str):
-        return requests.post(
-            f'{self.endpoint}/api/v0/pin/add',
-            params={'arg': cid}
+    async def _post(self, sub_url: str, *args, **kwargs):
+        resp = await asks.post(
+            self.endpoint + sub_url,
+            *args, **kwargs
        )

-    async def a_pin(self, cid: str):
-        return await asks.post(
-            f'{self.endpoint}/api/v0/pin/add',
-            params={'arg': cid}
+        if resp.status_code != 200:
+            raise IPFSClientException(resp.text)
+
+        return resp.json()
+
+    async def add(self, file_path: Path, **kwargs):
+        files = {
+            'file': file_path
+        }
+        return await self._post(
+            '/api/v0/add',
+            files=files,
+            params=kwargs
        )

+    async def pin(self, cid: str):
+        return (await self._post(
+            '/api/v0/pin/add',
+            params={'arg': cid}
+        ))['Pins']
+
+    async def connect(self, multi_addr: str):
+        return await self._post(
+            '/api/v0/swarm/connect',
+            params={'arg': multi_addr}
+        )
+
+    async def peers(self, **kwargs):
+        return (await self._post(
+            '/api/v0/swarm/peers',
+            params=kwargs
+        ))['Peers']


-async def get_ipfs_file(ipfs_link: str):
+async def get_ipfs_file(ipfs_link: str, timeout: int = 60):
    logging.info(f'attempting to get image at {ipfs_link}')
    resp = None
-    for i in range(20):
+    for i in range(timeout):
        try:
            resp = await asks.get(ipfs_link, timeout=3)
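
A usage sketch for the new client, assuming a local node exposing the standard RPC API on the default port (asks is run under trio throughout this repo):

import trio
from pathlib import Path

ipfs = AsyncIPFSHTTP('http://127.0.0.1:5001')

async def main():
    info = await ipfs.add(Path('image.png'))  # upload; returns the API's json
    cid = info['Hash']
    pinned = await ipfs.pin(cid)              # -> list of pinned CIDs
    assert cid in pinned
    print(len(await ipfs.peers()), 'peers connected')

trio.run(main)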

View File

@@ -1,6 +1,5 @@
#!/usr/bin/python

-import os
import sys
import logging
@@ -10,61 +9,24 @@ from contextlib import contextmanager as cm

import docker

from docker.types import Mount
-from docker.models.containers import Container
-
-
-class IPFSDocker:
-
-    def __init__(self, container: Container):
-        self._container = container
-
-    def add(self, file: str) -> str:
-        ec, out = self._container.exec_run(
-            ['ipfs', 'add', '-w', f'/export/{file}', '-Q'])
-        if ec != 0:
-            logging.error(out)
-        assert ec == 0
-        return out.decode().rstrip()
-
-    def pin(self, ipfs_hash: str):
-        ec, _ = self._container.exec_run(
-            ['ipfs', 'pin', 'add', ipfs_hash])
-        assert ec == 0
-
-    def connect(self, remote_node: str):
-        ec, out = self._container.exec_run(
-            ['ipfs', 'swarm', 'connect', remote_node])
-        if ec != 0:
-            logging.error(out)
-        assert ec == 0
-
-    def check_connect(self):
-        ec, out = self._container.exec_run(
-            ['ipfs', 'swarm', 'peers'])
-        if ec != 0:
-            logging.error(out)
-        assert ec == 0
-
-        return out.splitlines()


@cm
-def open_ipfs_node(name='skynet-ipfs'):
+def open_ipfs_node(
+    name: str = 'skynet-ipfs',
+    teardown: bool = False,
+    peers: list[str] = []
+):
    dclient = docker.from_env()

+    container = None
    try:
        container = dclient.containers.get(name)

    except docker.errors.NotFound:
-        staging_dir = Path().resolve() / 'ipfs-docker-staging'
-        staging_dir.mkdir(parents=True, exist_ok=True)
-
        data_dir = Path().resolve() / 'ipfs-docker-data'
        data_dir.mkdir(parents=True, exist_ok=True)

-        export_target = '/export'
        data_target = '/data/ipfs'

        container = dclient.containers.run(
@@ -76,19 +38,15 @@ def open_ipfs_node(name='skynet-ipfs'):
                '5001/tcp': ('127.0.0.1', 5001)
            },
            mounts=[
-                Mount(export_target, str(staging_dir), 'bind'),
                Mount(data_target, str(data_dir), 'bind')
            ],
            detach=True,
            remove=True
        )

+        uid, gid = 1000, 1000
        if sys.platform != 'win32':
-            uid = os.getuid()
-            gid = os.getgid()
-
-            ec, out = container.exec_run(['chown', f'{uid}:{gid}', '-R', export_target])
-            logging.info(out)
-            assert ec == 0
-
            ec, out = container.exec_run(['chown', f'{uid}:{gid}', '-R', data_target])
            logging.info(out)
            assert ec == 0
@@ -99,4 +57,13 @@ def open_ipfs_node(name='skynet-ipfs'):
            if 'Daemon is ready' in log:
                break

-    yield IPFSDocker(container)
+    for peer in peers:
+        ec, out = container.exec_run(
+            ['ipfs', 'swarm', 'connect', peer])
+        if ec != 0:
+            logging.error(out)
+
+    yield
+
+    if teardown and container:
+        container.stop()
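
In use, the context manager brings up (or reuses) the node container, dials any extra peers, and leaves the container running afterwards unless teardown is requested. A sketch matching the new signature; the multiaddr is the DEFAULT_IPFS_REMOTE value from constants:

from skynet.ipfs.docker import open_ipfs_node

with open_ipfs_node(
    name='skynet-ipfs',
    teardown=False,
    peers=['/ip4/169.197.140.154/tcp/4001/p2p/12D3KooWKWogLFNEcNNMKnzU7Snrnuj84RZdMBg3sLiQSQc51oEv']
):
    ...  # the cli's ipfs command simply parks here while the node runs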

View File

@@ -9,7 +9,7 @@ import trio

from leap.hyperion import HyperionAPI

-from . import IPFSHTTP
+from . import AsyncIPFSHTTP


MAX_TIME = timedelta(seconds=20)
@@ -20,7 +20,7 @@ class SkynetPinner:
    def __init__(
        self,
        hyperion: HyperionAPI,
-        ipfs_http: IPFSHTTP
+        ipfs_http: AsyncIPFSHTTP
    ):
        self.hyperion = hyperion
        self.ipfs_http = ipfs_http
@@ -85,9 +85,9 @@ class SkynetPinner:
        for _ in range(6):
            try:
                with trio.move_on_after(5):
-                    resp = await self.ipfs_http.a_pin(cid)
-                    if resp.status_code != 200:
-                        logging.error(f'error pinning {cid}:\n{resp.text}')
+                    pins = await self.ipfs_http.pin(cid)
+                    if cid not in pins:
+                        logging.error(f'error pinning {cid}')
                        del self._pinned[cid]

                    else:

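The success check thus moves from HTTP status inspection to membership in the returned pin list. A condensed sketch of that retry contract; the helper name is made up, and pin() returning the pinned CIDs is inferred from the hunk above:

import logging

import trio


async def try_pin(ipfs_http, cid: str, attempts: int = 6) -> bool:
    for _ in range(attempts):
        # give each pin attempt five seconds before retrying
        with trio.move_on_after(5):
            pins = await ipfs_http.pin(cid)
            if cid in pins:
                return True
            logging.error(f'error pinning {cid}')

    return False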
View File

@@ -4,44 +4,12 @@ import json
 import time
 import logging

-from datetime import datetime
 from contextlib import contextmanager as cm

 import docker

-from pytz import timezone
-from leap.cleos import CLEOS, default_nodeos_image
-from leap.sugar import get_container, Symbol, random_string
-
-@cm
-def open_cleos(
-    node_url: str,
-    key: str | None
-):
-    vtestnet = None
-    try:
-        dclient = docker.from_env()
-        vtestnet = get_container(
-            dclient,
-            default_nodeos_image(),
-            name=f'skynet-wallet-{random_string(size=8)}',
-            force_unique=True,
-            detach=True,
-            network='host',
-            remove=True)
-
-        cleos = CLEOS(dclient, vtestnet, url=node_url, remote=node_url)
-
-        if key:
-            cleos.setup_wallet(key)
-
-        yield cleos
-
-    finally:
-        if vtestnet:
-            vtestnet.stop()
+from leap.cleos import CLEOS
+from leap.sugar import get_container, Symbol

 @cm

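With the helper gone, code that still needs a wallet-enabled handle presumably builds CLEOS directly. A sketch reusing only calls visible in the deleted body; the URL and key are placeholders, and whether newer py-leap still accepts the docker client and container arguments is an assumption:

import docker

from leap.cleos import CLEOS

# placeholder endpoint and key; mirrors the deleted helper's calls only
dclient = docker.from_env()
cleos = CLEOS(dclient, None, url='http://127.0.0.1:8888', remote='http://127.0.0.1:8888')
cleos.setup_wallet('5KPlaceholderPrivateKey')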
View File

@@ -2,11 +2,14 @@
 import io
 import os
+import sys
 import time
 import random
+import logging

 from typing import Optional
 from pathlib import Path

+import asks
 import torch
 import numpy as np
@@ -15,12 +18,11 @@ from PIL import Image
 from basicsr.archs.rrdbnet_arch import RRDBNet
 from diffusers import (
     DiffusionPipeline,
-    StableDiffusionPipeline,
-    StableDiffusionImg2ImgPipeline,
     EulerAncestralDiscreteScheduler
 )
 from realesrgan import RealESRGANer
 from huggingface_hub import login

+import trio

 from .constants import MODELS
@@ -49,19 +51,23 @@ def convert_from_img_to_bytes(image: Image, fmt='PNG') -> bytes:
     return byte_arr.getvalue()

-def convert_from_bytes_and_crop(raw: bytes, max_w: int, max_h: int) -> Image:
-    image = convert_from_bytes_to_img(raw)
+def crop_image(image: Image, max_w: int, max_h: int) -> Image:
     w, h = image.size
     if w > max_w or h > max_h:
-        image.thumbnail((512, 512))
+        image.thumbnail((max_w, max_h))

     return image.convert('RGB')

-def pipeline_for(model: str, mem_fraction: float = 1.0, image=False) -> DiffusionPipeline:
+def pipeline_for(
+    model: str,
+    mem_fraction: float = 1.0,
+    image: bool = False,
+    cache_dir: str | None = None
+) -> DiffusionPipeline:
     assert torch.cuda.is_available()
     torch.cuda.empty_cache()
-    torch.cuda.set_per_process_memory_fraction(mem_fraction)
     torch.backends.cuda.matmul.allow_tf32 = True
     torch.backends.cudnn.allow_tf32 = True
@@ -72,31 +78,54 @@ def pipeline_for(model: str, mem_fraction: float = 1.0, image=False) -> Diffusio
     torch.backends.cudnn.benchmark = False
     torch.use_deterministic_algorithms(True)

+    model_info = MODELS[model]
+
+    req_mem = model_info['mem']
+    mem_gb = torch.cuda.mem_get_info()[1] / (10**9)
+    mem_gb *= mem_fraction
+    over_mem = mem_gb < req_mem
+    if over_mem:
+        logging.warn(f'model requires {req_mem} but card has {mem_gb}, model will run slower..')
+
+    shortname = model_info['short']
+
     params = {
+        'safety_checker': None,
         'torch_dtype': torch.float16,
-        'safety_checker': None
+        'cache_dir': cache_dir,
+        'variant': 'fp16'
     }

-    if model == 'runwayml/stable-diffusion-v1-5':
-        params['revision'] = 'fp16'
+    match shortname:
+        case 'stable':
+            params['revision'] = 'fp16'

-    if image:
-        pipe_class = StableDiffusionImg2ImgPipeline
-    elif model == 'snowkidy/stable-diffusion-xl-base-0.9':
-        pipe_class = DiffusionPipeline
-    else:
-        pipe_class = StableDiffusionPipeline
+    torch.cuda.set_per_process_memory_fraction(mem_fraction)

-    pipe = pipe_class.from_pretrained(
+    pipe = DiffusionPipeline.from_pretrained(
         model, **params)

     pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(
         pipe.scheduler.config)

+    pipe.enable_xformers_memory_efficient_attention()
+
+    if over_mem:
         if not image:
             pipe.enable_vae_slicing()
+            pipe.enable_vae_tiling()

-    return pipe.to('cuda')
+        pipe.enable_model_cpu_offload()
+
+    else:
+        if sys.version_info[1] < 11:
+            # torch.compile only supported on python < 3.11
+            pipe.unet = torch.compile(
+                pipe.unet, mode='reduce-overhead', fullgraph=True)
+
+        pipe = pipe.to('cuda')
+
+    return pipe

 def txt2img(
@@ -109,12 +138,6 @@ def txt2img(
     steps: int = 28,
     seed: Optional[int] = None
 ):
-    assert torch.cuda.is_available()
-    torch.cuda.empty_cache()
-    torch.cuda.set_per_process_memory_fraction(1.0)
-    torch.backends.cuda.matmul.allow_tf32 = True
-    torch.backends.cudnn.allow_tf32 = True
-
     login(token=hf_token)

     pipe = pipeline_for(model)
@@ -142,12 +165,6 @@ def img2img(
     steps: int = 28,
     seed: Optional[int] = None
 ):
-    assert torch.cuda.is_available()
-    torch.cuda.empty_cache()
-    torch.cuda.set_per_process_memory_fraction(1.0)
-    torch.backends.cuda.matmul.allow_tf32 = True
-    torch.backends.cudnn.allow_tf32 = True
-
     login(token=hf_token)

     pipe = pipeline_for(model, image=True)
@@ -188,12 +205,6 @@ def upscale(
     output: str = 'output.png',
     model_path: str = 'weights/RealESRGAN_x4plus.pth'
 ):
-    assert torch.cuda.is_available()
-    torch.cuda.empty_cache()
-    torch.cuda.set_per_process_memory_fraction(1.0)
-    torch.backends.cuda.matmul.allow_tf32 = True
-    torch.backends.cudnn.allow_tf32 = True
-
     input_img = Image.open(img_path).convert('RGB')

     upscaler = init_upscaler(model_path=model_path)
@@ -202,17 +213,26 @@ def upscale(
         convert_from_image_to_cv2(input_img), outscale=4)

     image = convert_from_cv2_to_image(up_img)

     image.save(output)

-def download_all_models(hf_token: str):
+async def download_upscaler():
+    print('downloading upscaler...')
+    weights_path = Path('weights')
+    weights_path.mkdir(exist_ok=True)
+    upscaler_url = 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth'
+    save_path = weights_path / 'RealESRGAN_x4plus.pth'
+    response = await asks.get(upscaler_url)
+    with open(save_path, 'wb') as f:
+        f.write(response.content)
+    print('done')
+
+def download_all_models(hf_token: str, hf_home: str):
     assert torch.cuda.is_available()

+    trio.run(download_upscaler)
+
     login(token=hf_token)
     for model in MODELS:
         print(f'DOWNLOADING {model.upper()}')
-        pipeline_for(model)
-        print(f'DOWNLOADING IMAGE {model.upper()}')
-        pipeline_for(model, image=True)
+        pipeline_for(model, cache_dir=hf_home)

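The reworked pipeline_for() keys its behaviour off MODELS[model]: when free VRAM scaled by mem_fraction falls below the model's declared requirement, it enables VAE slicing/tiling plus model CPU offload; otherwise it compiles the UNet (Python < 3.11 only) and moves the pipeline to CUDA. A call-site sketch; the importing module and the model id are assumptions, not taken from this changeset:

from skynet.utils import pipeline_for  # module path is an assumption

pipe = pipeline_for(
    'stabilityai/stable-diffusion-xl-base-1.0',  # hypothetical MODELS key
    mem_fraction=0.9,       # cap this process at 90% of card memory
    cache_dir='/hf-cache',  # shared HuggingFace download cache
)
image = pipe('a cyberpunk city at dusk', num_inference_steps=28).images[0]
image.save('out.png')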
View File

@@ -1,15 +1,18 @@
 #!/usr/bin/python
+import logging
+from pathlib import Path

 import pytest

 from skynet.db import open_new_database
+from skynet.ipfs import AsyncIPFSHTTP
+from skynet.ipfs.docker import open_ipfs_node
 from skynet.nodeos import open_nodeos

+@pytest.fixture(scope='session')
+def ipfs_client():
+    with open_ipfs_node(teardown=True):
+        yield AsyncIPFSHTTP('http://127.0.0.1:5001')
+
 @pytest.fixture(scope='session')
 def postgres_db():
     with open_new_database() as db_params:

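Any test that names the new fixture receives a session-scoped AsyncIPFSHTTP handle backed by the dockerized daemon. A trivial hypothetical test, assuming an async-aware pytest setup:

async def test_daemon_reachable(ipfs_client):
    peers = await ipfs_client.peers()
    assert isinstance(peers, list)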
View File

@@ -0,0 +1,26 @@
+#!/usr/bin/python
+
+from pathlib import Path
+
+
+async def test_connection(ipfs_client):
+    await ipfs_client.connect(
+        '/ip4/169.197.140.154/tcp/4001/p2p/12D3KooWKWogLFNEcNNMKnzU7Snrnuj84RZdMBg3sLiQSQc51oEv')
+    peers = await ipfs_client.peers()
+    assert '12D3KooWKWogLFNEcNNMKnzU7Snrnuj84RZdMBg3sLiQSQc51oEv' in [p['Peer'] for p in peers]
+
+
+async def test_add_and_pin_file(ipfs_client):
+    test_file = Path('hello_world.txt')
+    with open(test_file, 'w+') as file:
+        file.write('Hello Skynet!')
+
+    file_info = await ipfs_client.add(test_file)
+    file_cid = file_info['Hash']
+
+    pin_resp = await ipfs_client.pin(file_cid)
+    assert file_cid in pin_resp
+
+    test_file.unlink()
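For reference, the response shapes these assertions lean on, with illustrative values; the add() dict mirrors the go-ipfs /api/v0/add reply, and pin() returning the pinned CIDs matches the pinner hunk above:

# await ipfs_client.add(Path('hello_world.txt')) is expected to yield
# something like:
#   {'Name': 'hello_world.txt', 'Hash': 'QmPlaceholderCid', 'Size': '21'}
#
# await ipfs_client.pin(file_cid) is expected to yield a list of pinned
# CIDs that contains file_cid.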