Compare commits

..

151 Commits

Author SHA1 Message Date
Guillermo Rodriguez e88792c9d6
Fix docker paths 2024-11-02 14:52:22 -03:00
Guillermo Rodriguez 8a415b450f
Merge pull request #26 from guilledk/worker_upgrade_reloaded
Worker upgrade reloaded
2023-10-13 17:22:22 -03:00
Guillermo Rodriguez 8ddbf65d9f
Update pinner to new apis 2023-10-12 10:20:19 -03:00
Guillermo Rodriguez d9df50ad63
Minor readme tweaks 2023-10-09 08:52:04 -03:00
Guillermo Rodriguez 1222e11c16
Move docker related scripts to docker dir 2023-10-09 08:50:36 -03:00
Guillermo Rodriguez 3f780d6103
Add logo to readme 2023-10-09 08:44:33 -03:00
Guillermo Rodriguez 0a6d52ffaf
Fix missing quart dep 2023-10-09 07:50:39 -03:00
Guillermo Rodriguez f106c557f5
Fix readme 2023-10-09 07:43:57 -03:00
Guillermo Rodriguez 409df99a2c
Add frontend container & run instructions 2023-10-09 07:39:23 -03:00
Guillermo Rodriguez 20ee6c0317
Add configurable explorer and ipfs links 2023-10-08 20:12:07 -03:00
Guillermo Rodriguez edd6ccc3e1
Add worker benchmark api 2023-10-08 19:37:25 -03:00
Guillermo Rodriguez 3d2069d151
Simplify pipeline_for function and add the infra needed for different io types than png 2023-10-08 18:00:18 -03:00
Guillermo Rodriguez ee1fdcc557
Go back to using gather on tg ipfs result getting 2023-10-08 16:36:29 -03:00
Guillermo Rodriguez 359e491d1f
Fix enqueue cli for img2img also fix worker img2img input get bug 2023-10-08 12:19:46 -03:00
Guillermo Rodriguez 50ae61c7b2
Add default value for autoconf 2023-10-08 11:06:06 -03:00
Guillermo Rodriguez cadd723191
Cancel other image task when one already finished on tg frontend ipfs image gather 2023-10-08 10:27:25 -03:00
Guillermo Rodriguez 16df97d731
Improve tg frontend ipfs results gathering parallelism 2023-10-08 10:13:55 -03:00
Guillermo Rodriguez d749dc4f57
Improve worker ipfs input data parallelism 2023-10-08 09:54:31 -03:00
Guillermo Rodriguez aa1d52dba0
Add autoconfiguration feature for telegram frontend 2023-10-08 09:26:43 -03:00
Guillermo Rodriguez d3b5d56187
Add new data gathering mechanic on worker and mp tractor backend 2023-10-07 21:28:52 -03:00
Guillermo Rodriguez e802689523
Add new/legacy ipfs image mechanic on input image gathering 2023-10-07 14:55:30 -03:00
Guillermo Rodriguez 1780f1a360
Update example --env no longer needed on docker 2023-10-07 12:42:05 -03:00
Guillermo Rodriguez cc4a4b5189
Fix cli entrypoints to use new config, improve competitor cancel logic and add default docker image to py311 image 2023-10-07 12:32:00 -03:00
Guillermo Rodriguez 5437af4d05
Add new config to .gitignore 2023-10-07 11:12:45 -03:00
Guillermo Rodriguez 5b6e18e1ef
Fix import bug and only enable unet compilation on high end cards 2023-10-07 11:12:15 -03:00
Guillermo Rodriguez 7cd539a944
Make new non_compete optional, also ipfs_gateway 2023-10-07 11:11:47 -03:00
Guillermo Rodriguez b7b267a71b
Dont reference python version on docker instructions 2023-10-07 11:04:57 -03:00
Guillermo Rodriguez 342dd9ac1c
Add whitelist & blacklist 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez ad1a9ef9ea
Add anyio error to failable 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez b372f50130
Create separate docker images for python 3.10 and 3.11 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez 9ef2442123
Switch config to toml 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez 93203ab533
Only check if should cancel inference every two steps, also pipe to cuda if cpu offloading is off 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez 9fa5a01c34
Fix image getting logic 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez a9b05b7ee7
Add try to make gateway conf optional on telegram client 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez de8c7595db
Fix some wrong config load keys on telegram entrypoint 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez 10044c6d12
Add new ipfs links to telegram bot 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez c6e58c36d8
Make non compete list come from a file named .non-compete 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez 01c78b5d20
Make gpu work cancellable using trio threading apis!, also make docker always reinstall package for easier development 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez 47d9f59dbe
Start setting HF env vars from config 2023-10-07 11:01:41 -03:00
Guillermo Rodriguez d7ccbe7023
Add --name to docker worker launch command 2023-10-07 11:01:40 -03:00
Guillermo Rodriguez 08854562ef
woops make xformers part of optional cuda group 2023-10-07 11:01:40 -03:00
Guillermo Rodriguez 537670d1f3
Fix mini bug on docker entry point 2023-10-07 11:01:40 -03:00
Guillermo Rodriguez 24fae4c451
Bump version number 2023-10-07 11:01:40 -03:00
Guillermo Rodriguez 3622c8ea11
Add venv to dockerignore
Improve readme
Improve dockerization as ipfs cli exec runs are not needed anymore
Fix pyproject toml for gpu workers
Add more sections on example config
Drop and simplify many cli commands, try to use config.ini for everything now
Use more dynamic imports on cli to speed up startup
Improve model pipelines to allow low mem cards to run big models
Add upscaler download to `skynet download` cmd
2023-10-07 11:01:40 -03:00
Guillermo Rodriguez 454545d096
Switch to using poetry package manager 2023-10-07 11:01:40 -03:00
Guillermo Rodriguez 82458bb4c8
Merge pull request #23 from guilledk/full_http_ipfs
Async IPFS apis, drop docker on worker & other nice sprites ☠
2023-10-03 13:00:45 -03:00
Guillermo Rodriguez 504d6cd730
Add new ipfs options to cli frontend
Add async ipfs for discord
2023-09-28 21:43:30 -03:00
Guillermo Rodriguez fe4574c5dc
Remove old docker stuff and upgrade telegram frontend to use ipfs async apis 2023-09-28 21:23:04 -03:00
Guillermo Rodriguez 1b13cf25cc
Add test for new ipfs async apis, fix cli entrypoints endpoint loading to new format 2023-09-24 15:23:25 -03:00
Guillermo Rodriguez 58f208afa2
Update config example 2023-09-24 13:18:05 -03:00
Guillermo Rodriguez 01cbc736a0
Create fully async ipfs client, and stop using docker on worker 2023-09-24 13:12:49 -03:00
Zoltan 7f50952088
Merge pull request #19 from guilledk/general_frontend_fixes
General frontend fixes
2023-08-23 11:57:18 -04:00
Konstantine Tsafatinos 75268decc4 fix default model and increments for discord ui 2023-08-23 11:53:41 -04:00
Guillermo Rodriguez 35f8276e4e
Fixed default cmdline testnet urls 2023-07-28 12:05:02 -03:00
Guillermo Rodriguez ffcf9dc905
Merge pull request #5 from guilledk/decentralize
First fully decentralized `skynet` prototype
2023-07-28 11:16:51 -03:00
Guillermo Rodriguez c201b78bf0
Fix help text, increase ipfs get image timeout, fix work_request increment generated bug pointed out by zoltan 2023-07-28 11:13:59 -03:00
Guillermo Rodriguez 713884e192
Provide both xl models 2023-07-27 13:25:00 -03:00
Guillermo Rodriguez 440bb015cd
Fix stablexl pipeline 2023-07-27 12:19:09 -03:00
Guillermo Rodriguez 4082adf184
Update stablexl 0.9 to 1.0 2023-07-26 16:44:46 -03:00
Guillermo Rodriguez 89c413a612
Bump version number, also telegram max image limit and disable in private for now 2023-07-22 16:53:00 -03:00
Guillermo Rodriguez dc7c43fc95
Merge pull request #11 from guilledk/discord-bot
discord bot for skynet
2023-07-22 15:37:51 -03:00
Konstantine Tsafatinos fd8ea3299a update intro message 2023-07-21 21:37:31 -04:00
Konstantine Tsafatinos 965393907f add intro message, edit, again 2023-07-21 19:14:39 -04:00
Konstantine Tsafatinos 751812ec52 add intro message, edit, again 2023-07-21 19:13:33 -04:00
Konstantine Tsafatinos 26684f7b83 add intro message, edit 2023-07-21 19:12:32 -04:00
Konstantine Tsafatinos d74bfb4c59 add intro message 2023-07-21 19:09:11 -04:00
Konstantine Tsafatinos 469e90e650 change back the interval to 60, and double ipfs interval 2023-07-21 18:57:28 -04:00
Konstantine Tsafatinos 70c13c242a change the footer text, and increase timeout interval to 2 mins 2023-07-21 18:52:15 -04:00
Konstantine Tsafatinos 3315b05888 change the footer text again, again, again 2023-07-21 18:46:07 -04:00
Konstantine Tsafatinos 1e94f43965 change the footer text again, again 2023-07-21 18:41:56 -04:00
Konstantine Tsafatinos 9ad30ffc10 change the footer text again 2023-07-21 18:38:51 -04:00
Konstantine Tsafatinos aa9cde50c7 change the footer text 2023-07-21 18:33:16 -04:00
Konstantine Tsafatinos 5ca350d5c0 fix timeout message 2023-07-21 17:32:19 -04:00
Konstantine Tsafatinos 82ea4d57c4 add error message for ipfs fail 2023-07-21 17:04:41 -04:00
Konstantine Tsafatinos 4260187208 add img2img support, add stats and donate button, finalize UI, add live updates 2023-07-21 16:57:54 -04:00
zoltan 58c6a2070e change send function on failure (not sure if this fixed it) 2023-07-21 03:43:59 +00:00
Konstantine Tsafatinos 53ed74e9a3 add and finalize buttons 2023-07-20 20:54:59 -04:00
Konstantine Tsafatinos 2440fe32db update max size params and add discord return card 2023-07-20 17:02:14 -04:00
Konstantine Tsafatinos 2e47ee97f2 add delay for button to reappear 2023-07-20 01:35:54 -04:00
Konstantine Tsafatinos 0eef370d15 fix /command bug 2 2023-07-20 01:27:21 -04:00
Konstantine Tsafatinos 2b2e82e28f fix /command bug 2023-07-20 01:24:49 -04:00
Konstantine Tsafatinos 8625b5747b add initial buttons, help and txt2img 2023-07-20 01:16:22 -04:00
Konstantine Tsafatinos ff0114d341 change stable2 model and how stablexl loads, add reqs 2023-07-19 17:25:28 -04:00
Konstantine Tsafatinos 08da0681cd add param for stablexl 2023-07-19 16:14:03 -04:00
Konstantine Tsafatinos ae348c7c6f add param for stablexl 2023-07-19 16:11:48 -04:00
Konstantine Tsafatinos bcb499448e add stable2 and stablexl models 2023-07-19 15:34:06 -04:00
Konstantine Tsafatinos 6e8d43e00c add new models 2023-07-19 15:21:57 -04:00
Konstantine Tsafatinos 99744bab3e fix red 2023-07-19 13:48:06 -04:00
Konstantine Tsafatinos 239faed71f add redo command 2023-07-19 13:40:11 -04:00
Konstantine Tsafatinos d22c0556d4 add clean cool words and remove slut from copy 2023-07-19 13:28:08 -04:00
Konstantine Tsafatinos ecd7d17bbf change image gen to a reply 2023-07-19 13:23:20 -04:00
zoltan c7cd150316 remove print of a_push_action res 2023-07-19 17:05:13 +00:00
Konstantine Tsafatinos 06827a0e70 add block on any channel but skynet 2023-07-19 12:28:57 -04:00
Konstantine Tsafatinos 609c741ae9 get db, config, help, cool, and txt2img working 2023-07-19 00:58:55 -04:00
Konstantine Tsafatinos 1d7633d340 get initial discord bot working with hardcoded config and image return 2023-07-18 23:44:58 -04:00
Konstantine Tsafatinos fadb4eab6d update params for work queue 2023-07-18 12:40:21 -04:00
Konstantine Tsafatinos 7b0c1f0868 initial discord bot 2023-07-17 01:18:08 -04:00
Guillermo Rodriguez b48ce8ac3f
Merge pull request #9 from guilledk/spam-command
add enqueue spam command
2023-07-04 01:49:03 -03:00
Konstantine Tsafatinos 619ffe71cc address pr comments, remove commented code 2023-07-04 00:29:51 -04:00
Konstantine Tsafatinos de59e3aa1d change request_id type to int in dequeue 2023-07-03 09:16:04 -04:00
Konstantine Tsafatinos d862954377 add binary data option to enqueue 2023-06-29 19:51:33 -04:00
Konstantine Tsafatinos d680ea9b72 convert all push_action funcs to async a_push_action 2023-06-29 19:37:50 -04:00
Konstantine Tsafatinos f17f11e5f3 fix enqueue and deposit to use a_push_action 2023-06-26 23:24:32 -04:00
Konstantine Tsafatinos 1b80de6228 change push_action to a_push_action in enqueue cli 2023-06-26 21:11:24 -04:00
Konstantine Tsafatinos c8471ff85b add min-verification 2023-06-26 18:56:44 -04:00
Konstantine Tsafatinos a452065779 remove unneeded if block 2023-06-26 18:10:55 -04:00
Konstantine Tsafatinos 1cbb1dd7f3 add enqueue spam command 2023-06-26 18:02:21 -04:00
Guillermo Rodriguez 61ab42c118
Merge pull request #8 from guilledk/ipfs-reconnect
add ipfs reconnect before publishing if not connected to peers
2023-06-26 17:06:32 +00:00
Konstantine Tsafatinos 6a6bdaab0d update check_connect to return list of peers 2023-06-26 12:43:00 -04:00
Konstantine Tsafatinos f8c744e6b4 add error logging to ipfs add func 2023-06-26 12:38:26 -04:00
Konstantine Tsafatinos acd8ba91e5 change ipfs reconnect logic, use config vars 2023-06-26 09:59:32 -04:00
Konstantine Tsafatinos 8f24c72762 add ipfs reconnect before publishing if not connected to peers 2023-06-25 17:21:25 -04:00
Guillermo Rodriguez cbc9a89bb8
Fix old hardcoded domain on img2img input data fetcher 2023-06-12 09:43:27 -03:00
Guillermo Rodriguez 1e05357a72
Zoltan's review comments 2023-06-11 20:27:41 -03:00
Guillermo Rodriguez 120d97f478
Make frontend more resilient and remove from pinned on error to allow retry 2023-06-10 09:38:24 -03:00
Guillermo Rodriguez 44bfc5e9e7
Make worker more resilient by using failable wrapper on network calls, modularize ipfs module and pinner code, drop ipfs links from telegram response and make explorer link easily configurable 2023-06-10 09:34:03 -03:00
Guillermo Rodriguez c8a0a390a6
Fix for img2img mode on new worker system 2023-06-08 21:25:07 -03:00
Guillermo Rodriguez 91edb2aa56
Frontend db model name related fixes, and gpu worker fixes when swapping models 2023-06-06 12:27:40 -03:00
Guillermo Rodriguez aa41c08d2f
Upscaler fix & frontend model selection naming changes 2023-06-05 11:52:16 -03:00
Guillermo Rodriguez bbc5751837
Separate worker into submodules 2023-06-04 17:51:43 -03:00
Guillermo Rodriguez fc513b89af
Retry pin 2023-06-04 14:41:26 -03:00
Guillermo Rodriguez d5b04a673c
Update smart contract 2023-06-04 14:18:57 -03:00
Guillermo Rodriguez 0cb2565d65
Update py-leap to 1a14 2023-06-04 01:11:32 -03:00
Guillermo Rodriguez 64a15a0ab9
Fix db schema for users outside int32 range 2023-06-04 01:11:03 -03:00
Guillermo Rodriguez 27fe05c3e7
Vast improvement to telegram frontend 2023-06-03 20:17:56 -03:00
Guillermo Rodriguez 31cca4f487
Woops fixed import 2023-06-03 12:46:21 -03:00
Guillermo Rodriguez 006d15137c
Updated ipfs remote and ipfs launch logic 2023-06-03 12:44:22 -03:00
Guillermo Rodriguez 13c6e85ac9
Make pinner less spammy lool, and gpu more resilient 2023-06-03 12:22:24 -03:00
Guillermo Rodriguez 320f13260c
Update nodeos genesis init 2023-06-01 19:21:42 -03:00
Guillermo Rodriguez e6e3dc2e63
Update README.md 2023-05-30 00:41:32 -03:00
Guillermo Rodriguez 40ba84c109
Drop cleos docker container usage for push action in frontend and dgpu, also make pinner way more aggressive 2023-05-30 00:32:17 -03:00
Guillermo Rodriguez 25413a68cc
Fix logging bug on pinner 2023-05-29 23:34:02 -03:00
Guillermo Rodriguez 731a64494f
Update url defaults 2023-05-29 21:00:47 -03:00
Guillermo Rodriguez 33d2ca281b
Pin all the things 2023-05-29 19:44:31 -03:00
Guillermo Rodriguez 1494e47b34
Snappier dgpu, fix captioning & gitignores 2023-05-29 19:03:39 -03:00
Guillermo Rodriguez c26b4fc468
Changes to frontend schema for better accuracy on configs
New fix for nonce / request hashing system which had a bug with multi request per user case
Add queue command to frontend
Better meta info captions
2023-05-29 14:48:10 -03:00
Guillermo Rodriguez 25c86b5eaf
Rework gpu worker logic to work better in parallel with other workers 2023-05-29 13:43:03 -03:00
Guillermo Rodriguez 2b18fa376b
Add redo support to img2img also switch pinner to use http api 2023-05-29 13:08:01 -03:00
Guillermo Rodriguez 22c403d3ae
Add autowithdraw switch, start storing input images on ipfs 2023-05-29 00:46:47 -03:00
Guillermo Rodriguez 303ed7b24f
Add wait time on pinner and autocreate ipfs node directories on startup 2023-05-28 22:20:06 -03:00
Guillermo Rodriguez a85518152a
Add frontend image reduction and fix ipfs sudo issue 2023-05-28 20:44:47 -03:00
Guillermo Rodriguez e63d395d5c
Frontend DB fixes and starting to add img2img 2023-05-28 20:17:55 -03:00
Guillermo Rodriguez 5e017ffac0
Telegram frontend fixes and create pinner 2023-05-28 18:23:51 -03:00
Guillermo Rodriguez 1d7d11a9c1
Add .ini example & new account config
Add new smart contract clis config etc
Update GPU worker software to match contract updates
Do dynamic nodeos genesis
2023-05-27 22:42:27 -03:00
Guillermo Rodriguez 3607c568de
Drop separate reqs file for tests
Update docker containers
Create cli helpers for interacting with skynet
Add test
Begin adding hyperion api to telegram frontend
2023-05-27 17:50:47 -03:00
Guillermo Rodriguez dcd020f0da
Init chain with GPU token, tweak dgpu init 2023-05-24 13:24:46 -03:00
Guillermo Rodriguez 0b312ff961
Use leap as network + auth layer, we decentralized now 2023-05-22 06:10:51 -03:00
Guillermo Rodriguez 79901c85ca
Merge pull request #4 from guilledk/net_rework
Net rework
2023-05-21 16:00:39 -03:00
Guillermo Rodriguez c430166895
Update LICENSE 2023-04-09 19:50:08 -03:00
Guillermo Rodriguez 5e8787b4fc
Add download command
Fix ini config loading
Update test requirements
Fix attempt_insecure and connection simple dgpu tests
2023-03-20 11:09:32 -03:00
Guillermo Rodriguez 4feed81662
Network rework in progress, also swap db to be on frontend layer
Simplify protobuf proto
Add manual telegram test
Add new .ini config system add example and gitignore entry
Expose ports on dgpu Dockerfile
2023-03-20 11:09:30 -03:00
71 changed files with 10034 additions and 2882 deletions

.dockerignore

@@ -7,3 +7,4 @@ outputs
*.egg-info
**/*.key
**/*.cert
.venv

.gitignore (5)

@@ -1,3 +1,4 @@
skynet.toml
.python-version
hf_home
outputs
@@ -6,3 +7,7 @@ secrets
*.egg-info
**/*.key
**/*.cert
docs
ipfs-docker-data
ipfs-staging
weights


@@ -1,19 +0,0 @@
from python:3.10.0
env DEBIAN_FRONTEND=noninteractive
workdir /skynet
copy requirements.test.txt requirements.test.txt
copy requirements.txt requirements.txt
copy pytest.ini ./
copy setup.py ./
copy skynet ./skynet
run pip install \
-e . \
-r requirements.txt \
-r requirements.test.txt
copy scripts ./
copy tests ./


@@ -1,34 +0,0 @@
from nvidia/cuda:11.7.0-devel-ubuntu20.04
from python:3.10.0
env DEBIAN_FRONTEND=noninteractive
run apt-get update && \
apt-get install -y ffmpeg libsm6 libxext6
workdir /skynet
copy requirements.cuda* ./
run pip install -U pip ninja
run pip install -v -r requirements.cuda.0.txt
run pip install -v -r requirements.cuda.1.txt
run pip install -v -r requirements.cuda.2.txt
copy requirements.test.txt requirements.test.txt
copy requirements.txt requirements.txt
copy pytest.ini pytest.ini
copy setup.py setup.py
copy skynet skynet
run pip install -e . \
-r requirements.txt \
-r requirements.test.txt
env PYTORCH_CUDA_ALLOC_CONF max_split_size_mb:128
env NVIDIA_VISIBLE_DEVICES=all
env HF_HOME /hf_home
copy scripts scripts
copy tests tests

LICENSE (665)

@@ -1,11 +1,662 @@
(removed)
A menos que sea especificamente indicado en el cabezal del archivo, se reservan
todos los derechos sobre este codigo por parte de:
Guillermo Rodriguez, guillermor@fing.edu.uy

ENGLISH LICENSE:
Unless specifically indicated in the file header, all rights to this code are
reserved by:

(added)
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007

Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.

Preamble

The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.
Guillermo Rodriguez, guillermor@fing.edu.uy

README.md (104)

@@ -1,2 +1,104 @@
# skynet

<div align="center">
    <img src="https://explorer.skygpu.net/v2/explore/assets/logo.png" width=512 height=512>
</div>

## decentralized compute platform
### native install
system dependencies:
- `cuda` 11.8
- `llvm` 10
- `python` 3.10+
- `docker` (for ipfs node)
```
# create and edit config from template
cp skynet.toml.example skynet.toml
# install poetry package manager
curl -sSL https://install.python-poetry.org | python3 -
# install
poetry install
# enable environment
poetry shell
# test you can run this command
skynet --help
# launch ipfs node
skynet run ipfs
# to launch worker
skynet run dgpu
```
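The settings live in `skynet.toml` (the project switched from `config.ini` to TOML, per the commit history above). As a rough illustration only, a worker section might look like the sketch below; the section name and every key are assumptions inferred from commit messages, not the documented schema, so copy `skynet.toml.example` rather than typing this by hand.
```
# illustrative sketch, not the real schema: every key below is an assumption
# inferred from commit messages ("Start setting HF env vars from config",
# "Add autowithdraw switch", "Make new non_compete optional, also ipfs_gateway")
[dgpu]
hf_home = "hf_home"      # assumed: cache dir used when setting the HF env vars
auto_withdraw = true     # assumed spelling of the "autowithdraw switch"
non_compete = []         # described as optional in the history
ipfs_gateway = ""        # described as optional in the history
```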
### dockerized install
## frontend
system dependencies:
- `docker`
```
# create and edit config from template
cp skynet.toml.example skynet.toml
# pull runtime container
docker pull guilledk/skynet:runtime-frontend
# run telegram bot
docker run \
-it \
--rm \
--network host \
--name skynet-telegram \
--mount type=bind,source="$(pwd)",target=/root/target \
guilledk/skynet:runtime-frontend \
skynet run telegram --db-pass PASSWORD --db-user USER --db-host HOST
```
## worker
system dependencies:
- `docker` with gpu enabled
```
# create and edit config from template
cp skynet.toml.example skynet.toml
# pull runtime container
docker pull guilledk/skynet:runtime-cuda
# or build it (takes a bit of time)
./build_docker.sh
# launch simple ipfs node
./launch_ipfs.sh
# run worker with all gpus
docker run \
-it \
--rm \
--gpus all \
--network host \
--name skynet-worker \
--mount type=bind,source="$(pwd)",target=/root/target \
guilledk/skynet:runtime-cuda \
skynet run dgpu
# run worker with specific gpu
docker run \
-it \
--rm \
--gpus '"device=1"' \
--network host \
--name skynet-worker-1 \
--mount type=bind,source="$(pwd)",target=/root/target \
guilledk/skynet:runtime-cuda \
skynet run dgpu
```


@@ -1,7 +0,0 @@
docker build \
-t skynet:runtime-cuda \
-f Dockerfile.runtime+cuda .
docker build \
-t skynet:runtime \
-f Dockerfile.runtime .


@@ -1,33 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIFxDCCA6wCAQAwDQYJKoZIhvcNAQENBQAwgacxCzAJBgNVBAYTAlVZMRMwEQYD
VQQIDApNb250ZXZpZGVvMRMwEQYDVQQHDApNb250ZXZpZGVvMRowGAYDVQQKDBFz
a3luZXQtZm91bmRhdGlvbjENMAsGA1UECwwEbm9uZTEcMBoGA1UEAwwTR3VpbGxl
cm1vIFJvZHJpZ3VlejElMCMGCSqGSIb3DQEJARYWZ3VpbGxlcm1vckBmaW5nLmVk
dS51eTAeFw0yMjEyMTExNDM3NDVaFw0zMjEyMDgxNDM3NDVaMIGnMQswCQYDVQQG
EwJVWTETMBEGA1UECAwKTW9udGV2aWRlbzETMBEGA1UEBwwKTW9udGV2aWRlbzEa
MBgGA1UECgwRc2t5bmV0LWZvdW5kYXRpb24xDTALBgNVBAsMBG5vbmUxHDAaBgNV
BAMME0d1aWxsZXJtbyBSb2RyaWd1ZXoxJTAjBgkqhkiG9w0BCQEWFmd1aWxsZXJt
b3JAZmluZy5lZHUudXkwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCu
HdqGPtsqtYqfIilVdq0MmqfEn9g4T+uglfWjRF2gWV3uQCuXDv1O61XfIIyaDQXl
VRqT36txtM8rvn213746SwK0jx9+ln5jD3EDbL4WZv1qvp4/jqA+UPKXFXnD3he+
pRpcDMu4IpYKuoPl667IW/auFSSy3TIWhIZb8ghqxzb2e2i6/OhzIWKHeFIKvbEA
EB6Z63wy3O0ACY7RVhHu0wzyzqUW1t1VNsbZvO9Xmmqm2EWZBJp0TFph3Z9kOR/g
0Ik7kxMLrGIfhV5/1gPQlNr3ADebGJnaMdGCBUi+pqeZcVnGY45fjOJREaD3aTRG
ohZM0Td40K7paDVjUvQ9rPgKoDMsCWpu8IPdc4LB0hONIO2KycFb49cd8zNWsetj
kHXxL9IVgORxfGmVyOtNGotS5RX6R+qwsll3qUmX4XjwvQMAMvATcSkY26CWdCDM
vGFp+0REbVyDfJ9pwU7ZkAxiWeAoiesGfEWyRLsl0fFkaHgHG+oPCH9IO63TVnCq
E6NGRQpHfJ5oV4ZihUfWjSFxOJqdFM3xfzk/2YGzQUgKVBsbuQTWPKxE0aSwt1Cf
Ug4+C0RSDMmrquRmhRn/BWsSRl+2m17rt1axTA4pEVGcHHyKSowEFQ68spD1Lm2K
iU/LCPBh4REzexwjP+onwHALXoxIEOLiy2lEdYgWnwIDAQABMA0GCSqGSIb3DQEB
DQUAA4ICAQBtTZb6PJJQXtF90MD4Hcgj+phKkbtHVZyM198Giw3I9f2PgjDECKb9
I7JLzCUgpexKk1TNso2FPNoVlcE4yMO0I0EauoKcwZ1w9GXsXOGwPHvB9hrItaLs
s7Qxf+IVgKO4y5Tv+8WO4lhgShWa4fW3L7Dpk0XK4INoAAxZLbEdekf2GGqTUGzD
SrfvtE8h6JT+gR4lsAvdsRjJIKYacsqhKjtV0reA6v99NthDcpwaStrAaFmtJkD3
6G3JVU0JyMBlR1GetN0w42BjVHJ2l7cPm405lE2ymFwcl7C8VozXXi4wmfVN+xlh
NOVSbl/QUiMUyt44XPhPCbgopxLqhqtvGzBl+ldF1AR4aaukXjvS/8VtFZ3cfx7n
n5NYxvPnq3kwlFNHgppt+u1leGrzxuesGNQENQd3shO/S9T4I92hAdk2MRTivIfv
m74u6RCtHqDviiOFzF7zcqO37wCrb1dnfS1N4I6/rCf6XtxlRGa8Cp9z4DTKjwAC
5z5irJb+LSJkFXA/zIFpBjjKBdyhjYGuXrbJWdL81kTcYRqjE99XfZaTU8L43qVd
TUaIvQGTtx8k7WGmeTRHk6SauCaXSfeXwYTpEZpictUI/uWo/KJRDL/aE8HmBeH3
pr+cfDu7erTLH+GG5ZROrILf4929Jd7OF4a0nHUnZcycBS0CjGHVHA==
-----END CERTIFICATE-----


@@ -1,52 +0,0 @@
-----BEGIN PRIVATE KEY-----
MIIJQgIBADANBgkqhkiG9w0BAQEFAASCCSwwggkoAgEAAoICAQCyAuCwwnoENeYe
B0159sH47zedmRaxcUmC/qmVdUptzOxIHpUCSAIy+hoR5UOhnRsmjj7Y0kUWtlwj
bHAKHcuUn4sqLBb0nl6kH79DzP/4YCQM3GEIXzE6wy/zmnYrHz53Ci7DzmMcRM3n
MwXDVPPpKXzpmI/yassKxSltBKgbh65U3oOheiuFygOlAkT4fUaXX5Bf9DECZBsj
ewf9WvHzLGN2eQt/YWYxJMstgAecHLlRmLbKoYD/P+O0K1ybmhMDItcXE49kNC4s
Rvq7MUt8B0bi8SlRxv5plAbZBiyMilrxf3yCCgYaTsqtt3x+CSrAWjzYIzEzD5aZ
1+s5O2jsqPYkbTvA4NT/hDnWHkkr7YcBRwQn1iMe2tMUTTsWotIYWH87++BzDAWG
3ZBkqNZ4mUdA3usk2ZPO0BwWNxlb0AqOlAJUYSoCsm3nBPT08rVvumQ44hup6XPW
L5KIDyL5+Fl8RDgDF8cpCfrijdL+U+GoHmmJYM6zMkrGqD7BD+WJgw9plgbaWUBI
q4aimXF4PrBJAAX5IRyZK+EDDH0AREL3qoZIQVvJR+yGIKTixpyVKtj6jm1OY4Go
iXxRLaFrc4ucT9+PxRHo9zYtNIijub4eXuU5nveswptmCsNa4spTO2XCkHh6IE0Z
B4oALC4lrC279WY+3TaOpv/roGzG9QIDAQABAoICABfpXGFMs7MzwkYvrkU/KO3V
bwppHAFDOcqyMU7K7e/d4ly1rvJwKyDJ3mKfrKay7Ii7UXndP5E+IcD9ufcXQCzQ
rug/+pLAC0UkoT6W9PNaMWgrhOU+VDs+fjHM19QRuFmpMSr1jZ6ofLgdGchpSvJR
CQnKh9uFDjfTethoEw96Tv1GKTcHAChSleFpHUv7wqsRbTABJJbbokGb2duQhzD7
uh3vQzodzT+2CjeBxoPpNS40GKm+FA6KzdLP2FAWhuNESibmu7uMFCpicR+1ZBxe
+zNU4xCsbamk9rPZqSD1HM4/1RZqs53TuP9TcbzvDPfAUgKpMjICWrUuVIHgQcb/
H3lJbsusZccFkl+B4arncUu7oyYWsw+OLHq/khja1RrJu6/PDDfcqY0cSAAsCKJf
ChiHVyVbhZ6b9g1MdYLNPlcJrpgCVX+PisqLqY/RqQGIln6D0sBK1+MC6TjFW3zA
ca3Dhun18JBZ73mmlGj7LoOUojtnnxy5YVUdB75tdo5BqilGR1nLurJupg9Nkgeq
C7nbA+rZ93MKHptayko91nc7yLzsMRV8PDFhE2UhZWRZfJ5yAW/IaJBZpvTvSYM3
5lTgAn1o34mnykuNC3sK5tbCAMb0YbCJtmotRwBIqlFHqbH+TK07CW2lnEkqZ8ID
YFTpAJlgKgsdhsd5ZCkpAoIBAQDQMvn4iBKvnhCeRUV/6AOHcOsgwJkV/G61Gz/G
F0mx0kPsaPugNX1VzF15R+vN1kbk3sQ9bDP6FfsX7jp2EjRqGEb9mJ8BoIbSHLJ4
dDT7M90TMMYepCVoFMC03Hh30vxH3QokgV3E1lakXCwl1dheRz5czT0BL9VuBkpG
x8vGpVfX4VqLliOWK72wEYdfohUTynb2OkRP/e6woBRxb3hYLqpN7nVHVRiMFBgG
+AvpLNv/oSYBOXj9oRBOwVLZaPV8N1p4Pv7WXL+B7E47Z9rUYNzGFf+2iM1uDdrO
xHkAocgMM/sL81sJaj1khoYRLC8IpAxBG8NqRP6xzeGcLVLHAoIBAQDa4ZdEDvqA
gJmJ4vgivIX7/zv7/q9c/nkNsnPiXjMys6HRdwroQjT7wrxO5/jJX9EDjM98dSFg
1HFJWJulpmDMpIzzwC6DLxZWd+EEqG4Pyv50VGmGuwmqDwWAP7v/pMPwUEvlsGYZ
Tvlebr4jze9vz8MiRw3qBp0ASWpDWgySt3zm0gDWRaxqvZbdqlLvK/YTta+4ySay
dfkqMG4SGM2m7Rc6H+DKqhwADoyd3oVrFD7QWCZTUUm414TgFFk+uils8Pms6ulG
u+mZT29Jaq8UzoXLOmf+tX2K07oA98y0HfrGMAto3+c0x9ArIPrtwHuUGJiTdt3V
ShBPP9AzaBxjAoIBAQCF+3gwP2k/CQqKv+t035t9yuYVgrxBkNyxweJtmUj8nWLG
vdzIggOxdj3lMaqHIVEoMk+5c2uTkhevk8ideSOv7wWoZ1JUWrjIeF1F9QqvafXo
RqgIyfukmk5VVdhUzDs8B/xh97qfVIwXY5Wpl4+RRGnWkOGkZOMF1hhwqlzx7i+0
prp9P9aQ6n880lr66TSFMvMRi/ewPqsfkTT2txSMMyO32TAyAoo0gy3fNjt8CDlf
rZXmjdTV65OyCulFLi1kjb6zyV54FuHLO4Yw5qnFqLwK4ddY4XrKSzI3g+qWxIYX
jFAPpcE9MthlW8jlPjjaZ6/XKoW8WsBJLkP1HJm7AoIBAAm9J+HbWMIG9s3vz2Kc
SMnhnWWk+2CD4hb97bIQxu5ml7ieN1oGOB1LmN1Z7PPo03/47/J1s7p/OVsuGh7Q
vFXerHbcAjXMDo5iXxy58cu6GIBMkTVxdQigCnqeW1sQlbdHm1jo9GID5YySGNu2
+gRbli8cQj47dRjiK1w70XtltqT+ixL9nqJRNTk/rtj9d8GAwATUzmf6X8/Ev+EG
QYA/5Fyttm7OCtjlzNPpZr5Q9EqI4YurfkA/NqZRwXbNCbLTNgi/mwmOquIraqQ1
nvyqA8H7I01t/dwDd687V1xcSSAwWxGbhMoQae7BVOjnO5hnT8Kf81beKMOd70Ga
TEkCggEAI8ICJvOBouBO92330s8smVhxPi9tRCnOZ0mg5MoR8EJydbOrcRIap1w7
Ai0CTR6ziOgMaDbT52ouZ1u0l6izYAdBdeSaPOiiTLx8vEE+U7SpNR3zCesPtZB3
uvGOY2mVwyfZH2SUc4cs+uzDnAGhPqC7/RSFPMoctXf46YpGc9auyjdesE395KLX
L043DaE9/ng9B1jCnhu5TUyiUtAluHvRGQC32og6id2KUEhmhGCl5vj2KIVoDmI2
NpeBLCKuaBNi/rOG3zyHLjg1wCYidjE7vwjY6UyemjbW48LI8KN6Sl5rQdaDu+bG
lWI2XLI4C2zqDBVmEL2MuzL0FrWivQ==
-----END PRIVATE KEY-----

View File

@ -1,33 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIFxDCCA6wCAQIwDQYJKoZIhvcNAQENBQAwgacxCzAJBgNVBAYTAlVZMRMwEQYD
VQQIDApNb250ZXZpZGVvMRMwEQYDVQQHDApNb250ZXZpZGVvMRowGAYDVQQKDBFz
a3luZXQtZm91bmRhdGlvbjENMAsGA1UECwwEbm9uZTEcMBoGA1UEAwwTR3VpbGxl
cm1vIFJvZHJpZ3VlejElMCMGCSqGSIb3DQEJARYWZ3VpbGxlcm1vckBmaW5nLmVk
dS51eTAeFw0yMjEyMTExNTE1MDNaFw0zMjEyMDgxNTE1MDNaMIGnMQswCQYDVQQG
EwJVWTETMBEGA1UECAwKTW9udGV2aWRlbzETMBEGA1UEBwwKTW9udGV2aWRlbzEa
MBgGA1UECgwRc2t5bmV0LWZvdW5kYXRpb24xDTALBgNVBAsMBG5vbmUxHDAaBgNV
BAMME0d1aWxsZXJtbyBSb2RyaWd1ZXoxJTAjBgkqhkiG9w0BCQEWFmd1aWxsZXJt
b3JAZmluZy5lZHUudXkwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCy
AuCwwnoENeYeB0159sH47zedmRaxcUmC/qmVdUptzOxIHpUCSAIy+hoR5UOhnRsm
jj7Y0kUWtlwjbHAKHcuUn4sqLBb0nl6kH79DzP/4YCQM3GEIXzE6wy/zmnYrHz53
Ci7DzmMcRM3nMwXDVPPpKXzpmI/yassKxSltBKgbh65U3oOheiuFygOlAkT4fUaX
X5Bf9DECZBsjewf9WvHzLGN2eQt/YWYxJMstgAecHLlRmLbKoYD/P+O0K1ybmhMD
ItcXE49kNC4sRvq7MUt8B0bi8SlRxv5plAbZBiyMilrxf3yCCgYaTsqtt3x+CSrA
WjzYIzEzD5aZ1+s5O2jsqPYkbTvA4NT/hDnWHkkr7YcBRwQn1iMe2tMUTTsWotIY
WH87++BzDAWG3ZBkqNZ4mUdA3usk2ZPO0BwWNxlb0AqOlAJUYSoCsm3nBPT08rVv
umQ44hup6XPWL5KIDyL5+Fl8RDgDF8cpCfrijdL+U+GoHmmJYM6zMkrGqD7BD+WJ
gw9plgbaWUBIq4aimXF4PrBJAAX5IRyZK+EDDH0AREL3qoZIQVvJR+yGIKTixpyV
Ktj6jm1OY4GoiXxRLaFrc4ucT9+PxRHo9zYtNIijub4eXuU5nveswptmCsNa4spT
O2XCkHh6IE0ZB4oALC4lrC279WY+3TaOpv/roGzG9QIDAQABMA0GCSqGSIb3DQEB
DQUAA4ICAQBic+3ipdfvmCThWkDjVs97tkbUUNjGXH95okwI0Jbft0iRivVM16Xb
hqGquQK4OvYoSTHTmsMH19/dMj0W/Bd4IUYKl64rG8YJUbjDbO1y7a+wF2TaONyn
z0k3zRCky+IwxqYf9Ppw7s2/cXlt3fOEg0kBr4EooXd+bFCx/+JQIxU3vfL8cDQK
dp55vkh+ROt8eR7ai1FiAC8J1prswyT092ktco2fP0MI4uQ3iQfl07NyI68UV1E5
aIsOPU3SKMtxz5FLm8JEUVhZRJZJWQ/o/iB/2cdn4PDBGkrBhgU6ysMPNX51RlCM
aHRsMyoO2mFfIlm0jW0C5lZ6nKHuA1sXPFz1YxzpvnRgRlHUlfoKf1wpCeF+5Qz+
qylArHPSu69CA38wLCzJ3wWTaGVL1nuH1UPR2Pg71HGBYqLCD2XGa8iLShO1DKl7
1bAeHOvzryngYq35rky1L3cIquinAwCP4QKocJK3DJAD5lPqhpzO1f2/1BmWV9Ri
ZRrRkM/9AxePxGZEmnoQbwKsQs/bY+jGU2fRzqijxRPoX9ogX5Te/Ko0mQh1slbX
4bL9NIipHPgpNeZRmRUnu4z00UJNGrI/qGaont3eMH1V65WGz9VMYnmCxkmsg45e
skrauB/Ly9DRRZBddDwAQF8RIbpqPsfQTuEjF0sGdYH3LaClGbA/cA==
-----END CERTIFICATE-----

View File

@ -0,0 +1,25 @@
from python:3.11
env DEBIAN_FRONTEND=noninteractive
run apt-get update && apt-get install -y \
git
run curl -sSL https://install.python-poetry.org | python3 -
env PATH "/root/.local/bin:$PATH"
copy . /skynet
workdir /skynet
env POETRY_VIRTUALENVS_PATH /skynet/.venv
run poetry install
workdir /root/target
copy docker/entrypoint.sh /entrypoint.sh
entrypoint ["/entrypoint.sh"]
cmd ["skynet", "--help"]

View File

@ -0,0 +1,46 @@
from nvidia/cuda:11.8.0-devel-ubuntu20.04
from python:3.10
env DEBIAN_FRONTEND=noninteractive
run apt-get update && apt-get install -y \
git \
clang \
cmake \
ffmpeg \
libsm6 \
libxext6 \
ninja-build
env CC /usr/bin/clang
env CXX /usr/bin/clang++
# install llvm10 as required by llvm-lite
run git clone https://github.com/llvm/llvm-project.git -b llvmorg-10.0.1
workdir /llvm-project
# this adds a commit from 12.0.0 that fixes build on newer compilers
run git cherry-pick -n b498303066a63a203d24f739b2d2e0e56dca70d1
run cmake -S llvm -B build -G Ninja -DCMAKE_BUILD_TYPE=Release
run ninja -C build install # -j8
run curl -sSL https://install.python-poetry.org | python3 -
env PATH "/root/.local/bin:$PATH"
copy . /skynet
workdir /skynet
env POETRY_VIRTUALENVS_PATH /skynet/.venv
run poetry install --with=cuda -v
workdir /root/target
env PYTORCH_CUDA_ALLOC_CONF max_split_size_mb:128
env NVIDIA_VISIBLE_DEVICES=all
copy docker/entrypoint.sh /entrypoint.sh
entrypoint ["/entrypoint.sh"]
cmd ["skynet", "--help"]

View File

@ -0,0 +1,46 @@
from nvidia/cuda:11.8.0-devel-ubuntu20.04
from python:3.11
env DEBIAN_FRONTEND=noninteractive
run apt-get update && apt-get install -y \
git \
clang \
cmake \
ffmpeg \
libsm6 \
libxext6 \
ninja-build
env CC /usr/bin/clang
env CXX /usr/bin/clang++
# install llvm10 as required by llvm-lite
run git clone https://github.com/llvm/llvm-project.git -b llvmorg-10.0.1
workdir /llvm-project
# this adds a commit from 12.0.0 that fixes build on newer compilers
run git cherry-pick -n b498303066a63a203d24f739b2d2e0e56dca70d1
run cmake -S llvm -B build -G Ninja -DCMAKE_BUILD_TYPE=Release
run ninja -C build install # -j8
run curl -sSL https://install.python-poetry.org | python3 -
env PATH "/root/.local/bin:$PATH"
copy . /skynet
workdir /skynet
env POETRY_VIRTUALENVS_PATH /skynet/.venv
run poetry install --with=cuda -v
workdir /root/target
env PYTORCH_CUDA_ALLOC_CONF max_split_size_mb:128
env NVIDIA_VISIBLE_DEVICES=all
copy docker/entrypoint.sh /entrypoint.sh
entrypoint ["/entrypoint.sh"]
cmd ["skynet", "--help"]

View File

@ -0,0 +1,25 @@
from python:3.11
env DEBIAN_FRONTEND=noninteractive
run apt-get update && apt-get install -y \
git
run curl -sSL https://install.python-poetry.org | python3 -
env PATH "/root/.local/bin:$PATH"
copy . /skynet
workdir /skynet
env POETRY_VIRTUALENVS_PATH /skynet/.venv
run poetry install --with=frontend -v
workdir /root/target
copy docker/entrypoint.sh /entrypoint.sh
entrypoint ["/entrypoint.sh"]
cmd ["skynet", "--help"]

View File

@ -0,0 +1,20 @@
docker build \
-t guilledk/skynet:runtime \
-f docker/Dockerfile.runtime .
docker build \
-t guilledk/skynet:runtime-frontend \
-f docker/Dockerfile.runtime+frontend .
docker build \
-t guilledk/skynet:runtime-cuda-py311 \
-f docker/Dockerfile.runtime+cuda-py311 .
docker build \
-t guilledk/skynet:runtime-cuda \
-f docker/Dockerfile.runtime+cuda-py311 .
docker build \
-t guilledk/skynet:runtime-cuda-py310 \
-f docker/Dockerfile.runtime+cuda-py310 .

View File

@ -0,0 +1,8 @@
#!/bin/sh
export VIRTUAL_ENV='/skynet/.venv'
poetry env use $VIRTUAL_ENV/bin/python
poetry install
exec poetry run "$@"

View File

@ -0,0 +1,5 @@
docker push guilledk/skynet:runtime
docker push guilledk/skynet:runtime-frontend
docker push guilledk/skynet:runtime-cuda
docker push guilledk/skynet:runtime-cuda-py311
docker push guilledk/skynet:runtime-cuda-py310

36
launch_ipfs.sh 100755
View File

@ -0,0 +1,36 @@
#!/bin/bash
name='skynet-ipfs'
peers=("$@")
data_dir="$(pwd)/ipfs-docker-data"
data_target='/data/ipfs'
# Create data directory if it doesn't exist
mkdir -p "$data_dir"
# Run the container
docker run -d \
--name "$name" \
-p 8080:8080/tcp \
-p 4001:4001/tcp \
-p 127.0.0.1:5001:5001/tcp \
--mount type=bind,source="$data_dir",target="$data_target" \
--rm \
ipfs/go-ipfs:latest
# Change ownership
docker exec "$name" chown 1000:1000 -R "$data_target"
# Wait for Daemon to be ready
while read -r log; do
echo "$log"
if [[ "$log" == *"Daemon is ready"* ]]; then
break
fi
done < <(docker logs -f "$name")
# Connect to peers
for peer in "${peers[@]}"; do
docker exec "$name" ipfs swarm connect "$peer" || echo "Error connecting to peer: $peer"
done
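The script takes any number of peer multiaddrs as positional arguments and swarm-connects to each one once the daemon is ready; the address below is the gateway multiaddr from `skynet.toml.example`, used purely as an illustration:
```
# plain local node
./launch_ipfs.sh
# local node that also connects to the example gateway peer
./launch_ipfs.sh /ip4/169.197.140.154/tcp/4001/p2p/12D3KooWKWogLFNEcNNMKnzU7Snrnuj84RZdMBg3sLiQSQc51oEv
```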

3835
poetry.lock generated 100644

File diff suppressed because it is too large

2
poetry.toml 100644
View File

@ -0,0 +1,2 @@
[virtualenvs]
in-project = true

67
pyproject.toml 100644
View File

@ -0,0 +1,67 @@
[tool.poetry]
name = 'skynet'
version = '0.1a12'
description = 'Decentralized compute platform'
authors = ['Guillermo Rodriguez <guillermo@telos.net>']
license = 'AGPL'
readme = 'README.md'
[tool.poetry.dependencies]
python = '>=3.10,<3.12'
pytz = '^2023.3.post1'
trio = '^0.22.2'
asks = '^3.0.0'
Pillow = '^10.0.1'
docker = '^6.1.3'
py-leap = {git = 'https://github.com/guilledk/py-leap.git', rev = 'v0.1a14'}
toml = '^0.10.2'
[tool.poetry.group.frontend]
optional = true
[tool.poetry.group.frontend.dependencies]
triopg = {version = '^0.6.0'}
aiohttp = {version = '^3.8.5'}
psycopg2-binary = {version = '^2.9.7'}
pyTelegramBotAPI = {version = '^4.14.0'}
'discord.py' = {version = '^2.3.2'}
[tool.poetry.group.dev]
optional = true
[tool.poetry.group.dev.dependencies]
pdbpp = {version = '^0.10.3'}
pytest = {version = '^7.4.2'}
[tool.poetry.group.cuda]
optional = true
[tool.poetry.group.cuda.dependencies]
torch = {version = '2.0.1+cu118', source = 'torch'}
scipy = {version = '^1.11.2'}
numba = {version = '0.57.0'}
quart = {version = '^0.19.3'}
triton = {version = '2.0.0', source = 'torch'}
basicsr = {version = '^1.4.2'}
xformers = {version = '^0.0.22'}
hypercorn = {version = '^0.14.4'}
diffusers = {version = '^0.21.2'}
realesrgan = {version = '^0.3.0'}
quart-trio = {version = '^0.11.0'}
torchvision = {version = '0.15.2+cu118', source = 'torch'}
accelerate = {version = '^0.23.0'}
transformers = {version = '^4.33.2'}
huggingface-hub = {version = '^0.17.3'}
invisible-watermark = {version = '^0.2.0'}
[[tool.poetry.source]]
name = 'torch'
url = 'https://download.pytorch.org/whl/cu118'
priority = 'explicit'
[build-system]
requires = ['poetry-core', 'cython']
build-backend = 'poetry.core.masonry.api'
[tool.poetry.scripts]
skynet = 'skynet.cli:skynet'
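The optional dependency groups above correspond to the different runtime images; outside docker the same selection can be made with poetry directly (a sketch of the equivalent local installs):
```
# base install: CLI and chain client only
poetry install
# GPU worker extras, as used in the cuda Dockerfiles
poetry install --with=cuda
# telegram / discord frontends
poetry install --with=frontend
# development and test tooling
poetry install --with=dev
```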

View File

@ -1,4 +1,4 @@
[pytest] [pytest]
log_cli = True log_cli = True
log_level = info log_level = info
trio_mode = true trio_mode = True

View File

@ -1,8 +0,0 @@
scipy
triton
accelerate
transformers
huggingface_hub
diffusers[torch]
torch==1.13.0+cu117
--extra-index-url https://download.pytorch.org/whl/cu117

View File

@ -1 +0,0 @@
git+https://github.com/facebookresearch/xformers.git@main#egg=xformers

View File

@ -1,2 +0,0 @@
basicsr
realesrgan

View File

@ -1,6 +0,0 @@
pdbpp
pytest
pytest-trio
psycopg2-binary
git+https://github.com/guilledk/pytest-dockerctl.git@host_network#egg=pytest-dockerctl

View File

@ -1,11 +0,0 @@
trio
pynng
numpy
Pillow
triopg
aiohttp
msgspec
protobuf
pyOpenSSL
trio_asyncio
pyTelegramBotAPI

View File

@ -1,44 +0,0 @@
#!/usr/bin/python
'''Self signed x509 certificate generator
can look at generated file using openssl:
openssl x509 -inform pem -in selfsigned.crt -noout -text'''
import sys
from OpenSSL import crypto, SSL
from skynet.constants import DEFAULT_CERTS_DIR
def input_or_skip(txt, default):
i = input(f'[default: {default}]: {txt}')
if len(i) == 0:
return default
else:
return i
if __name__ == '__main__':
# create a key pair
k = crypto.PKey()
k.generate_key(crypto.TYPE_RSA, 4096)
# create a self-signed cert
cert = crypto.X509()
cert.get_subject().C = input('country name two char ISO code (example: US): ')
cert.get_subject().ST = input('state or province name (example: Texas): ')
cert.get_subject().L = input('locality name (example: Dallas): ')
cert.get_subject().O = input('organization name: ')
cert.get_subject().OU = input_or_skip('organizational unit name: ', 'none')
cert.get_subject().CN = input('common name: ')
cert.get_subject().emailAddress = input('email address: ')
cert.set_serial_number(int(input_or_skip('numberic serial number: ', 0)))
cert.gmtime_adj_notBefore(int(input_or_skip('amount of seconds until cert is valid: ', 0)))
cert.gmtime_adj_notAfter(int(input_or_skip('amount of seconds until cert expires: ', 10*365*24*60*60)))
cert.set_issuer(cert.get_subject())
cert.set_pubkey(k)
cert.sign(k, 'sha512')
with open(f'{DEFAULT_CERTS_DIR}/{sys.argv[1]}.cert', "wt") as f:
f.write(crypto.dump_certificate(crypto.FILETYPE_PEM, cert).decode("utf-8"))
with open(f'{DEFAULT_CERTS_DIR}/{sys.argv[1]}.key', "wt") as f:
f.write(crypto.dump_privatekey(crypto.FILETYPE_PEM, k).decode("utf-8"))

View File

@ -1,21 +0,0 @@
from setuptools import setup, find_packages
from skynet.constants import VERSION
setup(
name='skynet',
version=VERSION,
description='Decentralized compute platform',
author='Guillermo Rodriguez',
author_email='guillermo@telos.net',
packages=find_packages(),
entry_points={
'console_scripts': [
'skynet = skynet.cli:skynet',
'txt2img = skynet.cli:txt2img',
'img2img = skynet.cli:img2img',
'upscale = skynet.cli:upscale'
]
},
install_requires=['click']
)

View File

@ -0,0 +1,47 @@
# config sections are optional, depending on which services
# you wish to run
[skynet.dgpu]
account = 'testworkerX'
permission = 'active'
key = '5Xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
node_url = 'https://testnet.skygpu.net'
hyperion_url = 'https://testnet.skygpu.net'
ipfs_gateway_url = '/ip4/169.197.140.154/tcp/4001/p2p/12D3KooWKWogLFNEcNNMKnzU7Snrnuj84RZdMBg3sLiQSQc51oEv'
ipfs_url = 'http://127.0.0.1:5001'
hf_home = 'hf_home'
hf_token = 'hf_XxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXx'
auto_withdraw = true
non_compete = []
api_bind = '127.0.0.1:42690'
[skynet.telegram]
account = 'telegram'
permission = 'active'
key = '5Xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
node_url = 'https://testnet.skygpu.net'
hyperion_url = 'https://testnet.skygpu.net'
ipfs_gateway_url = '/ip4/169.197.140.154/tcp/4001/p2p/12D3KooWKWogLFNEcNNMKnzU7Snrnuj84RZdMBg3sLiQSQc51oEv'
ipfs_url = 'http://127.0.0.1:5001'
token = 'XXXXXXXXXX:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
[skynet.discord]
account = 'discord'
permission = 'active'
key = '5Xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
node_url = 'https://testnet.skygpu.net'
hyperion_url = 'https://testnet.skygpu.net'
ipfs_gateway_url = '/ip4/169.197.140.154/tcp/4001/p2p/12D3KooWKWogLFNEcNNMKnzU7Snrnuj84RZdMBg3sLiQSQc51oEv'
ipfs_url = 'http://127.0.0.1:5001'
token = 'XXXXXXXXXX:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
[skynet.pinner]
hyperion_url = 'https://testnet.skygpu.net'
ipfs_url = 'http://127.0.0.1:5001'
[skynet.user]
account = 'testuser'
permission = 'active'
key = '5Xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
node_url = 'https://testnet.skygpu.net'

View File

@ -1,504 +0,0 @@
#!/usr/bin/python
import time
import json
import uuid
import zlib
import logging
import traceback
from uuid import UUID
from pathlib import Path
from functools import partial
from contextlib import asynccontextmanager as acm
from collections import OrderedDict
import trio
import pynng
import trio_asyncio
from pynng import TLSConfig
from OpenSSL.crypto import (
load_privatekey,
load_certificate,
FILETYPE_PEM
)
from .db import *
from .constants import *
from .protobuf import *
class SkynetDGPUOffline(BaseException):
...
class SkynetDGPUOverloaded(BaseException):
...
class SkynetDGPUComputeError(BaseException):
...
class SkynetShutdownRequested(BaseException):
...
@acm
async def open_rpc_service(sock, dgpu_bus, db_pool, tls_whitelist, tls_key):
nodes = OrderedDict()
wip_reqs = {}
fin_reqs = {}
heartbeats = {}
next_worker: Optional[int] = None
security = len(tls_whitelist) > 0
def connect_node(uid):
nonlocal next_worker
nodes[uid] = {
'task': None
}
logging.info(f'dgpu online: {uid}')
if not next_worker:
next_worker = 0
def disconnect_node(uid):
nonlocal next_worker
if uid not in nodes:
return
i = list(nodes.keys()).index(uid)
del nodes[uid]
if i < next_worker:
next_worker -= 1
if len(nodes) == 0:
logging.info('nw: None')
next_worker = None
logging.warning(f'dgpu offline: {uid}')
def is_worker_busy(nid: str):
return nodes[nid]['task'] != None
def are_all_workers_busy():
for nid in nodes.keys():
if not is_worker_busy(nid):
return False
return True
def get_next_worker():
nonlocal next_worker
logging.info('get next_worker called')
logging.info(f'pre next_worker: {next_worker}')
if next_worker == None:
raise SkynetDGPUOffline('No workers connected, try again later')
if are_all_workers_busy():
raise SkynetDGPUOverloaded('All workers are busy at the moment')
nid = list(nodes.keys())[next_worker]
while is_worker_busy(nid):
next_worker += 1
if next_worker >= len(nodes):
next_worker = 0
nid = list(nodes.keys())[next_worker]
next_worker += 1
if next_worker >= len(nodes):
next_worker = 0
logging.info(f'post next_worker: {next_worker}')
return nid
async def dgpu_heartbeat_service():
nonlocal heartbeats
while True:
await trio.sleep(60)
rid = uuid.uuid4().hex
beat_msg = DGPUBusMessage(
rid=rid,
nid='',
method='heartbeat'
)
heartbeats.clear()
heartbeats[rid] = int(time.time() * 1000)
await dgpu_bus.asend(beat_msg.SerializeToString())
logging.info('sent heartbeat')
async def dgpu_bus_streamer():
nonlocal wip_reqs, fin_reqs, heartbeats
while True:
raw_msg = await dgpu_bus.arecv()
logging.info(f'streamer got {len(raw_msg)} bytes.')
msg = DGPUBusMessage()
msg.ParseFromString(raw_msg)
if security:
verify_protobuf_msg(msg, tls_whitelist[msg.auth.cert])
rid = msg.rid
if msg.method == 'heartbeat':
sent_time = heartbeats[rid]
delta = msg.params['time'] - sent_time
logging.info(f'got heartbeat reply from {msg.nid}, ping: {delta}')
continue
if rid not in wip_reqs:
continue
if msg.method == 'binary-reply':
logging.info('bin reply, recv extra data')
raw_img = await dgpu_bus.arecv()
msg = (msg, raw_img)
fin_reqs[rid] = msg
event = wip_reqs[rid]
event.set()
del wip_reqs[rid]
async def dgpu_stream_one_img(req: DiffusionParameters, img_buf=None):
nonlocal wip_reqs, fin_reqs, next_worker
nid = get_next_worker()
idx = list(nodes.keys()).index(nid)
logging.info(f'dgpu_stream_one_img {idx}/{len(nodes)} {nid}')
rid = uuid.uuid4().hex
ack_event = trio.Event()
img_event = trio.Event()
wip_reqs[rid] = ack_event
nodes[nid]['task'] = rid
dgpu_req = DGPUBusMessage(
rid=rid,
nid=nid,
method='diffuse')
dgpu_req.params.update(req.to_dict())
if security:
dgpu_req.auth.cert = 'skynet'
dgpu_req.auth.sig = sign_protobuf_msg(dgpu_req, tls_key)
msg = dgpu_req.SerializeToString()
if img_buf:
logging.info(f'sending img of size {len(img_buf)} as attachment')
logging.info(img_buf[:10])
msg = f'BINEXT%$%$'.encode() + msg + b'%$%$' + img_buf
await dgpu_bus.asend(msg)
with trio.move_on_after(4):
await ack_event.wait()
logging.info(f'ack event: {ack_event.is_set()}')
if not ack_event.is_set():
disconnect_node(nid)
raise SkynetDGPUOffline('dgpu failed to acknowledge request')
ack_msg = fin_reqs[rid]
if 'ack' not in ack_msg.params:
disconnect_node(nid)
raise SkynetDGPUOffline('dgpu failed to acknowledge request')
wip_reqs[rid] = img_event
with trio.move_on_after(30):
await img_event.wait()
logging.info(f'img event: {ack_event.is_set()}')
if not img_event.is_set():
disconnect_node(nid)
raise SkynetDGPUComputeError('30 seconds timeout while processing request')
nodes[nid]['task'] = None
resp = fin_reqs[rid]
del fin_reqs[rid]
if isinstance(resp, tuple):
meta, img = resp
return rid, img, meta.params
raise SkynetDGPUComputeError(MessageToDict(resp.params))
async def handle_user_request(rpc_ctx, req):
try:
async with db_pool.acquire() as conn:
user = await get_or_create_user(conn, req.uid)
result = {}
match req.method:
case 'txt2img':
logging.info('txt2img')
user_config = {**(await get_user_config(conn, user))}
del user_config['id']
user_config.update(MessageToDict(req.params))
req = DiffusionParameters(**user_config, image=False)
rid, img, meta = await dgpu_stream_one_img(req)
logging.info(f'done streaming {rid}')
result = {
'id': rid,
'img': img.hex(),
'meta': meta
}
await update_user_stats(conn, user, last_prompt=user_config['prompt'])
logging.info('updated user stats.')
case 'img2img':
logging.info('img2img')
user_config = {**(await get_user_config(conn, user))}
del user_config['id']
params = MessageToDict(req.params)
img_buf = bytes.fromhex(params['img'])
del params['img']
user_config.update(params)
req = DiffusionParameters(**user_config, image=True)
if not req.image:
raise AssertionError('Didn\'t enable image flag for img2img?')
rid, img, meta = await dgpu_stream_one_img(req, img_buf=img_buf)
logging.info(f'done streaming {rid}')
result = {
'id': rid,
'img': img.hex(),
'meta': meta
}
await update_user_stats(conn, user, last_prompt=user_config['prompt'])
logging.info('updated user stats.')
case 'redo':
logging.info('redo')
user_config = {**(await get_user_config(conn, user))}
del user_config['id']
prompt = await get_last_prompt_of(conn, user)
if prompt:
req = DiffusionParameters(
prompt=prompt,
**user_config,
image=False
)
rid, img, meta = await dgpu_stream_one_img(req)
result = {
'id': rid,
'img': img.hex(),
'meta': meta
}
await update_user_stats(conn, user)
logging.info('updated user stats.')
else:
result = {
'error': 'skynet_no_last_prompt',
'message': 'No prompt to redo, do txt2img first'
}
case 'config':
logging.info('config')
if req.params['attr'] in CONFIG_ATTRS:
logging.info(f'update: {req.params}')
await update_user_config(
conn, user, req.params['attr'], req.params['val'])
logging.info('done')
else:
logging.warning(f'{req.params["attr"]} not in {CONFIG_ATTRS}')
case 'stats':
logging.info('stats')
generated, joined, role = await get_user_stats(conn, user)
result = {
'generated': generated,
'joined': joined.strftime(DATE_FORMAT),
'role': role
}
case _:
logging.warn('unknown method')
except SkynetDGPUOffline as e:
result = {
'error': 'skynet_dgpu_offline',
'message': str(e)
}
except SkynetDGPUOverloaded as e:
result = {
'error': 'skynet_dgpu_overloaded',
'message': str(e),
'nodes': len(nodes)
}
except SkynetDGPUComputeError as e:
result = {
'error': 'skynet_dgpu_compute_error',
'message': str(e)
}
except BaseException as e:
traceback.print_exception(type(e), e, e.__traceback__)
result = {
'error': 'skynet_internal_error',
'message': str(e)
}
resp = SkynetRPCResponse()
resp.result.update(result)
if security:
resp.auth.cert = 'skynet'
resp.auth.sig = sign_protobuf_msg(resp, tls_key)
logging.info('sending response')
await rpc_ctx.asend(resp.SerializeToString())
rpc_ctx.close()
logging.info('done')
async def request_service(n):
nonlocal next_worker
while True:
ctx = sock.new_context()
req = SkynetRPCRequest()
req.ParseFromString(await ctx.arecv())
if security:
if req.auth.cert not in tls_whitelist:
logging.warning(
f'{req.cert} not in tls whitelist and security=True')
continue
try:
verify_protobuf_msg(req, tls_whitelist[req.auth.cert])
except ValueError:
logging.warning(
f'{req.cert} sent an unauthenticated msg with security=True')
continue
result = {}
match req.method:
case 'skynet_shutdown':
raise SkynetShutdownRequested
case 'dgpu_online':
connect_node(req.uid)
case 'dgpu_offline':
disconnect_node(req.uid)
case 'dgpu_workers':
result = len(nodes)
case 'dgpu_next':
result = next_worker
case 'heartbeat':
logging.info('beat')
result = {'time': time.time()}
case _:
n.start_soon(
handle_user_request, ctx, req)
continue
resp = SkynetRPCResponse()
resp.result.update({'ok': result})
if security:
resp.auth.cert = 'skynet'
resp.auth.sig = sign_protobuf_msg(resp, tls_key)
await ctx.asend(resp.SerializeToString())
ctx.close()
async with trio.open_nursery() as n:
n.start_soon(dgpu_bus_streamer)
n.start_soon(dgpu_heartbeat_service)
n.start_soon(request_service, n)
logging.info('starting rpc service')
yield
logging.info('stopping rpc service')
n.cancel_scope.cancel()
@acm
async def run_skynet(
db_user: str = DB_USER,
db_pass: str = DB_PASS,
db_host: str = DB_HOST,
rpc_address: str = DEFAULT_RPC_ADDR,
dgpu_address: str = DEFAULT_DGPU_ADDR,
security: bool = True
):
logging.basicConfig(level=logging.INFO)
logging.info('skynet is starting')
tls_config = None
if security:
# load tls certs
certs_dir = Path(DEFAULT_CERTS_DIR).resolve()
tls_key_data = (certs_dir / DEFAULT_CERT_SKYNET_PRIV).read_text()
tls_key = load_privatekey(FILETYPE_PEM, tls_key_data)
tls_cert_data = (certs_dir / DEFAULT_CERT_SKYNET_PUB).read_text()
tls_cert = load_certificate(FILETYPE_PEM, tls_cert_data)
tls_whitelist = {}
for cert_path in (certs_dir / 'whitelist').glob('*.cert'):
tls_whitelist[cert_path.stem] = load_certificate(
FILETYPE_PEM, cert_path.read_text())
cert_start = tls_cert_data.index('\n') + 1
logging.info(f'tls_cert: {tls_cert_data[cert_start:cert_start+64]}...')
logging.info(f'tls_whitelist len: {len(tls_whitelist)}')
rpc_address = 'tls+' + rpc_address
dgpu_address = 'tls+' + dgpu_address
tls_config = TLSConfig(
TLSConfig.MODE_SERVER,
own_key_string=tls_key_data,
own_cert_string=tls_cert_data)
with (
pynng.Rep0(recv_max_size=0) as rpc_sock,
pynng.Bus0(recv_max_size=0) as dgpu_bus
):
async with open_database_connection(
db_user, db_pass, db_host) as db_pool:
logging.info('connected to db.')
if security:
rpc_sock.tls_config = tls_config
dgpu_bus.tls_config = tls_config
rpc_sock.listen(rpc_address)
dgpu_bus.listen(dgpu_address)
try:
async with open_rpc_service(
rpc_sock, dgpu_bus, db_pool, tls_whitelist, tls_key):
yield
except SkynetShutdownRequested:
...
logging.info('disconnected from db.')

543
skynet/cli.py 100644 → 100755
View File

@ -1,25 +1,17 @@
#!/usr/bin/python #!/usr/bin/python
import importlib.util
torch_enabled = importlib.util.find_spec('torch') != None
import os
import json import json
import logging
import random
from typing import Optional
from functools import partial from functools import partial
import trio
import click import click
import trio_asyncio
if torch_enabled: from leap.sugar import Name, asset_from_str
from . import utils
from .dgpu import open_dgpu_node
from .brain import run_skynet from .config import *
from .constants import ALGOS, DEFAULT_RPC_ADDR, DEFAULT_DGPU_ADDR from .constants import *
from .frontend.telegram import run_skynet_telegram
@click.group() @click.group()
@ -38,11 +30,16 @@ def skynet(*args, **kwargs):
@click.option('--steps', '-s', default=26) @click.option('--steps', '-s', default=26)
@click.option('--seed', '-S', default=None) @click.option('--seed', '-S', default=None)
def txt2img(*args, **kwargs): def txt2img(*args, **kwargs):
assert 'HF_TOKEN' in os.environ from . import utils
utils.txt2img(os.environ['HF_TOKEN'], **kwargs)
config = load_skynet_toml()
hf_token = load_key(config, 'skynet.dgpu.hf_token')
hf_home = load_key(config, 'skynet.dgpu.hf_home')
set_hf_vars(hf_token, hf_home)
utils.txt2img(hf_token, **kwargs)
@click.command() @click.command()
@click.option('--model', '-m', default='midj') @click.option('--model', '-m', default=list(MODELS.keys())[0])
@click.option( @click.option(
'--prompt', '-p', default='a red old tractor in a sunny wheat field') '--prompt', '-p', default='a red old tractor in a sunny wheat field')
@click.option('--input', '-i', default='input.png') @click.option('--input', '-i', default='input.png')
@ -52,9 +49,13 @@ def txt2img(*args, **kwargs):
@click.option('--steps', '-s', default=26) @click.option('--steps', '-s', default=26)
@click.option('--seed', '-S', default=None) @click.option('--seed', '-S', default=None)
def img2img(model, prompt, input, output, strength, guidance, steps, seed): def img2img(model, prompt, input, output, strength, guidance, steps, seed):
assert 'HF_TOKEN' in os.environ from . import utils
config = load_skynet_toml()
hf_token = load_key(config, 'skynet.dgpu.hf_token')
hf_home = load_key(config, 'skynet.dgpu.hf_home')
set_hf_vars(hf_token, hf_home)
utils.img2img( utils.img2img(
os.environ['HF_TOKEN'], hf_token,
model=model, model=model,
prompt=prompt, prompt=prompt,
img_path=input, img_path=input,
@ -70,101 +71,451 @@ def img2img(model, prompt, input, output, strength, guidance, steps, seed):
@click.option('--output', '-o', default='output.png') @click.option('--output', '-o', default='output.png')
@click.option('--model', '-m', default='weights/RealESRGAN_x4plus.pth') @click.option('--model', '-m', default='weights/RealESRGAN_x4plus.pth')
def upscale(input, output, model): def upscale(input, output, model):
from . import utils
utils.upscale( utils.upscale(
img_path=input, img_path=input,
output=output, output=output,
model_path=model) model_path=model)
@skynet.command()
def download():
from . import utils
config = load_skynet_toml()
hf_token = load_key(config, 'skynet.dgpu.hf_token')
hf_home = load_key(config, 'skynet.dgpu.hf_home')
set_hf_vars(hf_token, hf_home)
utils.download_all_models(hf_token, hf_home)
@skynet.command()
@click.option(
'--reward', '-r', default='20.0000 GPU')
@click.option('--jobs', '-j', default=1)
@click.option('--model', '-m', default='stabilityai/stable-diffusion-xl-base-1.0')
@click.option(
'--prompt', '-p', default='a red old tractor in a sunny wheat field')
@click.option('--output', '-o', default='output.png')
@click.option('--width', '-w', default=1024)
@click.option('--height', '-h', default=1024)
@click.option('--guidance', '-g', default=10)
@click.option('--step', '-s', default=26)
@click.option('--seed', '-S', default=None)
@click.option('--upscaler', '-U', default='x4')
@click.option('--binary_data', '-b', default='')
@click.option('--strength', '-Z', default=None)
def enqueue(
reward: str,
jobs: int,
**kwargs
):
import trio
from leap.cleos import CLEOS
config = load_skynet_toml()
key = load_key(config, 'skynet.user.key')
account = load_key(config, 'skynet.user.account')
permission = load_key(config, 'skynet.user.permission')
node_url = load_key(config, 'skynet.user.node_url')
cleos = CLEOS(None, None, url=node_url, remote=node_url)
binary = kwargs['binary_data']
if not kwargs['strength']:
if binary:
raise ValueError('strength -Z param required if binary data passed')
del kwargs['strength']
else:
kwargs['strength'] = float(kwargs['strength'])
async def enqueue_n_jobs():
for i in range(jobs):
if not kwargs['seed']:
kwargs['seed'] = random.randint(0, 10e9)
req = json.dumps({
'method': 'diffuse',
'params': kwargs
})
res = await cleos.a_push_action(
'telos.gpu',
'enqueue',
{
'user': Name(account),
'request_body': req,
'binary_data': binary,
'reward': asset_from_str(reward),
'min_verification': 1
},
account, key, permission,
)
print(res)
trio.run(enqueue_n_jobs)
@skynet.command()
@click.option('--loglevel', '-l', default='INFO', help='Logging level')
def clean(
loglevel: str,
):
import trio
from leap.cleos import CLEOS
config = load_skynet_toml()
key = load_key(config, 'skynet.user.key')
account = load_key(config, 'skynet.user.account')
permission = load_key(config, 'skynet.user.permission')
node_url = load_key(config, 'skynet.user.node_url')
logging.basicConfig(level=loglevel)
cleos = CLEOS(None, None, url=node_url, remote=node_url)
trio.run(
partial(
cleos.a_push_action,
'telos.gpu',
'clean',
{},
account, key, permission=permission
)
)
@skynet.command()
def queue():
import requests
config = load_skynet_toml()
node_url = load_key(config, 'skynet.user.node_url')
resp = requests.post(
f'{node_url}/v1/chain/get_table_rows',
json={
'code': 'telos.gpu',
'table': 'queue',
'scope': 'telos.gpu',
'json': True
}
)
print(json.dumps(resp.json(), indent=4))
@skynet.command()
@click.argument('request-id')
def status(request_id: int):
import requests
config = load_skynet_toml()
node_url = load_key(config, 'skynet.user.node_url')
resp = requests.post(
f'{node_url}/v1/chain/get_table_rows',
json={
'code': 'telos.gpu',
'table': 'status',
'scope': request_id,
'json': True
}
)
print(json.dumps(resp.json(), indent=4))
@skynet.command()
@click.argument('request-id')
def dequeue(request_id: int):
import trio
from leap.cleos import CLEOS
config = load_skynet_toml()
key = load_key(config, 'skynet.user.key')
account = load_key(config, 'skynet.user.account')
permission = load_key(config, 'skynet.user.permission')
node_url = load_key(config, 'skynet.user.node_url')
cleos = CLEOS(None, None, url=node_url, remote=node_url)
res = trio.run(
partial(
cleos.a_push_action,
'telos.gpu',
'dequeue',
{
'user': Name(account),
'request_id': int(request_id),
},
account, key, permission=permission
)
)
print(res)
@skynet.command()
@click.option(
'--token-contract', '-c', default='eosio.token')
@click.option(
'--token-symbol', '-S', default='4,GPU')
def config(
token_contract: str,
token_symbol: str
):
import trio
from leap.cleos import CLEOS
config = load_skynet_toml()
key = load_key(config, 'skynet.user.key')
account = load_key(config, 'skynet.user.account')
permission = load_key(config, 'skynet.user.permission')
node_url = load_key(config, 'skynet.user.node_url')
cleos = CLEOS(None, None, url=node_url, remote=node_url)
res = trio.run(
partial(
cleos.a_push_action,
'telos.gpu',
'config',
{
'token_contract': token_contract,
'token_symbol': token_symbol,
},
account, key, permission=permission
)
)
print(res)
@skynet.command()
@click.argument('quantity')
def deposit(quantity: str):
import trio
from leap.cleos import CLEOS
config = load_skynet_toml()
key = load_key(config, 'skynet.user.key')
account = load_key(config, 'skynet.user.account')
permission = load_key(config, 'skynet.user.permission')
node_url = load_key(config, 'skynet.user.node_url')
cleos = CLEOS(None, None, url=node_url, remote=node_url)
res = trio.run(
partial(
cleos.a_push_action,
'telos.gpu',
'transfer',
{
'sender': Name(account),
'recipient': Name('telos.gpu'),
'amount': asset_from_str(quantity),
'memo': f'{account} transferred {quantity} to telos.gpu'
},
account, key, permission=permission
)
)
print(res)
@skynet.group() @skynet.group()
def run(*args, **kwargs): def run(*args, **kwargs):
pass pass
@run.command()
def db():
from .db import open_new_database
logging.basicConfig(level=logging.INFO)
with open_new_database(cleanup=False) as db_params:
container, passwd, host = db_params
logging.info(('skynet', passwd, host))
@run.command() @run.command()
@click.option('--loglevel', '-l', default='warning', help='Logging level') def nodeos():
from .nodeos import open_nodeos
logging.basicConfig(filename='skynet-nodeos.log', level=logging.INFO)
with open_nodeos(cleanup=False):
...
@run.command()
@click.option('--loglevel', '-l', default='INFO', help='Logging level')
@click.option( @click.option(
'--host', '-H', default=DEFAULT_RPC_ADDR) '--config-path', '-c', default=DEFAULT_CONFIG_PATH)
@click.option( def dgpu(
'--host-dgpu', '-D', default=DEFAULT_DGPU_ADDR) loglevel: str,
config_path: str
):
import trio
from .dgpu import open_dgpu_node
logging.basicConfig(level=loglevel)
config = load_skynet_toml(file_path=config_path)
hf_token = load_key(config, 'skynet.dgpu.hf_token')
hf_home = load_key(config, 'skynet.dgpu.hf_home')
set_hf_vars(hf_token, hf_home)
assert 'skynet' in config
assert 'dgpu' in config['skynet']
trio.run(open_dgpu_node, config['skynet']['dgpu'])
@run.command()
@click.option('--loglevel', '-l', default='INFO', help='logging level')
@click.option( @click.option(
'--db-host', '-h', default='localhost:5432') '--db-host', '-h', default='localhost:5432')
@click.option( @click.option(
'--db-pass', '-p', default='password') '--db-user', '-u', default='skynet')
def brain(
loglevel: str,
host: str,
host_dgpu: str,
db_host: str,
db_pass: str
):
async def _run_skynet():
async with run_skynet(
db_host=db_host,
db_pass=db_pass,
rpc_address=host,
dgpu_address=host_dgpu
):
await trio.sleep_forever()
trio_asyncio.run(_run_skynet)
@run.command()
@click.option('--loglevel', '-l', default='warning', help='Logging level')
@click.option( @click.option(
'--uid', '-u', required=True) '--db-pass', '-u', default='password')
@click.option(
'--key', '-k', default='dgpu')
@click.option(
'--cert', '-c', default='whitelist/dgpu')
@click.option(
'--algos', '-a', default=json.dumps(['midj']))
@click.option(
'--rpc', '-r', default=DEFAULT_RPC_ADDR)
@click.option(
'--dgpu', '-d', default=DEFAULT_DGPU_ADDR)
def dgpu(
loglevel: str,
uid: str,
key: str,
cert: str,
algos: str,
rpc: str,
dgpu: str
):
trio.run(
partial(
open_dgpu_node,
cert,
uid,
key_name=key,
rpc_address=rpc,
dgpu_address=dgpu,
initial_algos=json.loads(algos)
))
@run.command()
@click.option('--loglevel', '-l', default='warning', help='Logging level')
@click.option(
'--key', '-k', default='telegram-frontend')
@click.option(
'--cert', '-c', default='whitelist/telegram-frontend')
@click.option(
'--rpc', '-r', default=DEFAULT_RPC_ADDR)
def telegram( def telegram(
loglevel: str, loglevel: str,
key: str, db_host: str,
cert: str, db_user: str,
rpc: str db_pass: str
): ):
assert 'TG_TOKEN' in os.environ import asyncio
trio_asyncio.run( from .frontend.telegram import SkynetTelegramFrontend
partial(
run_skynet_telegram, logging.basicConfig(level=loglevel)
os.environ['TG_TOKEN'],
key_name=key, config = load_skynet_toml()
cert_name=cert, tg_token = load_key(config, 'skynet.telegram.tg_token')
rpc_address=rpc
)) key = load_key(config, 'skynet.telegram.key')
account = load_key(config, 'skynet.telegram.account')
permission = load_key(config, 'skynet.telegram.permission')
node_url = load_key(config, 'skynet.telegram.node_url')
hyperion_url = load_key(config, 'skynet.telegram.hyperion_url')
try:
ipfs_gateway_url = load_key(config, 'skynet.telegram.ipfs_gateway_url')
except ConfigParsingError:
ipfs_gateway_url = None
ipfs_url = load_key(config, 'skynet.telegram.ipfs_url')
try:
explorer_domain = load_key(config, 'skynet.telegram.explorer_domain')
except ConfigParsingError:
explorer_domain = DEFAULT_EXPLORER_DOMAIN
try:
ipfs_domain = load_key(config, 'skynet.telegram.ipfs_domain')
except ConfigParsingError:
ipfs_domain = DEFAULT_IPFS_DOMAIN
async def _async_main():
frontend = SkynetTelegramFrontend(
tg_token,
account,
permission,
node_url,
hyperion_url,
db_host, db_user, db_pass,
ipfs_url,
remote_ipfs_node=ipfs_gateway_url,
key=key,
explorer_domain=explorer_domain,
ipfs_domain=ipfs_domain
)
async with frontend.open():
await frontend.bot.infinity_polling()
asyncio.run(_async_main())
@run.command()
@click.option('--loglevel', '-l', default='INFO', help='logging level')
@click.option(
'--db-host', '-h', default='localhost:5432')
@click.option(
'--db-user', '-u', default='skynet')
@click.option(
'--db-pass', '-u', default='password')
def discord(
loglevel: str,
db_host: str,
db_user: str,
db_pass: str
):
import asyncio
from .frontend.discord import SkynetDiscordFrontend
logging.basicConfig(level=loglevel)
config = load_skynet_toml()
dc_token = load_key(config, 'skynet.discord.dc_token')
key = load_key(config, 'skynet.discord.key')
account = load_key(config, 'skynet.discord.account')
permission = load_key(config, 'skynet.discord.permission')
node_url = load_key(config, 'skynet.discord.node_url')
hyperion_url = load_key(config, 'skynet.discord.hyperion_url')
ipfs_gateway_url = load_key(config, 'skynet.discord.ipfs_gateway_url')
ipfs_url = load_key(config, 'skynet.discord.ipfs_url')
try:
explorer_domain = load_key(config, 'skynet.discord.explorer_domain')
except ConfigParsingError:
explorer_domain = DEFAULT_EXPLORER_DOMAIN
try:
ipfs_domain = load_key(config, 'skynet.discord.ipfs_domain')
except ConfigParsingError:
ipfs_domain = DEFAULT_IPFS_DOMAIN
async def _async_main():
frontend = SkynetDiscordFrontend(
# dc_token,
account,
permission,
node_url,
hyperion_url,
db_host, db_user, db_pass,
ipfs_url,
remote_ipfs_node=ipfs_gateway_url,
key=key,
explorer_domain=explorer_domain,
ipfs_domain=ipfs_domain
)
async with frontend.open():
await frontend.bot.start(dc_token)
asyncio.run(_async_main())
@run.command()
@click.option('--loglevel', '-l', default='INFO', help='logging level')
@click.option('--name', '-n', default='skynet-ipfs', help='container name')
@click.option('--peer', '-p', default=(), help='connect to peer', multiple=True, type=str)
def ipfs(loglevel, name, peer):
from skynet.ipfs.docker import open_ipfs_node
logging.basicConfig(level=loglevel)
with open_ipfs_node(name=name, peers=peer):
...
@run.command()
@click.option('--loglevel', '-l', default='INFO', help='logging level')
def pinner(loglevel):
import trio
from leap.hyperion import HyperionAPI
from .ipfs import AsyncIPFSHTTP
from .ipfs.pinner import SkynetPinner
config = load_skynet_toml()
hyperion_url = load_key(config, 'skynet.pinner.hyperion_url')
ipfs_url = load_key(config, 'skynet.pinner.ipfs_url')
logging.basicConfig(level=loglevel)
ipfs_node = AsyncIPFSHTTP(ipfs_url)
hyperion = HyperionAPI(hyperion_url)
pinner = SkynetPinner(hyperion, ipfs_node)
trio.run(pinner.pin_forever)
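With the config-driven entrypoints above, every service is started through the same `skynet run` group; for example (option names exactly as defined above, values purely illustrative):
```
# GPU worker with an explicit config path and log level
skynet run dgpu -c skynet.toml -l DEBUG
# telegram frontend, database credentials still passed as flags
skynet run telegram --db-host localhost:5432 --db-user skynet --db-pass password
# IPFS pinning service
skynet run pinner -l INFO
```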

33
skynet/config.py 100755
View File

@ -0,0 +1,33 @@
#!/usr/bin/python
import os
import toml
from pathlib import Path
from .constants import DEFAULT_CONFIG_PATH
class ConfigParsingError(BaseException):
...
def load_skynet_toml(file_path=DEFAULT_CONFIG_PATH) -> dict:
config = toml.load(file_path)
return config
def load_key(config: dict, key: str) -> str:
for skey in key.split('.'):
if skey not in config:
conf_keys = [k for k in config]
raise ConfigParsingError(f'key \"{skey}\" not in {conf_keys}')
config = config[skey]
return config
def set_hf_vars(hf_token: str, hf_home: str):
os.environ['HF_TOKEN'] = hf_token
os.environ['HF_HOME'] = hf_home
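A short sketch of how these helpers fit together, following the same required/optional pattern the CLI commands use:
```
# read required and optional values from skynet.toml
from skynet.config import load_skynet_toml, load_key, ConfigParsingError
from skynet.constants import DEFAULT_EXPLORER_DOMAIN

config = load_skynet_toml()  # defaults to DEFAULT_CONFIG_PATH ('skynet.toml')

# required key: raises ConfigParsingError if the section or key is missing
account = load_key(config, 'skynet.telegram.account')

# optional key: fall back to the built-in default
try:
    explorer_domain = load_key(config, 'skynet.telegram.explorer_domain')
except ConfigParsingError:
    explorer_domain = DEFAULT_EXPLORER_DOMAIN
```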

123
skynet/constants.py 100644 → 100755
View File

@ -1,26 +1,36 @@
#!/usr/bin/python #!/usr/bin/python
VERSION = '0.1a8' VERSION = '0.1a12'
DOCKER_RUNTIME_CUDA = 'skynet:runtime-cuda' DOCKER_RUNTIME_CUDA = 'skynet:runtime-cuda'
DB_HOST = 'localhost:5432' MODELS = {
DB_USER = 'skynet' 'prompthero/openjourney': {'short': 'midj', 'mem': 6},
DB_PASS = 'password' 'runwayml/stable-diffusion-v1-5': {'short': 'stable', 'mem': 6},
DB_NAME = 'skynet' 'stabilityai/stable-diffusion-2-1-base': {'short': 'stable2', 'mem': 6},
'snowkidy/stable-diffusion-xl-base-0.9': {'short': 'stablexl0.9', 'mem': 8.3},
'Linaqruf/anything-v3.0': {'short': 'hdanime', 'mem': 6},
'hakurei/waifu-diffusion': {'short': 'waifu', 'mem': 6},
'nitrosocke/Ghibli-Diffusion': {'short': 'ghibli', 'mem': 6},
'dallinmackay/Van-Gogh-diffusion': {'short': 'van-gogh', 'mem': 6},
'lambdalabs/sd-pokemon-diffusers': {'short': 'pokemon', 'mem': 6},
'Envvi/Inkpunk-Diffusion': {'short': 'ink', 'mem': 6},
'nousr/robo-diffusion': {'short': 'robot', 'mem': 6},
ALGOS = { # default is always last
'midj': 'prompthero/openjourney', 'stabilityai/stable-diffusion-xl-base-1.0': {'short': 'stablexl', 'mem': 8.3},
'stable': 'runwayml/stable-diffusion-v1-5',
'hdanime': 'Linaqruf/anything-v3.0',
'waifu': 'hakurei/waifu-diffusion',
'ghibli': 'nitrosocke/Ghibli-Diffusion',
'van-gogh': 'dallinmackay/Van-Gogh-diffusion',
'pokemon': 'lambdalabs/sd-pokemon-diffusers',
'ink': 'Envvi/Inkpunk-Diffusion',
'robot': 'nousr/robo-diffusion'
} }
SHORT_NAMES = [
model_info['short']
for model_info in MODELS.values()
]
def get_model_by_shortname(short: str):
for model, info in MODELS.items():
if short == info['short']:
return model
N = '\n' N = '\n'
HELP_TEXT = f''' HELP_TEXT = f'''
test art bot v{VERSION} test art bot v{VERSION}
@ -29,6 +39,7 @@ commands work on a user per user basis!
config is individual to each user! config is individual to each user!
/txt2img TEXT - request an image based on a prompt /txt2img TEXT - request an image based on a prompt
/img2img <attach_image> TEXT - request an image base on an image and a prompt
/redo - redo last command (only works for txt2img for now!) /redo - redo last command (only works for txt2img for now!)
@ -40,18 +51,22 @@ config is individual to each user!
/donate - see donation info /donate - see donation info
/config algo NAME - select AI to use one of: /config algo NAME - select AI to use one of:
/config model NAME - select AI to use one of:
{N.join(ALGOS.keys())} {N.join(SHORT_NAMES)}
/config step NUMBER - set amount of iterations /config step NUMBER - set amount of iterations
/config seed NUMBER - set the seed, deterministic results! /config seed [auto|NUMBER] - set the seed, deterministic results!
/config size WIDTH HEIGHT - set size in pixels /config width NUMBER - set horizontal size in pixels
/config height NUMBER - set vertical size in pixels
/config upscaler [off/x4] - enable/disable x4 size upscaler
/config guidance NUMBER - prompt text importance /config guidance NUMBER - prompt text importance
/config strength NUMBER - importance of the input image for img2img
''' '''
UNKNOWN_CMD_TEXT = 'Unknown command! Try sending \"/help\"' UNKNOWN_CMD_TEXT = 'Unknown command! Try sending \"/help\"'
DONATION_INFO = '0xf95335682DF281FFaB7E104EB87B69625d9622B6\ngoal: 25/650usd' DONATION_INFO = '0xf95335682DF281FFaB7E104EB87B69625d9622B6\ngoal: 0.0465/1.0000 ETH'
COOL_WORDS = [ COOL_WORDS = [
'cyberpunk', 'cyberpunk',
@ -76,6 +91,28 @@ COOL_WORDS = [
'michelangelo' 'michelangelo'
] ]
CLEAN_COOL_WORDS = [
'cyberpunk',
'soviet propaganda poster',
'rastafari',
'cannabis',
'art deco',
'H R Giger Necronom IV',
'dimethyltryptamine',
'lysergic',
'psilocybin',
'trippy',
'lucy in the sky with diamonds',
'fractal',
'da vinci',
'pencil illustration',
'blueprint',
'internal diagram',
'baroque',
'the last judgment',
'michelangelo'
]
HELP_TOPICS = { HELP_TOPICS = {
'step': ''' 'step': '''
Diffusion models are iterative processes a repeated cycle that starts with a\ Diffusion models are iterative processes a repeated cycle that starts with a\
@ -91,7 +128,16 @@ quality.
'guidance': ''' 'guidance': '''
The guidance scale is a parameter that controls how much the image generation\ The guidance scale is a parameter that controls how much the image generation\
process follows the text prompt. The higher the value, the more image sticks\ process follows the text prompt. The higher the value, the more image sticks\
to a given text input. to a given text input. Value range 0 to 20. Recommended range: 4.5-7.5.
''',
'strength': '''
Noise is added to the image you use as an init image for img2img, and then the\
diffusion process continues according to the prompt. The amount of noise added\
depends on the \"Strength of img2img\" parameter, which ranges from 0 to 1,\
where 0 adds no noise at all and you will get the exact image you added, and\
1 completely replaces the image with noise and almost acts as if you used\
normal txt2img instead of img2img.
''' '''
} }
@ -103,32 +149,26 @@ MP_ENABLED_ROLES = ['god']
MIN_STEP = 1 MIN_STEP = 1
MAX_STEP = 100 MAX_STEP = 100
MAX_WIDTH = 512 MAX_WIDTH = 1024
MAX_HEIGHT = 656 MAX_HEIGHT = 1024
MAX_GUIDANCE = 20 MAX_GUIDANCE = 20
DEFAULT_SEED = None DEFAULT_SEED = None
DEFAULT_WIDTH = 512 DEFAULT_WIDTH = 1024
DEFAULT_HEIGHT = 512 DEFAULT_HEIGHT = 1024
DEFAULT_GUIDANCE = 7.5 DEFAULT_GUIDANCE = 7.5
DEFAULT_STRENGTH = 0.5 DEFAULT_STRENGTH = 0.5
DEFAULT_STEP = 35 DEFAULT_STEP = 28
DEFAULT_CREDITS = 10 DEFAULT_CREDITS = 10
DEFAULT_ALGO = 'midj' DEFAULT_MODEL = list(MODELS.keys())[-1]
DEFAULT_ROLE = 'pleb' DEFAULT_ROLE = 'pleb'
DEFAULT_UPSCALER = None DEFAULT_UPSCALER = None
DEFAULT_CERTS_DIR = 'certs' DEFAULT_CONFIG_PATH = 'skynet.toml'
DEFAULT_CERT_WHITELIST_DIR = 'whitelist'
DEFAULT_CERT_SKYNET_PUB = 'brain.cert'
DEFAULT_CERT_SKYNET_PRIV = 'brain.key'
DEFAULT_CERT_DGPU = 'dgpu.key'
DEFAULT_RPC_ADDR = 'tcp://127.0.0.1:41000' DEFAULT_INITAL_MODELS = [
'stabilityai/stable-diffusion-xl-base-1.0'
DEFAULT_DGPU_ADDR = 'tcp://127.0.0.1:41069' ]
DEFAULT_DGPU_MAX_TASKS = 2
DEFAULT_INITAL_ALGOS = ['midj', 'stable', 'ink']
DATE_FORMAT = '%B the %dth %Y, %H:%M:%S' DATE_FORMAT = '%B the %dth %Y, %H:%M:%S'
@ -142,3 +182,14 @@ CONFIG_ATTRS = [
'strength', 'strength',
'upscaler' 'upscaler'
] ]
DEFAULT_EXPLORER_DOMAIN = 'explorer.skygpu.net'
DEFAULT_IPFS_DOMAIN = 'ipfs.skygpu.net'
DEFAULT_IPFS_REMOTE = '/ip4/169.197.140.154/tcp/4001/p2p/12D3KooWKWogLFNEcNNMKnzU7Snrnuj84RZdMBg3sLiQSQc51oEv'
DEFAULT_IPFS_LOCAL = 'http://127.0.0.1:5001'
TG_MAX_WIDTH = 1280
TG_MAX_HEIGHT = 1280
DEFAULT_SINGLE_CARD_MAP = 'cuda:0'

View File

@ -1,245 +0,0 @@
#!/usr/bin/python
import logging
from typing import Optional
from datetime import datetime
from contextlib import asynccontextmanager as acm
import trio
import triopg
import trio_asyncio
from asyncpg.exceptions import UndefinedColumnError
from .constants import *
DB_INIT_SQL = '''
CREATE SCHEMA IF NOT EXISTS skynet;
CREATE TABLE IF NOT EXISTS skynet.user(
id SERIAL PRIMARY KEY NOT NULL,
tg_id BIGINT,
wp_id VARCHAR(128),
mx_id VARCHAR(128),
ig_id VARCHAR(128),
generated INT NOT NULL,
joined DATE NOT NULL,
last_prompt TEXT,
role VARCHAR(128) NOT NULL
);
ALTER TABLE skynet.user
ADD CONSTRAINT tg_unique
UNIQUE (tg_id);
ALTER TABLE skynet.user
ADD CONSTRAINT wp_unique
UNIQUE (wp_id);
ALTER TABLE skynet.user
ADD CONSTRAINT mx_unique
UNIQUE (mx_id);
ALTER TABLE skynet.user
ADD CONSTRAINT ig_unique
UNIQUE (ig_id);
CREATE TABLE IF NOT EXISTS skynet.user_config(
id SERIAL NOT NULL,
algo VARCHAR(128) NOT NULL,
step INT NOT NULL,
width INT NOT NULL,
height INT NOT NULL,
seed BIGINT,
guidance REAL NOT NULL,
strength REAL NOT NULL,
upscaler VARCHAR(128)
);
ALTER TABLE skynet.user_config
ADD FOREIGN KEY(id)
REFERENCES skynet.user(id);
'''
def try_decode_uid(uid: str):
try:
return None, int(uid)
except ValueError:
...
try:
proto, uid = uid.split('+')
uid = int(uid)
return proto, uid
except ValueError:
logging.warning(f'got non chat proto uid?: {uid}')
return None, None
@acm
async def open_database_connection(
db_user: str = DB_USER,
db_pass: str = DB_PASS,
db_host: str = DB_HOST,
db_name: str = DB_NAME
):
async with trio_asyncio.open_loop() as loop:
async with triopg.create_pool(
dsn=f'postgres://{db_user}:{db_pass}@{db_host}/{db_name}'
) as pool_conn:
async with pool_conn.acquire() as conn:
res = await conn.execute(f'''
select distinct table_schema
from information_schema.tables
where table_schema = \'{db_name}\'
''')
if '1' in res:
logging.info('schema already in db, skipping init')
else:
await conn.execute(DB_INIT_SQL)
yield pool_conn
async def get_user(conn, uid: str):
if isinstance(uid, str):
proto, uid = try_decode_uid(uid)
match proto:
case 'tg':
stmt = await conn.prepare(
'SELECT * FROM skynet.user WHERE tg_id = $1')
user = await stmt.fetchval(uid)
case _:
user = None
return user
else: # asumme is our uid
stmt = await conn.prepare(
'SELECT * FROM skynet.user WHERE id = $1')
return await stmt.fetchval(uid)
async def get_user_config(conn, user: int):
stmt = await conn.prepare(
'SELECT * FROM skynet.user_config WHERE id = $1')
return (await stmt.fetch(user))[0]
async def get_last_prompt_of(conn, user: int):
stmt = await conn.prepare(
'SELECT last_prompt FROM skynet.user WHERE id = $1')
return await stmt.fetchval(user)
async def new_user(conn, uid: str):
if await get_user(conn, uid):
raise ValueError('User already present on db')
logging.info(f'new user! {uid}')
date = datetime.utcnow()
proto, pid = try_decode_uid(uid)
async with conn.transaction():
match proto:
case 'tg':
tg_id = pid
stmt = await conn.prepare('''
INSERT INTO skynet.user(
tg_id, generated, joined, last_prompt, role)
VALUES($1, $2, $3, $4, $5)
ON CONFLICT DO NOTHING
''')
await stmt.fetch(
tg_id, 0, date, None, DEFAULT_ROLE
)
new_uid = await get_user(conn, uid)
case None:
stmt = await conn.prepare('''
INSERT INTO skynet.user(
id, generated, joined, last_prompt, role)
VALUES($1, $2, $3, $4, $5)
ON CONFLICT DO NOTHING
''')
await stmt.fetch(
pid, 0, date, None, DEFAULT_ROLE
)
new_uid = pid
stmt = await conn.prepare('''
INSERT INTO skynet.user_config(
id, algo, step, width, height, seed, guidance, strength, upscaler)
VALUES($1, $2, $3, $4, $5, $6, $7, $8, $9)
ON CONFLICT DO NOTHING
''')
user = await stmt.fetch(
new_uid,
DEFAULT_ALGO,
DEFAULT_STEP,
DEFAULT_WIDTH,
DEFAULT_HEIGHT,
DEFAULT_SEED,
DEFAULT_GUIDANCE,
DEFAULT_STRENGTH,
DEFAULT_UPSCALER
)
return new_uid
async def get_or_create_user(conn, uid: str):
user = await get_user(conn, uid)
if not user:
user = await new_user(conn, uid)
return user
async def update_user(conn, user: int, attr: str, val):
stmt = await conn.prepare(f'''
UPDATE skynet.user
SET {attr} = $2
WHERE id = $1
''')
await stmt.fetch(user, val)
async def update_user_config(conn, user: int, attr: str, val):
stmt = await conn.prepare(f'''
UPDATE skynet.user_config
SET {attr} = $2
WHERE id = $1
''')
await stmt.fetch(user, val)
async def get_user_stats(conn, user: int):
stmt = await conn.prepare('''
SELECT generated,joined,role FROM skynet.user
WHERE id = $1
''')
records = await stmt.fetch(user)
assert len(records) == 1
record = records[0]
return record
async def update_user_stats(
conn,
user: int,
last_prompt: Optional[str] = None
):
stmt = await conn.prepare('''
UPDATE skynet.user
SET generated = generated + 1
WHERE id = $1
''')
await stmt.fetch(user)
if last_prompt:
await update_user(conn, user, 'last_prompt', last_prompt)

View File

@ -0,0 +1,3 @@
#!/usr/bin/python
from .functions import open_new_database, open_database_connection

View File

@ -0,0 +1,368 @@
#!/usr/bin/python
import time
import random
import string
import logging
import importlib
from datetime import datetime
from contextlib import contextmanager as cm
from contextlib import asynccontextmanager as acm
import docker
import asyncpg
import psycopg2
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT
from ..constants import *
DB_INIT_SQL = '''
CREATE SCHEMA IF NOT EXISTS skynet;
CREATE TABLE IF NOT EXISTS skynet.user(
id BIGSERIAL PRIMARY KEY NOT NULL,
generated INT NOT NULL,
joined TIMESTAMP NOT NULL,
last_method TEXT,
last_prompt TEXT,
last_file TEXT,
last_binary TEXT,
role VARCHAR(128) NOT NULL
);
CREATE TABLE IF NOT EXISTS skynet.user_config(
id BIGSERIAL NOT NULL,
model VARCHAR(512) NOT NULL,
step INT NOT NULL,
width INT NOT NULL,
height INT NOT NULL,
seed NUMERIC,
guidance DECIMAL NOT NULL,
strength DECIMAL NOT NULL,
upscaler VARCHAR(128),
autoconf BOOLEAN DEFAULT TRUE,
CONSTRAINT fk_config
FOREIGN KEY(id)
REFERENCES skynet.user(id)
);
CREATE TABLE IF NOT EXISTS skynet.user_requests(
id BIGSERIAL NOT NULL,
user_id BIGSERIAL NOT NULL,
sent TIMESTAMP NOT NULL,
status TEXT NOT NULL,
status_msg BIGSERIAL PRIMARY KEY NOT NULL,
CONSTRAINT fk_user_req
FOREIGN KEY(user_id)
REFERENCES skynet.user(id)
);
'''
def try_decode_uid(uid: str):
try:
return None, int(uid)
except ValueError:
...
try:
proto, uid = uid.split('+')
uid = int(uid)
return proto, uid
except ValueError:
logging.warning(f'got non chat proto uid?: {uid}')
return None, None
@cm
def open_new_database(cleanup=True):
rpassword = ''.join(
random.choice(string.ascii_lowercase)
for i in range(12))
password = ''.join(
random.choice(string.ascii_lowercase)
for i in range(12))
dclient = docker.from_env()
container = dclient.containers.run(
'postgres',
name='skynet-test-postgres',
ports={'5432/tcp': None},
environment={
'POSTGRES_PASSWORD': rpassword
},
detach=True,
# could remove this if we want the dockers to be persistent.
# remove=True
)
try:
for log in container.logs(stream=True):
log = log.decode().rstrip()
logging.info(log)
if ('database system is ready to accept connections' in log or
'database system is shut down' in log):
break
# ip = container.attrs['NetworkSettings']['IPAddress']
container.reload()
port = container.ports['5432/tcp'][0]['HostPort']
host = f'localhost:{port}'
# why print the system is ready to accept connections when its not
# postgres? wtf
time.sleep(1)
logging.info('creating skynet db...')
conn = psycopg2.connect(
user='postgres',
password=rpassword,
host='localhost',
port=port
)
logging.info('connected...')
conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)
with conn.cursor() as cursor:
cursor.execute(
f'CREATE USER skynet WITH PASSWORD \'{password}\'')
cursor.execute(
f'CREATE DATABASE skynet')
cursor.execute(
f'GRANT ALL PRIVILEGES ON DATABASE skynet TO skynet')
conn.close()
logging.info('done.')
yield container, password, host
finally:
if container and cleanup:
container.stop()
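# editor's sketch (not part of the diff): how the test helper above is meant
# to be used; a local docker daemon must be running.
# with open_new_database() as (container, password, host):
#     ...  # connect as skynet:password@host; container is stopped on exit if cleanup=True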
@acm
async def open_database_connection(
db_user: str = 'skynet',
db_pass: str = 'password',
db_host: str = 'localhost:5432',
db_name: str = 'skynet'
):
db = importlib.import_module('skynet.db.functions')
pool = await asyncpg.create_pool(
dsn=f'postgres://{db_user}:{db_pass}@{db_host}/{db_name}')
async with pool.acquire() as conn:
res = await conn.execute(f'''
select distinct table_schema
from information_schema.tables
where table_schema = \'{db_name}\'
''')
if '1' in res:
logging.info('schema already in db, skipping init')
else:
await conn.execute(DB_INIT_SQL)
col_check = await conn.fetch(f'''
select column_name
from information_schema.columns
where table_name = 'user_config' and column_name = 'autoconf';
''')
if not col_check:
await conn.execute('alter table skynet.user_config add column autoconf boolean default true;')
async def _db_call(method: str, *args, **kwargs):
method = getattr(db, method)
async with pool.acquire() as conn:
return await method(conn, *args, **kwargs)
yield _db_call
async def get_user_config(conn, user: int):
stmt = await conn.prepare(
'SELECT * FROM skynet.user_config WHERE id = $1')
conf = await stmt.fetch(user)
if len(conf) == 1:
return conf[0]
else:
return None
async def get_user(conn, uid: int):
return await get_user_config(conn, uid)
async def get_last_method_of(conn, user: int):
stmt = await conn.prepare(
'SELECT last_method FROM skynet.user WHERE id = $1')
return await stmt.fetchval(user)
async def get_last_prompt_of(conn, user: int):
stmt = await conn.prepare(
'SELECT last_prompt FROM skynet.user WHERE id = $1')
return await stmt.fetchval(user)
async def get_last_file_of(conn, user: int):
stmt = await conn.prepare(
'SELECT last_file FROM skynet.user WHERE id = $1')
return await stmt.fetchval(user)
async def get_last_binary_of(conn, user: int):
stmt = await conn.prepare(
'SELECT last_binary FROM skynet.user WHERE id = $1')
return await stmt.fetchval(user)
async def get_user_request(conn, mid: int):
stmt = await conn.prepare(
'SELECT * FROM skynet.user_requests WHERE id = $1')
return await stmt.fetch(mid)
async def get_user_request_by_sid(conn, sid: int):
stmt = await conn.prepare(
'SELECT * FROM skynet.user_requests WHERE status_msg = $1')
return (await stmt.fetch(sid))[0]
async def new_user_request(
conn, user: int, mid: int,
status_msg: int,
status: str = 'started processing request...'
):
date = datetime.utcnow()
async with conn.transaction():
stmt = await conn.prepare('''
INSERT INTO skynet.user_requests(
id, user_id, sent, status, status_msg
)
VALUES($1, $2, $3, $4, $5)
''')
await stmt.fetch(mid, user, date, status, status_msg)
async def update_user_request(
conn, mid: int, status: str
):
stmt = await conn.prepare(f'''
UPDATE skynet.user_requests
SET status = $2
WHERE id = $1
''')
await stmt.fetch(mid, status)
async def update_user_request_by_sid(
conn, sid: int, status: str
):
stmt = await conn.prepare(f'''
UPDATE skynet.user_requests
SET status = $2
WHERE status_msg = $1
''')
await stmt.fetch(sid, status)
async def new_user(conn, uid: int):
if await get_user(conn, uid):
raise ValueError('User already present on db')
logging.info(f'new user! {uid}')
date = datetime.utcnow()
async with conn.transaction():
stmt = await conn.prepare('''
INSERT INTO skynet.user(
id, generated, joined,
last_method, last_prompt, last_file, last_binary,
role
)
VALUES($1, $2, $3, $4, $5, $6, $7, $8)
''')
await stmt.fetch(
uid, 0, date, 'txt2img', None, None, None, DEFAULT_ROLE
)
stmt = await conn.prepare('''
INSERT INTO skynet.user_config(
id, model, step, width, height, guidance, strength, upscaler)
VALUES($1, $2, $3, $4, $5, $6, $7, $8)
''')
resp = await stmt.fetch(
uid,
DEFAULT_MODEL,
DEFAULT_STEP,
DEFAULT_WIDTH,
DEFAULT_HEIGHT,
DEFAULT_GUIDANCE,
DEFAULT_STRENGTH,
DEFAULT_UPSCALER
)
async def get_or_create_user(conn, uid: str):
user = await get_user(conn, uid)
if not user:
await new_user(conn, uid)
user = await get_user(conn, uid)
return user
async def update_user(conn, user: int, attr: str, val):
stmt = await conn.prepare(f'''
UPDATE skynet.user
SET {attr} = $2
WHERE id = $1
''')
await stmt.fetch(user, val)
async def update_user_config(conn, user: int, attr: str, val):
stmt = await conn.prepare(f'''
UPDATE skynet.user_config
SET {attr} = $2
WHERE id = $1
''')
await stmt.fetch(user, val)
async def get_user_stats(conn, user: int):
stmt = await conn.prepare('''
SELECT generated,joined,role FROM skynet.user
WHERE id = $1
''')
records = await stmt.fetch(user)
assert len(records) == 1
record = records[0]
return record
async def increment_generated(conn, user: int):
stmt = await conn.prepare('''
UPDATE skynet.user
SET generated = generated + 1
WHERE id = $1
''')
await stmt.fetch(user)
async def update_user_stats(
conn,
user: int,
method: str,
last_prompt: str | None = None,
last_file: str | None = None,
last_binary: str | None = None
):
await update_user(conn, user, 'last_method', method)
if last_prompt:
await update_user(conn, user, 'last_prompt', last_prompt)
if last_file:
await update_user(conn, user, 'last_file', last_file)
if last_binary:
await update_user(conn, user, 'last_binary', last_binary)
logging.info((method, last_prompt, last_binary))
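# editor's sketch (not part of the diff): minimal driver for the helpers in
# this module (importable as skynet.db.functions, per the importlib call
# above); the connection values are placeholders for a local dev database.
import asyncio
from skynet.db.functions import open_database_connection

async def _example():
    async with open_database_connection(
        db_user='skynet', db_pass='password', db_host='localhost:5432'
    ) as db_call:
        user = await db_call('get_or_create_user', 123456)
        await db_call('update_user_config', 123456, 'step', 28)
        print(user)

if __name__ == '__main__':
    asyncio.run(_example())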

View File

@ -1,381 +0,0 @@
#!/usr/bin/python
import gc
import io
import trio
import json
import uuid
import time
import zlib
import random
import logging
import traceback
from PIL import Image
from typing import List, Optional
from pathlib import Path
from contextlib import ExitStack
import pynng
import torch
from pynng import TLSConfig
from OpenSSL.crypto import (
load_privatekey,
load_certificate,
FILETYPE_PEM
)
from diffusers import (
StableDiffusionPipeline,
StableDiffusionImg2ImgPipeline,
EulerAncestralDiscreteScheduler
)
from realesrgan import RealESRGANer
from basicsr.archs.rrdbnet_arch import RRDBNet
from diffusers.models import UNet2DConditionModel
from .utils import (
pipeline_for,
convert_from_cv2_to_image, convert_from_image_to_cv2
)
from .protobuf import *
from .frontend import open_skynet_rpc
from .constants import *
def init_upscaler(model_path: str = 'weights/RealESRGAN_x4plus.pth'):
return RealESRGANer(
scale=4,
model_path=model_path,
dni_weight=None,
model=RRDBNet(
num_in_ch=3,
num_out_ch=3,
num_feat=64,
num_block=23,
num_grow_ch=32,
scale=4
),
half=True
)
class DGPUComputeError(BaseException):
...
class ReconnectingBus:
def __init__(self, address: str, tls_config: Optional[TLSConfig]):
self.address = address
self.tls_config = tls_config
self._stack = ExitStack()
self._sock = None
self._closed = True
def connect(self):
self._sock = self._stack.enter_context(
pynng.Bus0(recv_max_size=0))
self._sock.tls_config = self.tls_config
self._sock.dial(self.address)
self._closed = False
async def arecv(self):
while True:
try:
return await self._sock.arecv()
except pynng.exceptions.Closed:
if self._closed:
raise
async def asend(self, msg):
while True:
try:
return await self._sock.asend(msg)
except pynng.exceptions.Closed:
if self._closed:
raise
def close(self):
self._stack.close()
self._stack = ExitStack()
self._closed = True
def reconnect(self):
self.close()
self.connect()
async def open_dgpu_node(
cert_name: str,
unique_id: str,
key_name: Optional[str],
rpc_address: str = DEFAULT_RPC_ADDR,
dgpu_address: str = DEFAULT_DGPU_ADDR,
initial_algos: Optional[List[str]] = None,
security: bool = True
):
logging.basicConfig(level=logging.INFO)
logging.info(f'starting dgpu node!')
name = uuid.uuid4()
logging.info(f'loading models...')
upscaler = init_upscaler()
initial_algos = (
initial_algos
if initial_algos else DEFAULT_INITAL_ALGOS
)
models = {}
for algo in initial_algos:
models[algo] = {
'pipe': pipeline_for(algo),
'generated': 0
}
logging.info(f'loaded {algo}.')
logging.info('memory summary:')
logging.info('\n' + torch.cuda.memory_summary())
async def gpu_compute_one(ireq: DiffusionParameters, image=None):
algo = ireq.algo + 'img' if image else ireq.algo
if algo not in models:
least_used = list(models.keys())[0]
for model in models:
if models[least_used]['generated'] > models[model]['generated']:
least_used = model
del models[least_used]
gc.collect()
models[algo] = {
'pipe': pipeline_for(ireq.algo, image=True if image else False),
'generated': 0
}
_params = {}
if ireq.image:
_params['image'] = image
_params['strength'] = ireq.strength
else:
_params['width'] = int(ireq.width)
_params['height'] = int(ireq.height)
try:
image = models[algo]['pipe'](
ireq.prompt,
**_params,
guidance_scale=ireq.guidance,
num_inference_steps=int(ireq.step),
generator=torch.Generator("cuda").manual_seed(ireq.seed)
).images[0]
if ireq.upscaler == 'x4':
logging.info(f'size: {len(image.tobytes())}')
logging.info('performing upscale...')
input_img = image.convert('RGB')
up_img, _ = upscaler.enhance(
convert_from_image_to_cv2(input_img), outscale=4)
image = convert_from_cv2_to_image(up_img)
logging.info('done')
img_byte_arr = io.BytesIO()
image.save(img_byte_arr, format='PNG')
raw_img = img_byte_arr.getvalue()
logging.info(f'final img size {len(raw_img)} bytes.')
return raw_img
except BaseException as e:
logging.error(e)
raise DGPUComputeError(str(e))
finally:
torch.cuda.empty_cache()
async with (
open_skynet_rpc(
unique_id,
rpc_address=rpc_address,
security=security,
cert_name=cert_name,
key_name=key_name
) as rpc_call,
trio.open_nursery() as n
):
tls_config = None
if security:
# load tls certs
if not key_name:
key_name = cert_name
certs_dir = Path(DEFAULT_CERTS_DIR).resolve()
skynet_cert_path = certs_dir / 'brain.cert'
tls_cert_path = certs_dir / f'{cert_name}.cert'
tls_key_path = certs_dir / f'{key_name}.key'
cert_name = tls_cert_path.stem
skynet_cert_data = skynet_cert_path.read_text()
skynet_cert = load_certificate(FILETYPE_PEM, skynet_cert_data)
tls_cert_data = tls_cert_path.read_text()
tls_key_data = tls_key_path.read_text()
tls_key = load_privatekey(FILETYPE_PEM, tls_key_data)
logging.info(f'skynet cert: {skynet_cert_path}')
logging.info(f'dgpu cert: {tls_cert_path}')
logging.info(f'dgpu key: {tls_key_path}')
dgpu_address = 'tls+' + dgpu_address
tls_config = TLSConfig(
TLSConfig.MODE_CLIENT,
own_key_string=tls_key_data,
own_cert_string=tls_cert_data,
ca_string=skynet_cert_data)
logging.info(f'connecting to {dgpu_address}')
dgpu_bus = ReconnectingBus(dgpu_address, tls_config)
dgpu_bus.connect()
last_msg = time.time()
async def connection_refresher(refresh_time: int = 120):
nonlocal last_msg
while True:
now = time.time()
last_msg_time_delta = now - last_msg
logging.info(f'time since last msg: {last_msg_time_delta}')
if last_msg_time_delta > refresh_time:
dgpu_bus.reconnect()
logging.info('reconnected!')
last_msg = now
await trio.sleep(refresh_time)
n.start_soon(connection_refresher)
res = await rpc_call('dgpu_online')
assert 'ok' in res.result
try:
while True:
msg = await dgpu_bus.arecv()
img = None
if b'BINEXT' in msg:
header, msg, img_raw = msg.split(b'%$%$')
logging.info(f'got img attachment of size {len(img_raw)}')
logging.info(img_raw[:10])
raw_img = zlib.decompress(img_raw)
logging.info(raw_img[:10])
img = Image.open(io.BytesIO(raw_img))
w, h = img.size
logging.info(f'user sent img of size {img.size}')
if w > 512 or h > 512:
img.thumbnail((512, 512))
logging.info(f'resized it to {img.size}')
req = DGPUBusMessage()
req.ParseFromString(msg)
last_msg = time.time()
if req.method == 'heartbeat':
rep = DGPUBusMessage(
rid=req.rid,
nid=unique_id,
method=req.method
)
rep.params.update({'time': int(time.time() * 1000)})
if security:
rep.auth.cert = cert_name
rep.auth.sig = sign_protobuf_msg(rep, tls_key)
await dgpu_bus.asend(rep.SerializeToString())
logging.info('heartbeat reply')
continue
if req.nid != unique_id:
logging.info(
f'witnessed msg {req.rid}, node involved: {req.nid}')
continue
if security:
verify_protobuf_msg(req, skynet_cert)
ack_resp = DGPUBusMessage(
rid=req.rid,
nid=req.nid
)
ack_resp.params.update({'ack': {}})
if security:
ack_resp.auth.cert = cert_name
ack_resp.auth.sig = sign_protobuf_msg(ack_resp, tls_key)
# send ack
await dgpu_bus.asend(ack_resp.SerializeToString())
logging.info(f'sent ack, processing {req.rid}...')
try:
img_req = DiffusionParameters(**req.params)
if not img_req.seed:
img_req.seed = random.randint(0, 2 ** 64)
img = await gpu_compute_one(img_req, image=img)
img_resp = DGPUBusMessage(
rid=req.rid,
nid=req.nid,
method='binary-reply'
)
img_resp.params.update({
'len': len(img),
'meta': img_req.to_dict()
})
except DGPUComputeError as e:
traceback.print_exception(type(e), e, e.__traceback__)
img_resp = DGPUBusMessage(
rid=req.rid,
nid=req.nid
)
img_resp.params.update({'error': str(e)})
if security:
img_resp.auth.cert = cert_name
img_resp.auth.sig = sign_protobuf_msg(img_resp, tls_key)
# send final image
logging.info('sending img back...')
raw_msg = img_resp.SerializeToString()
await dgpu_bus.asend(raw_msg)
logging.info(f'sent {len(raw_msg)} bytes.')
if img_resp.method == 'binary-reply':
await dgpu_bus.asend(zlib.compress(img))
logging.info(f'sent {len(img)} bytes.')
except KeyboardInterrupt:
logging.info('interrupt caught, stopping...')
n.cancel_scope.cancel()
dgpu_bus.close()
finally:
res = await rpc_call('dgpu_offline')
assert 'ok' in res.result

View File

@ -0,0 +1,30 @@
#!/usr/bin/python
import trio
from hypercorn.config import Config
from hypercorn.trio import serve
from skynet.dgpu.compute import SkynetMM
from skynet.dgpu.daemon import SkynetDGPUDaemon
from skynet.dgpu.network import SkynetGPUConnector
async def open_dgpu_node(config: dict):
conn = SkynetGPUConnector(config)
mm = SkynetMM(config)
daemon = SkynetDGPUDaemon(mm, conn, config)
api = None
if 'api_bind' in config:
api_conf = Config()
api_conf.bind = [config['api_bind']]
api = await daemon.generate_api()
async with trio.open_nursery() as n:
n.start_soon(daemon.snap_updater_task)
if api:
n.start_soon(serve, api, api_conf)
await daemon.serve_forever()
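# editor's sketch (not part of the diff): the shape of the config dict this
# entry point expects, inferred from the keys read in compute.py, daemon.py
# and network.py; every value below is a placeholder.
import trio
from skynet.dgpu import open_dgpu_node  # assuming this file is skynet/dgpu/__init__.py

example_config = {
    'account': 'testworkerX',
    'permission': 'active',
    'key': '<private key>',
    'node_url': 'https://testnet.telos.net',
    'hyperion_url': 'https://testnet.telos.net',
    'ipfs_url': 'http://127.0.0.1:5001',
    'auto_withdraw': True,
    'non_compete': [],
    'api_bind': '127.0.0.1:9999',
}

if __name__ == '__main__':
    trio.run(open_dgpu_node, example_config)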

View File

@ -0,0 +1,210 @@
#!/usr/bin/python
# Skynet Memory Manager
import gc
import logging
from hashlib import sha256
import zipfile
from PIL import Image
from diffusers import DiffusionPipeline
import trio
import torch
from skynet.constants import DEFAULT_INITAL_MODELS, MODELS
from skynet.dgpu.errors import DGPUComputeError, DGPUInferenceCancelled
from skynet.utils import crop_image, convert_from_cv2_to_image, convert_from_image_to_cv2, convert_from_img_to_bytes, init_upscaler, pipeline_for
def prepare_params_for_diffuse(
params: dict,
input_type: str,
binary = None
):
_params = {}
if binary is not None:
match input_type:
case 'png':
image = crop_image(
binary, params['width'], params['height'])
_params['image'] = image
_params['strength'] = float(params['strength'])
case 'none':
...
case _:
raise DGPUComputeError(f'Unknown input_type {input_type}')
else:
_params['width'] = int(params['width'])
_params['height'] = int(params['height'])
return (
params['prompt'],
float(params['guidance']),
int(params['step']),
torch.manual_seed(int(params['seed'])),
params['upscaler'] if 'upscaler' in params else None,
_params
)
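# editor's sketch (not part of the diff): roughly what the tuple above unpacks
# to for a plain txt2img request (no input image); the params keys mirror the
# user_config columns and the values are placeholders.
# prompt, guidance, step, seed, upscaler, extra = prepare_params_for_diffuse(
#     {'prompt': 'a red tractor', 'width': 512, 'height': 512, 'guidance': 7.5,
#      'step': 28, 'seed': 42, 'strength': 0.5, 'upscaler': None},
#     input_type='none', binary=None)
# extra == {'width': 512, 'height': 512}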
class SkynetMM:
def __init__(self, config: dict):
self.upscaler = init_upscaler()
self.initial_models = (
config['initial_models']
if 'initial_models' in config else DEFAULT_INITAL_MODELS
)
self.cache_dir = None
if 'hf_home' in config:
self.cache_dir = config['hf_home']
self._models = {}
for model in self.initial_models:
self.load_model(model, False, force=True)
def log_debug_info(self):
logging.info('memory summary:')
logging.info('\n' + torch.cuda.memory_summary())
def is_model_loaded(self, model_name: str, image: bool):
for model_key, model_data in self._models.items():
if (model_key == model_name and
model_data['image'] == image):
return True
return False
def load_model(
self,
model_name: str,
image: bool,
force=False
):
logging.info(f'loading model {model_name}...')
if force or len(self._models.keys()) == 0:
pipe = pipeline_for(
model_name, image=image, cache_dir=self.cache_dir)
self._models[model_name] = {
'pipe': pipe,
'generated': 0,
'image': image
}
else:
least_used = list(self._models.keys())[0]
for model in self._models:
if self._models[
least_used]['generated'] > self._models[model]['generated']:
least_used = model
del self._models[least_used]
logging.info(f'swapping model {least_used} for {model_name}...')
gc.collect()
torch.cuda.empty_cache()
pipe = pipeline_for(
model_name, image=image, cache_dir=self.cache_dir)
self._models[model_name] = {
'pipe': pipe,
'generated': 0,
'image': image
}
logging.info(f'loaded model {model_name}')
return pipe
def get_model(self, model_name: str, image: bool) -> DiffusionPipeline:
if model_name not in MODELS:
raise DGPUComputeError(f'Unknown model {model_name}')
if not self.is_model_loaded(model_name, image):
pipe = self.load_model(model_name, image=image)
else:
pipe = self._models[model_name]['pipe']
return pipe
def compute_one(
self,
request_id: int,
method: str,
params: dict,
input_type: str = 'png',
binary: bytes | None = None
):
def maybe_cancel_work(step, *args, **kwargs):
if self._should_cancel:
should_raise = trio.from_thread.run(self._should_cancel, request_id)
if should_raise:
logging.warning(f'cancelling work at step {step}')
raise DGPUInferenceCancelled()
maybe_cancel_work(0)
output_type = 'png'
if 'output_type' in params:
output_type = params['output_type']
output = None
output_hash = None
try:
match method:
case 'diffuse':
arguments = prepare_params_for_diffuse(
params, input_type, binary=binary)
prompt, guidance, step, seed, upscaler, extra_params = arguments
model = self.get_model(params['model'], 'image' in extra_params)
output = model(
prompt,
guidance_scale=guidance,
num_inference_steps=step,
generator=seed,
callback=maybe_cancel_work,
callback_steps=1,
**extra_params
).images[0]
output_binary = b''
match output_type:
case 'png':
if upscaler == 'x4':
input_img = output.convert('RGB')
up_img, _ = self.upscaler.enhance(
convert_from_image_to_cv2(input_img), outscale=4)
output = convert_from_cv2_to_image(up_img)
output_binary = convert_from_img_to_bytes(output)
case _:
raise DGPUComputeError(f'Unsupported output type: {output_type}')
output_hash = sha256(output_binary).hexdigest()
case _:
raise DGPUComputeError('Unsupported compute method')
except BaseException as e:
logging.error(e)
raise DGPUComputeError(str(e))
finally:
torch.cuda.empty_cache()
return output_hash, output

View File

@ -0,0 +1,226 @@
#!/usr/bin/python
import json
import random
import logging
import time
import traceback
from hashlib import sha256
from datetime import datetime
from functools import partial
import trio
from quart import jsonify
from quart_trio import QuartTrio as Quart
from skynet.constants import MODELS, VERSION
from skynet.dgpu.errors import *
from skynet.dgpu.compute import SkynetMM
from skynet.dgpu.network import SkynetGPUConnector
def convert_reward_to_int(reward_str):
int_part, decimal_part = (
reward_str.split('.')[0],
reward_str.split('.')[1].split(' ')[0]
)
return int(int_part + decimal_part)
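# editor's note (illustrative): convert_reward_to_int('20.0000 GPU') -> 200000,
# i.e. the asset string collapsed into an integer so the queue can be sorted by reward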
class SkynetDGPUDaemon:
def __init__(
self,
mm: SkynetMM,
conn: SkynetGPUConnector,
config: dict
):
self.mm = mm
self.conn = conn
self.auto_withdraw = (
config['auto_withdraw']
if 'auto_withdraw' in config else False
)
self.account = config['account']
self.non_compete = set()
if 'non_compete' in config:
self.non_compete = set(config['non_compete'])
self.model_whitelist = set()
if 'model_whitelist' in config:
self.model_whitelist = set(config['model_whitelist'])
self.model_blacklist = set()
if 'model_blacklist' in config:
self.model_blacklist = set(config['model_blacklist'])
self.backend = 'sync-on-thread'
if 'backend' in config:
self.backend = config['backend']
self._snap = {
'queue': [],
'requests': {},
'my_results': []
}
self._benchmark = []
self._last_benchmark = None
self._last_generation_ts = None
def _get_benchmark_speed(self) -> float:
if not self._last_benchmark:
return 0
start = self._last_benchmark[0]
end = self._last_benchmark[-1]
elapsed = end - start
its = len(self._last_benchmark)
speed = its / elapsed
logging.info(f'{elapsed} s total its: {its}, at {speed} it/s ')
return speed
async def should_cancel_work(self, request_id: int):
self._benchmark.append(time.time())
competitors = set([
status['worker']
for status in self._snap['requests'][request_id]
if status['worker'] != self.account
])
return bool(self.non_compete & competitors)
async def snap_updater_task(self):
while True:
self._snap = await self.conn.get_full_queue_snapshot()
await trio.sleep(1)
async def generate_api(self):
app = Quart(__name__)
@app.route('/')
async def health():
return jsonify(
account=self.account,
version=VERSION,
last_generation_ts=self._last_generation_ts,
last_generation_speed=self._get_benchmark_speed()
)
return app
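# editor's note: when 'api_bind' is set in the worker config, hypercorn serves
# this app (see skynet/dgpu/__init__.py above), so `curl http://<api_bind>/`
# returns the health JSON built above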
async def serve_forever(self):
try:
while True:
if self.auto_withdraw:
await self.conn.maybe_withdraw_all()
queue = self._snap['queue']
random.shuffle(queue)
queue = sorted(
queue,
key=lambda req: convert_reward_to_int(req['reward']),
reverse=True
)
for req in queue:
rid = req['id']
# parse request
body = json.loads(req['body'])
model = body['params']['model']
# if model not known
if model not in MODELS:
logging.warning(f'Unknown model {model}')
continue
# if whitelist enabled and model not in it continue
if (len(self.model_whitelist) > 0 and
model not in self.model_whitelist):
continue
# if blacklist contains model skip
if model in self.model_blacklist:
continue
my_results = [res['id'] for res in self._snap['my_results']]
if rid not in my_results and rid in self._snap['requests']:
statuses = self._snap['requests'][rid]
if len(statuses) == 0:
binary, input_type = await self.conn.get_input_data(req['binary_data'])
hash_str = (
str(req['nonce'])
+
req['body']
+
req['binary_data']
)
logging.info(f'hashing: {hash_str}')
request_hash = sha256(hash_str.encode('utf-8')).hexdigest()
# TODO: validate request
# perform work
logging.info(f'working on {body}')
resp = await self.conn.begin_work(rid)
if 'code' in resp:
logging.info(f'probably being worked on already... skip.')
else:
try:
output_type = 'png'
if 'output_type' in body['params']:
output_type = body['params']['output_type']
output = None
output_hash = None
match self.backend:
case 'sync-on-thread':
self.mm._should_cancel = self.should_cancel_work
output_hash, output = await trio.to_thread.run_sync(
partial(
self.mm.compute_one,
rid,
body['method'], body['params'],
input_type=input_type,
binary=binary
)
)
case _:
raise DGPUComputeError(f'Unsupported backend {self.backend}')
self._last_generation_ts = datetime.now().isoformat()
self._last_benchmark = self._benchmark
self._benchmark = []
ipfs_hash = await self.conn.publish_on_ipfs(output, typ=output_type)
await self.conn.submit_work(rid, request_hash, output_hash, ipfs_hash)
except BaseException as e:
traceback.print_exc()
await self.conn.cancel_work(rid, str(e))
finally:
break
else:
logging.info(f'request {rid} already being worked on, skip...')
await trio.sleep(1)
except KeyboardInterrupt:
...

View File

@ -0,0 +1,8 @@
#!/usr/bin/python
class DGPUComputeError(BaseException):
...
class DGPUInferenceCancelled(BaseException):
...

View File

@ -0,0 +1,312 @@
#!/usr/bin/python
import io
import json
import time
import logging
from pathlib import Path
from functools import partial
import asks
import trio
import anyio
from PIL import Image, UnidentifiedImageError
from leap.cleos import CLEOS
from leap.sugar import Checksum256, Name, asset_from_str
from skynet.constants import DEFAULT_IPFS_DOMAIN
from skynet.ipfs import AsyncIPFSHTTP, get_ipfs_file
from skynet.dgpu.errors import DGPUComputeError
REQUEST_UPDATE_TIME = 3
async def failable(fn: partial, ret_fail=None):
try:
return await fn()
except (
OSError,
json.JSONDecodeError,
asks.errors.RequestTimeout,
asks.errors.BadHttpResponse,
anyio.BrokenResourceError
):
return ret_fail
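# editor's note (illustrative): every chain/ipfs call below is wrapped like
#   rows = await failable(partial(self.cleos.aget_table, ...), ret_fail=[])
# so a transient network error degrades to the fallback value instead of raising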
class SkynetGPUConnector:
def __init__(self, config: dict):
self.account = Name(config['account'])
self.permission = config['permission']
self.key = config['key']
self.node_url = config['node_url']
self.hyperion_url = config['hyperion_url']
self.cleos = CLEOS(
None, None, self.node_url, remote=self.node_url)
self.ipfs_gateway_url = None
if 'ipfs_gateway_url' in config:
self.ipfs_gateway_url = config['ipfs_gateway_url']
self.ipfs_url = config['ipfs_url']
self.ipfs_client = AsyncIPFSHTTP(self.ipfs_url)
self.ipfs_domain = DEFAULT_IPFS_DOMAIN
if 'ipfs_domain' in config:
self.ipfs_domain = config['ipfs_domain']
self._wip_requests = {}
# blockchain helpers
async def get_work_requests_last_hour(self):
logging.info('get_work_requests_last_hour')
return await failable(
partial(
self.cleos.aget_table,
'telos.gpu', 'telos.gpu', 'queue',
index_position=2,
key_type='i64',
lower_bound=int(time.time()) - 3600
), ret_fail=[])
async def get_status_by_request_id(self, request_id: int):
logging.info('get_status_by_request_id')
return await failable(
partial(
self.cleos.aget_table,
'telos.gpu', request_id, 'status'), ret_fail=[])
async def get_global_config(self):
logging.info('get_global_config')
rows = await failable(
partial(
self.cleos.aget_table,
'telos.gpu', 'telos.gpu', 'config'))
if rows:
return rows[0]
else:
return None
async def get_worker_balance(self):
logging.info('get_worker_balance')
rows = await failable(
partial(
self.cleos.aget_table,
'telos.gpu', 'telos.gpu', 'users',
index_position=1,
key_type='name',
lower_bound=self.account,
upper_bound=self.account
))
if rows:
return rows[0]['balance']
else:
return None
async def get_competitors_for_req(self, request_id: int) -> set:
competitors = [
status['worker']
for status in
(await self.get_status_by_request_id(request_id))
if status['worker'] != self.account
]
logging.info(f'competitors: {competitors}')
return set(competitors)
async def get_full_queue_snapshot(self):
snap = {
'requests': {},
'my_results': []
}
snap['queue'] = await self.get_work_requests_last_hour()
async def _run_and_save(d, key: str, fn, *args, **kwargs):
d[key] = await fn(*args, **kwargs)
async with trio.open_nursery() as n:
n.start_soon(_run_and_save, snap, 'my_results', self.find_my_results)
for req in snap['queue']:
n.start_soon(
_run_and_save, snap['requests'], req['id'], self.get_status_by_request_id, req['id'])
return snap
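# editor's note: the snapshot gathered above looks roughly like
# {'queue': [<requests from the last hour>],
#  'requests': {<request_id>: [<status rows for that request>]},
#  'my_results': [<result rows submitted by this account>]}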
async def begin_work(self, request_id: int):
logging.info('begin_work')
return await failable(
partial(
self.cleos.a_push_action,
'telos.gpu',
'workbegin',
{
'worker': self.account,
'request_id': request_id,
'max_workers': 2
},
self.account, self.key,
permission=self.permission
)
)
async def cancel_work(self, request_id: int, reason: str):
logging.info('cancel_work')
return await failable(
partial(
self.cleos.a_push_action,
'telos.gpu',
'workcancel',
{
'worker': self.account,
'request_id': request_id,
'reason': reason
},
self.account, self.key,
permission=self.permission
)
)
async def maybe_withdraw_all(self):
logging.info('maybe_withdraw_all')
balance = await self.get_worker_balance()
if not balance:
return
balance_amount = float(balance.split(' ')[0])
if balance_amount > 0:
await failable(
partial(
self.cleos.a_push_action,
'telos.gpu',
'withdraw',
{
'user': self.account,
'quantity': asset_from_str(balance)
},
self.account, self.key,
permission=self.permission
)
)
async def find_my_results(self):
logging.info('find_my_results')
return await failable(
partial(
self.cleos.aget_table,
'telos.gpu', 'telos.gpu', 'results',
index_position=4,
key_type='name',
lower_bound=self.account,
upper_bound=self.account
)
)
async def submit_work(
self,
request_id: int,
request_hash: str,
result_hash: str,
ipfs_hash: str
):
logging.info('submit_work')
return await failable(
partial(
self.cleos.a_push_action,
'telos.gpu',
'submit',
{
'worker': self.account,
'request_id': request_id,
'request_hash': Checksum256(request_hash),
'result_hash': Checksum256(result_hash),
'ipfs_hash': ipfs_hash
},
self.account, self.key,
permission=self.permission
)
)
# IPFS helpers
async def publish_on_ipfs(self, raw, typ: str = 'png'):
Path('ipfs-staging').mkdir(exist_ok=True)
logging.info('publish_on_ipfs')
target_file = ''
match typ:
case 'png':
raw: Image
target_file = 'ipfs-staging/image.png'
raw.save(target_file)
case _:
raise ValueError(f'Unsupported output type: {typ}')
if self.ipfs_gateway_url:
# check peer connections, reconnect to skynet gateway if not
gateway_id = Path(self.ipfs_gateway_url).name
peers = await self.ipfs_client.peers()
if gateway_id not in [p['Peer'] for p in peers]:
await self.ipfs_client.connect(self.ipfs_gateway_url)
file_info = await self.ipfs_client.add(Path(target_file))
file_cid = file_info['Hash']
await self.ipfs_client.pin(file_cid)
return file_cid
async def get_input_data(self, ipfs_hash: str) -> tuple[bytes, str]:
input_type = 'none'
if ipfs_hash == '':
return b'', input_type
results = {}
ipfs_link = f'https://{self.ipfs_domain}/ipfs/{ipfs_hash}'
ipfs_link_legacy = ipfs_link + '/image.png'
async with trio.open_nursery() as n:
async def get_and_set_results(link: str):
    nonlocal input_type
    res = await get_ipfs_file(link, timeout=1)
logging.info(f'got response from {link}')
if not res or res.status_code != 200:
logging.warning(f'couldn\'t get ipfs binary data at {link}!')
else:
try:
# attempt to decode as image
results[link] = Image.open(io.BytesIO(res.raw))
input_type = 'png'
n.cancel_scope.cancel()
except UnidentifiedImageError:
logging.warning(f'couldn\'t get ipfs binary data at {link}!')
n.start_soon(
get_and_set_results, ipfs_link)
n.start_soon(
get_and_set_results, ipfs_link_legacy)
input_data = None
if ipfs_link_legacy in results:
input_data = results[ipfs_link_legacy]
if ipfs_link in results:
input_data = results[ipfs_link]
if input_data is None:
raise DGPUComputeError('Couldn\'t gather input data from ipfs')
return input_data, input_type
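# editor's note: get_input_data races the new-style and legacy ipfs links and
# returns whichever image decodes first, preferring the new-style link if both succeed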

View File

@ -1,27 +1,9 @@
#!/usr/bin/python
-import json
+import random
-from typing import Union, Optional
-from pathlib import Path
-from contextlib import asynccontextmanager as acm
-import pynng
-from pynng import TLSConfig
-from OpenSSL.crypto import (
-    load_privatekey,
-    load_certificate,
-    FILETYPE_PEM
-)
-from google.protobuf.struct_pb2 import Struct
from ..constants import *
-from ..protobuf.auth import *
-from ..protobuf.skynet_pb2 import SkynetRPCRequest, SkynetRPCResponse
class ConfigRequestFormatError(BaseException):
    ...
@ -35,79 +17,13 @@ class ConfigUnknownAlgorithm(BaseException):
class ConfigUnknownUpscaler(BaseException):
    ...
+class ConfigUnknownAutoConfSetting(BaseException):
+    ...
class ConfigSizeDivisionByEight(BaseException):
    ...
-@acm
-async def open_skynet_rpc(
-    unique_id: str,
-    rpc_address: str = DEFAULT_RPC_ADDR,
-    security: bool = False,
-    cert_name: Optional[str] = None,
-    key_name: Optional[str] = None
-):
-    tls_config = None
-    if security:
-        # load tls certs
-        if not key_name:
-            key_name = cert_name
-        certs_dir = Path(DEFAULT_CERTS_DIR).resolve()
-        skynet_cert_data = (certs_dir / 'brain.cert').read_text()
-        skynet_cert = load_certificate(FILETYPE_PEM, skynet_cert_data)
-        tls_cert_path = certs_dir / f'{cert_name}.cert'
-        tls_cert_data = tls_cert_path.read_text()
-        tls_cert = load_certificate(FILETYPE_PEM, tls_cert_data)
-        cert_name = tls_cert_path.stem
-        tls_key_data = (certs_dir / f'{key_name}.key').read_text()
-        tls_key = load_privatekey(FILETYPE_PEM, tls_key_data)
-        rpc_address = 'tls+' + rpc_address
-        tls_config = TLSConfig(
-            TLSConfig.MODE_CLIENT,
-            own_key_string=tls_key_data,
-            own_cert_string=tls_cert_data,
-            ca_string=skynet_cert_data)
-    with pynng.Req0(recv_max_size=0) as sock:
-        if security:
-            sock.tls_config = tls_config
-        sock.dial(rpc_address)
-        async def _rpc_call(
-            method: str,
-            params: dict = {},
-            uid: Optional[str] = None
-        ):
-            req = SkynetRPCRequest()
-            req.uid = uid if uid else unique_id
-            req.method = method
-            req.params.update(params)
-            if security:
-                req.auth.cert = cert_name
-                req.auth.sig = sign_protobuf_msg(req, tls_key)
-            ctx = sock.new_context()
-            await ctx.asend(req.SerializeToString())
-            resp = SkynetRPCResponse()
-            resp.ParseFromString(await ctx.arecv())
-            ctx.close()
-            if security:
-                verify_protobuf_msg(resp, skynet_cert)
-            return resp
-        yield _rpc_call
def validate_user_config_request(req: str):
    params = req.split(' ')
@ -120,10 +36,14 @@ def validate_user_config_request(req: str):
        attr = params[1]
        match attr:
-            case 'algo':
+            case 'model' | 'algo':
+                attr = 'model'
                val = params[2]
-                if val not in ALGOS:
-                    raise ConfigUnknownAlgorithm(f'no algo named {val}')
+                shorts = [model_info['short'] for model_info in MODELS.values()]
+                if val not in shorts:
+                    raise ConfigUnknownAlgorithm(f'no model named {val}')
+                val = get_model_by_shortname(val)
            case 'step':
                val = int(params[2])
@ -164,12 +84,48 @@ def validate_user_config_request(req: str):
                raise ConfigUnknownUpscaler(
                    f'\"{val}\" is not a valid upscaler')
+            case 'autoconf':
+                val = params[2]
+                if val == 'on':
+                    val = True
+                elif val == 'off':
+                    val = False
+                else:
+                    raise ConfigUnknownAutoConfSetting(
+                        f'\"{val}\" not a valid setting for autoconf')
            case _:
                raise ConfigUnknownAttribute(
                    f'\"{attr}\" not a configurable parameter')
-        return attr, val, f'config updated! {attr} to {val}'
+        display_val = val
+        if attr == 'seed':
+            if not val:
+                display_val = 'Random'
+        return attr, val, f'config updated! {attr} to {display_val}'
    except ValueError:
        raise ValueError(f'\"{val}\" is not a number silly')
+def perform_auto_conf(config: dict) -> dict:
+    model = config['model']
+    prefered_size_w = 512
+    prefered_size_h = 512
+    if 'xl' in model:
+        prefered_size_w = 1024
+        prefered_size_h = 1024
+    else:
+        prefered_size_w = 512
+        prefered_size_h = 512
+    config['step'] = random.randint(20, 35)
+    config['width'] = prefered_size_w
+    config['height'] = prefered_size_h
+    return config
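# editor's sketch (not part of the diff): how the two helpers above are
# typically driven from a frontend command; the command string and the model
# name are illustrative only, and 30 is assumed to be inside the allowed step
# range checked in the elided part of validate_user_config_request.
from skynet.frontend import validate_user_config_request, perform_auto_conf

attr, val, reply = validate_user_config_request('/config step 30')
# -> ('step', 30, 'config updated! step to 30')
conf = perform_auto_conf({'model': 'stabilityai/stable-diffusion-xl-base-1.0'})
# 'xl' is in the model name, so width/height become 1024 and step is randomized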

View File

@ -0,0 +1,311 @@
#!/usr/bin/python
import json
from json import JSONDecodeError
import random
import logging
import asyncio
from decimal import Decimal
from hashlib import sha256
from datetime import datetime
from contextlib import ExitStack, AsyncExitStack
from contextlib import asynccontextmanager as acm
from leap.cleos import CLEOS
from leap.sugar import Name, asset_from_str, collect_stdout
from leap.hyperion import HyperionAPI
# from telebot.types import InputMediaPhoto
import discord
import io
from PIL import Image, UnidentifiedImageError
from skynet.db import open_database_connection
from skynet.ipfs import get_ipfs_file, AsyncIPFSHTTP
from skynet.constants import *
from . import *
from .bot import DiscordBot
from .utils import *
from .handlers import create_handler_context
from .ui import SkynetView
class SkynetDiscordFrontend:
def __init__(
self,
# token: str,
account: str,
permission: str,
node_url: str,
hyperion_url: str,
db_host: str,
db_user: str,
db_pass: str,
ipfs_url: str,
remote_ipfs_node: str,
key: str,
explorer_domain: str,
ipfs_domain: str
):
# self.token = token
self.account = account
self.permission = permission
self.node_url = node_url
self.hyperion_url = hyperion_url
self.db_host = db_host
self.db_user = db_user
self.db_pass = db_pass
self.ipfs_url = ipfs_url
self.remote_ipfs_node = remote_ipfs_node
self.key = key
self.explorer_domain = explorer_domain
self.ipfs_domain = ipfs_domain
self.bot = DiscordBot(self)
self.cleos = CLEOS(None, None, url=node_url, remote=node_url)
self.hyperion = HyperionAPI(hyperion_url)
self.ipfs_node = AsyncIPFSHTTP(ipfs_url)
self._exit_stack = ExitStack()
self._async_exit_stack = AsyncExitStack()
async def start(self):
if self.remote_ipfs_node:
await self.ipfs_node.connect(self.remote_ipfs_node)
self.db_call = await self._async_exit_stack.enter_async_context(
open_database_connection(
self.db_user, self.db_pass, self.db_host))
create_handler_context(self)
async def stop(self):
await self._async_exit_stack.aclose()
self._exit_stack.close()
@acm
async def open(self):
await self.start()
yield self
await self.stop()
# maybe do this?
# async def update_status_message(
# self, status_msg, new_text: str, **kwargs
# ):
# await self.db_call(
# 'update_user_request_by_sid', status_msg.id, new_text)
# return await self.bot.edit_message_text(
# new_text,
# chat_id=status_msg.chat.id,
# message_id=status_msg.id,
# **kwargs
# )
# async def append_status_message(
# self, status_msg, add_text: str, **kwargs
# ):
# request = await self.db_call('get_user_request_by_sid', status_msg.id)
# await self.update_status_message(
# status_msg,
# request['status'] + add_text,
# **kwargs
# )
async def work_request(
self,
user,
status_msg,
method: str,
params: dict,
ctx: discord.ext.commands.context.Context | discord.Message,
file_id: str | None = None,
binary_data: str = ''
) -> bool:
send = ctx.channel.send
if params['seed'] is None:
params['seed'] = random.randint(0, 0xFFFFFFFF)
sanitized_params = {}
for key, val in params.items():
if isinstance(val, Decimal):
val = str(val)
sanitized_params[key] = val
body = json.dumps({
'method': 'diffuse',
'params': sanitized_params
})
request_time = datetime.now().isoformat()
await status_msg.delete()
msg_text = f'processing a \'{method}\' request by {user.name}\n[{timestamp_pretty()}] *broadcasting transaction to chain...* '
embed = discord.Embed(
title='live updates',
description=msg_text,
color=discord.Color.blue())
message = await send(embed=embed)
reward = '20.0000 GPU'
res = await self.cleos.a_push_action(
'telos.gpu',
'enqueue',
{
'user': Name(self.account),
'request_body': body,
'binary_data': binary_data,
'reward': asset_from_str(reward),
'min_verification': 1
},
self.account, self.key, permission=self.permission
)
if 'code' in res or 'statusCode' in res:
logging.error(json.dumps(res, indent=4))
await send(
    'skynet has suffered an internal error trying to fill this request')
return False
enqueue_tx_id = res['transaction_id']
enqueue_tx_link = f'[**Your request on Skynet Explorer**](https://{self.explorer_domain}/v2/explore/transaction/{enqueue_tx_id})'
msg_text += f'**broadcasted!** \n{enqueue_tx_link}\n[{timestamp_pretty()}] *workers are processing request...* '
embed = discord.Embed(
title='live updates',
description=msg_text,
color=discord.Color.blue())
await message.edit(embed=embed)
out = collect_stdout(res)
request_id, nonce = out.split(':')
request_hash = sha256(
(nonce + body + binary_data).encode('utf-8')).hexdigest().upper()
request_id = int(request_id)
logging.info(f'{request_id} enqueued.')
tx_hash = None
ipfs_hash = None
for i in range(60):
try:
submits = await self.hyperion.aget_actions(
account=self.account,
filter='telos.gpu:submit',
sort='desc',
after=request_time
)
actions = [
action
for action in submits['actions']
if action[
'act']['data']['request_hash'] == request_hash
]
if len(actions) > 0:
tx_hash = actions[0]['trx_id']
data = actions[0]['act']['data']
ipfs_hash = data['ipfs_hash']
worker = data['worker']
logging.info('Found matching submit!')
break
except JSONDecodeError:
logging.error(f'network error while getting actions, retry..')
await asyncio.sleep(1)
if not ipfs_hash:
timeout_text = f'\n[{timestamp_pretty()}] **timeout processing request**'
embed = discord.Embed(
title='live updates',
description=timeout_text,
color=discord.Color.blue())
await message.edit(embed=embed)
return False
tx_link = f'[**Your result on Skynet Explorer**](https://explorer.{DEFAULT_DOMAIN}/v2/explore/transaction/{tx_hash})'
msg_text += f'**request processed!**\n{tx_link}\n[{timestamp_pretty()}] *trying to download image...*\n '
embed = discord.Embed(
title='live updates',
description=msg_text,
color=discord.Color.blue())
await message.edit(embed=embed)
# attempt to get the image and send it
results = {}
ipfs_link = f'https://{self.ipfs_domain}/ipfs/{ipfs_hash}'
ipfs_link_legacy = ipfs_link + '/image.png'
async def get_and_set_results(link: str):
res = await get_ipfs_file(link)
logging.info(f'got response from {link}')
if not res or res.status_code != 200:
logging.warning(f'couldn\'t get ipfs binary data at {link}!')
else:
try:
with Image.open(io.BytesIO(res.raw)) as image:
tmp_buf = io.BytesIO()
image.save(tmp_buf, format='PNG')
png_img = tmp_buf.getvalue()
results[link] = png_img
except UnidentifiedImageError:
logging.warning(f'couldn\'t get ipfs binary data at {link}!')
tasks = [
get_and_set_results(ipfs_link),
get_and_set_results(ipfs_link_legacy)
]
await asyncio.gather(*tasks)
png_img = None
if ipfs_link_legacy in results:
png_img = results[ipfs_link_legacy]
if ipfs_link in results:
png_img = results[ipfs_link]
# reword this function, may not need caption
caption, embed = generate_reply_caption(
    user, params, tx_hash, worker, reward)
if not png_img:
    logging.error(f'couldn\'t get ipfs hosted image at {ipfs_link}!')
    embed.add_field(name='Error', value=f'couldn\'t get ipfs hosted image [**here**]({ipfs_link})!')
    await message.edit(embed=embed, view=SkynetView(self))
else:
    logging.info('success! sending generated image')
    await message.delete()
    if file_id:  # img2img
        embed.set_thumbnail(
            url='https://ipfs.skygpu.net/ipfs/' + binary_data + '/image.png')
        embed.set_image(url=ipfs_link)
        await send(embed=embed, view=SkynetView(self))
    else:  # txt2img
        embed.set_image(url=ipfs_link)
        await send(embed=embed, view=SkynetView(self))
return True

View File

@ -0,0 +1,89 @@
# import os
import discord
import asyncio
# from dotenv import load_dotenv
# from pathlib import Path
from discord.ext import commands
from .ui import SkynetView
# # Auth
# current_dir = Path(__file__).resolve().parent
# # parent_dir = current_dir.parent
# env_file_path = current_dir / ".env"
# load_dotenv(dotenv_path=env_file_path)
#
# discordToken = os.getenv("DISCORD_TOKEN")
# Actual Discord bot.
class DiscordBot(commands.Bot):
def __init__(self, bot, *args, **kwargs):
self.bot = bot
intents = discord.Intents(
messages=True,
guilds=True,
typing=True,
members=True,
presences=True,
reactions=True,
message_content=True,
voice_states=True
)
super().__init__(command_prefix='/', intents=intents, *args, **kwargs)
# async def setup_hook(self):
# db.poll_db.start()
async def on_ready(self):
print(f'{self.user.name} has connected to Discord!')
for guild in self.guilds:
for channel in guild.channels:
if channel.name == "skynet":
await channel.send('Skynet bot online', view=SkynetView(self.bot))
# intro_msg = await channel.send('Welcome to the Skynet discord bot.\nSkynet is a decentralized compute layer, focused on supporting AI paradigms. Skynet leverages blockchain technology to manage work requests and fills. We are currently featuring image generation and support 11 different models. Get started with the /help command, or just click on some buttons. Here is an example command to generate an image:\n/txt2img a big red tractor in a giant field of corn')
intro_msg = await channel.send("Welcome to Skynet's Discord Bot,\n\nSkynet operates as a decentralized compute layer, offering a wide array of support for diverse AI paradigms through the use of blockchain technology. Our present focus is image generation, powered by 11 distinct models.\n\nTo begin exploring, use the '/help' command or directly interact with the provided buttons. Here is an example command to generate an image:\n\n'/txt2img a big red tractor in a giant field of corn'")
await intro_msg.pin()
print("\n==============")
print("Logged in as")
print(self.user.name)
print(self.user.id)
print("==============")
async def on_message(self, message):
if isinstance(message.channel, discord.DMChannel):
return
elif message.channel.name != 'skynet':
return
elif message.author == self.user:
return
await self.process_commands(message)
# await asyncio.sleep(3)
# await message.channel.send('', view=SkynetView(self.bot))
async def on_command_error(self, ctx, error):
if isinstance(error, commands.MissingRequiredArgument):
await ctx.send('You missed a required argument, please try again.')
# async def on_message(self, message):
# print(f"message from {message.author} what he said {message.content}")
# await message.channel.send(message.content)
# bot=DiscordBot()
# @bot.command(name='config', help='Responds with the configuration')
# async def config(ctx):
# response = "This is the bot configuration" # Put your bot configuration here
# await ctx.send(response)
#
# @bot.command(name='helper', help='Responds with a help')
# async def helper(ctx):
# response = "This is help information" # Put your help response here
# await ctx.send(response)
#
# @bot.command(name='txt2img', help='Responds with an image')
# async def txt2img(ctx, *, arg):
# response = f"This is your prompt: {arg}"
# await ctx.send(response)
# bot.run(discordToken)

View File

@ -0,0 +1,603 @@
#!/usr/bin/python
import io
import json
import logging
from datetime import datetime, timedelta
from PIL import Image
# from telebot.types import CallbackQuery, Message
from skynet.frontend import validate_user_config_request
from skynet.constants import *
from .ui import SkynetView
def create_handler_context(frontend: 'SkynetDiscordFrontend'):
bot = frontend.bot
cleos = frontend.cleos
db_call = frontend.db_call
work_request = frontend.work_request
ipfs_node = frontend.ipfs_node
@bot.command(name='config', help='Responds with the configuration')
async def set_config(ctx):
user = ctx.author
try:
attr, val, reply_txt = validate_user_config_request(
ctx.message.content)
logging.info(f'user config update: {attr} to {val}')
await db_call('update_user_config', user.id, attr, val)
logging.info('done')
except BaseException as e:
reply_txt = str(e)
finally:
await ctx.reply(content=reply_txt, view=SkynetView(frontend))
bot.remove_command('help')
@bot.command(name='help', help='Responds with a help')
async def help(ctx):
splt_msg = ctx.message.content.split(' ')
if len(splt_msg) == 1:
await ctx.send(content=f'```{HELP_TEXT}```', view=SkynetView(frontend))
else:
param = splt_msg[1]
if param in HELP_TOPICS:
await ctx.send(content=f'```{HELP_TOPICS[param]}```', view=SkynetView(frontend))
else:
await ctx.send(content=f'```{HELP_UNKWNOWN_PARAM}```', view=SkynetView(frontend))
@bot.command(name='cool', help='Display a list of cool prompt words')
async def send_cool_words(ctx):
clean_cool_word = '\n'.join(CLEAN_COOL_WORDS)
await ctx.send(content=f'```{clean_cool_word}```', view=SkynetView(frontend))
@bot.command(name='stats', help='See user statistics' )
async def user_stats(ctx):
user = ctx.author
await db_call('get_or_create_user', user.id)
generated, joined, role = await db_call('get_user_stats', user.id)
stats_str = f'```generated: {generated}\n'
stats_str += f'joined: {joined}\n'
stats_str += f'role: {role}\n```'
await ctx.reply(stats_str, view=SkynetView(frontend))
@bot.command(name='donate', help='See donate info')
async def donation_info(ctx):
await ctx.reply(
f'```\n{DONATION_INFO}```', view=SkynetView(frontend))
@bot.command(name='txt2img', help='Responds with an image')
async def send_txt2img(ctx):
# grab user from ctx
user = ctx.author
user_row = await db_call('get_or_create_user', user.id)
# init new msg
init_msg = 'started processing txt2img request...'
status_msg = await ctx.send(init_msg)
await db_call(
'new_user_request', user.id, ctx.message.id, status_msg.id, status=init_msg)
prompt = ' '.join(ctx.message.content.split(' ')[1:])
if len(prompt) == 0:
await status_msg.edit(content=
'Empty text prompt ignored.'
)
await db_call('update_user_request', status_msg.id, 'Empty text prompt ignored.')
return
logging.info(f'mid: {ctx.message.id}')
user_config = {**user_row}
del user_config['id']
params = {
'prompt': prompt,
**user_config
}
await db_call(
'update_user_stats', user.id, 'txt2img', last_prompt=prompt)
success = await work_request(user, status_msg, 'txt2img', params, ctx)
if success:
await db_call('increment_generated', user.id)
@bot.command(name='redo', help='Redo last request')
async def redo(ctx):
init_msg = 'started processing redo request...'
status_msg = await ctx.send(init_msg)
user = ctx.author
method = await db_call('get_last_method_of', user.id)
prompt = await db_call('get_last_prompt_of', user.id)
file_id = None
binary = ''
if method == 'img2img':
file_id = await db_call('get_last_file_of', user.id)
binary = await db_call('get_last_binary_of', user.id)
if not prompt:
await status_msg.edit(
content='no last prompt found, do a txt2img cmd first!',
view=SkynetView(frontend)
)
return
user_row = await db_call('get_or_create_user', user.id)
await db_call(
'new_user_request', user.id, ctx.message.id, status_msg.id, status=init_msg)
user_config = {**user_row}
del user_config['id']
params = {
'prompt': prompt,
**user_config
}
success = await work_request(
user, status_msg, 'redo', params, ctx,
file_id=file_id,
binary_data=binary
)
if success:
await db_call('increment_generated', user.id)
@bot.command(name='img2img', help='Responds with an image')
async def send_img2img(ctx):
# if isinstance(message_or_query, CallbackQuery):
# query = message_or_query
# message = query.message
# user = query.from_user
# chat = query.message.chat
#
# else:
# message = message_or_query
# user = message.from_user
# chat = message.chat
# reply_id = None
# if chat.type == 'group' and chat.id == GROUP_ID:
# reply_id = message.message_id
#
user = ctx.author
user_row = await db_call('get_or_create_user', user.id)
# init new msg
init_msg = 'started processing img2img request...'
status_msg = await ctx.send(init_msg)
await db_call(
'new_user_request', user.id, ctx.message.id, status_msg.id, status=init_msg)
if not ctx.message.content.startswith('/img2img'):
await ctx.reply(
'For image to image you need to add /img2img to the beginning of your caption'
)
return
prompt = ' '.join(ctx.message.content.split(' ')[1:])
if len(prompt) == 0:
await ctx.reply('Empty text prompt ignored.')
return
# file_id = message.photo[-1].file_id
# file_path = (await bot.get_file(file_id)).file_path
# image_raw = await bot.download_file(file_path)
#
file = ctx.message.attachments[-1]
file_id = str(file.id)
# file bytes
image_raw = await file.read()
with Image.open(io.BytesIO(image_raw)) as image:
w, h = image.size
if w > 512 or h > 512:
logging.warning(f'user sent img of size {image.size}')
image.thumbnail((512, 512))
logging.warning(f'resized it to {image.size}')
image_loc = 'ipfs-staging/image.png'
image.save(image_loc, format='PNG')
ipfs_info = await ipfs_node.add(image_loc)
ipfs_hash = ipfs_info['Hash']
await ipfs_node.pin(ipfs_hash)
logging.info(f'published input image {ipfs_hash} on ipfs')
logging.info(f'mid: {ctx.message.id}')
user_config = {**user_row}
del user_config['id']
params = {
'prompt': prompt,
**user_config
}
await db_call(
'update_user_stats',
user.id,
'img2img',
last_file=file_id,
last_prompt=prompt,
last_binary=ipfs_hash
)
success = await work_request(
user, status_msg, 'img2img', params, ctx,
file_id=file_id,
binary_data=ipfs_hash
)
if success:
await db_call('increment_generated', user.id)
# TODO: DELETE BELOW
# user = 'testworker3'
# status_msg = 'status'
# params = {
# 'prompt': arg,
# 'seed': None,
# 'step': 35,
# 'guidance': 7.5,
# 'strength': 0.5,
# 'width': 512,
# 'height': 512,
# 'upscaler': None,
# 'model': 'prompthero/openjourney',
# }
#
# ec = await work_request(user, status_msg, 'txt2img', params, ctx)
# print(ec)
# if ec == 0:
# await db_call('increment_generated', user.id)
# response = f"This is your prompt: {arg}"
# await ctx.send(response)
# generic / simple handlers
# @bot.message_handler(commands=['help'])
# async def send_help(message):
# splt_msg = message.text.split(' ')
#
# if len(splt_msg) == 1:
# await bot.reply_to(message, HELP_TEXT)
#
# else:
# param = splt_msg[1]
# if param in HELP_TOPICS:
# await bot.reply_to(message, HELP_TOPICS[param])
#
# else:
# await bot.reply_to(message, HELP_UNKWNOWN_PARAM)
#
# @bot.message_handler(commands=['cool'])
# async def send_cool_words(message):
# await bot.reply_to(message, '\n'.join(COOL_WORDS))
#
# @bot.message_handler(commands=['queue'])
# async def queue(message):
# an_hour_ago = datetime.now() - timedelta(hours=1)
# queue = await cleos.aget_table(
# 'telos.gpu', 'telos.gpu', 'queue',
# index_position=2,
# key_type='i64',
# sort='desc',
# lower_bound=int(an_hour_ago.timestamp())
# )
# await bot.reply_to(
# message, f'Total requests on skynet queue: {len(queue)}')
# @bot.message_handler(commands=['config'])
# async def set_config(message):
# user = message.from_user.id
# try:
# attr, val, reply_txt = validate_user_config_request(
# message.text)
#
# logging.info(f'user config update: {attr} to {val}')
# await db_call('update_user_config', user, attr, val)
# logging.info('done')
#
# except BaseException as e:
# reply_txt = str(e)
#
# finally:
# await bot.reply_to(message, reply_txt)
#
# @bot.message_handler(commands=['stats'])
# async def user_stats(message):
# user = message.from_user.id
#
# await db_call('get_or_create_user', user)
# generated, joined, role = await db_call('get_user_stats', user)
#
# stats_str = f'generated: {generated}\n'
# stats_str += f'joined: {joined}\n'
# stats_str += f'role: {role}\n'
#
# await bot.reply_to(
# message, stats_str)
#
# @bot.message_handler(commands=['donate'])
# async def donation_info(message):
# await bot.reply_to(
# message, DONATION_INFO)
#
# @bot.message_handler(commands=['say'])
# async def say(message):
# chat = message.chat
# user = message.from_user
#
# if (chat.type == 'group') or (user.id != 383385940):
# return
#
# await bot.send_message(GROUP_ID, message.text[4:])
# generic txt2img handler
# async def _generic_txt2img(message_or_query):
# if isinstance(message_or_query, CallbackQuery):
# query = message_or_query
# message = query.message
# user = query.from_user
# chat = query.message.chat
#
# else:
# message = message_or_query
# user = message.from_user
# chat = message.chat
#
# reply_id = None
# if chat.type == 'group' and chat.id == GROUP_ID:
# reply_id = message.message_id
#
# user_row = await db_call('get_or_create_user', user.id)
#
# # init new msg
# init_msg = 'started processing txt2img request...'
# status_msg = await bot.reply_to(message, init_msg)
# await db_call(
# 'new_user_request', user.id, message.id, status_msg.id, status=init_msg)
#
# prompt = ' '.join(message.text.split(' ')[1:])
#
# if len(prompt) == 0:
# await bot.edit_message_text(
# 'Empty text prompt ignored.',
# chat_id=status_msg.chat.id,
# message_id=status_msg.id
# )
# await db_call('update_user_request', status_msg.id, 'Empty text prompt ignored.')
# return
#
# logging.info(f'mid: {message.id}')
#
# user_config = {**user_row}
# del user_config['id']
#
# params = {
# 'prompt': prompt,
# **user_config
# }
#
# await db_call(
# 'update_user_stats', user.id, 'txt2img', last_prompt=prompt)
#
# ec = await work_request(user, status_msg, 'txt2img', params)
# if ec == 0:
# await db_call('increment_generated', user.id)
#
#
# # generic img2img handler
#
# async def _generic_img2img(message_or_query):
# if isinstance(message_or_query, CallbackQuery):
# query = message_or_query
# message = query.message
# user = query.from_user
# chat = query.message.chat
#
# else:
# message = message_or_query
# user = message.from_user
# chat = message.chat
#
# reply_id = None
# if chat.type == 'group' and chat.id == GROUP_ID:
# reply_id = message.message_id
#
# user_row = await db_call('get_or_create_user', user.id)
#
# # init new msg
# init_msg = 'started processing txt2img request...'
# status_msg = await bot.reply_to(message, init_msg)
# await db_call(
# 'new_user_request', user.id, message.id, status_msg.id, status=init_msg)
#
# if not message.caption.startswith('/img2img'):
# await bot.reply_to(
# message,
# 'For image to image you need to add /img2img to the beggining of your caption'
# )
# return
#
# prompt = ' '.join(message.caption.split(' ')[1:])
#
# if len(prompt) == 0:
# await bot.reply_to(message, 'Empty text prompt ignored.')
# return
#
# file_id = message.photo[-1].file_id
# file_path = (await bot.get_file(file_id)).file_path
# image_raw = await bot.download_file(file_path)
# with Image.open(io.BytesIO(image_raw)) as image:
# w, h = image.size
#
# if w > 512 or h > 512:
# logging.warning(f'user sent img of size {image.size}')
# image.thumbnail((512, 512))
# logging.warning(f'resized it to {image.size}')
#
# image.save(f'ipfs-docker-staging/image.png', format='PNG')
#
# ipfs_hash = ipfs_node.add('image.png')
# ipfs_node.pin(ipfs_hash)
#
# logging.info(f'published input image {ipfs_hash} on ipfs')
#
# logging.info(f'mid: {message.id}')
#
# user_config = {**user_row}
# del user_config['id']
#
# params = {
# 'prompt': prompt,
# **user_config
# }
#
# await db_call(
# 'update_user_stats',
# user.id,
# 'img2img',
# last_file=file_id,
# last_prompt=prompt,
# last_binary=ipfs_hash
# )
#
# ec = await work_request(
# user, status_msg, 'img2img', params,
# file_id=file_id,
# binary_data=ipfs_hash
# )
#
# if ec == 0:
# await db_call('increment_generated', user.id)
#
# generic redo handler
# async def _redo(message_or_query):
# is_query = False
# if isinstance(message_or_query, CallbackQuery):
# is_query = True
# query = message_or_query
# message = query.message
# user = query.from_user
# chat = query.message.chat
#
# elif isinstance(message_or_query, Message):
# message = message_or_query
# user = message.from_user
# chat = message.chat
#
# init_msg = 'started processing redo request...'
# if is_query:
# status_msg = await bot.send_message(chat.id, init_msg)
#
# else:
# status_msg = await bot.reply_to(message, init_msg)
#
# method = await db_call('get_last_method_of', user.id)
# prompt = await db_call('get_last_prompt_of', user.id)
#
# file_id = None
# binary = ''
# if method == 'img2img':
# file_id = await db_call('get_last_file_of', user.id)
# binary = await db_call('get_last_binary_of', user.id)
#
# if not prompt:
# await bot.reply_to(
# message,
# 'no last prompt found, do a txt2img cmd first!'
# )
# return
#
#
# user_row = await db_call('get_or_create_user', user.id)
# await db_call(
# 'new_user_request', user.id, message.id, status_msg.id, status=init_msg)
# user_config = {**user_row}
# del user_config['id']
#
# params = {
# 'prompt': prompt,
# **user_config
# }
#
# await work_request(
# user, status_msg, 'redo', params,
# file_id=file_id,
# binary_data=binary
# )
# "proxy" handlers just request routers
# @bot.message_handler(commands=['txt2img'])
# async def send_txt2img(message):
# await _generic_txt2img(message)
#
# @bot.message_handler(func=lambda message: True, content_types=[
# 'photo', 'document'])
# async def send_img2img(message):
# await _generic_img2img(message)
#
# @bot.message_handler(commands=['img2img'])
# async def img2img_missing_image(message):
# await bot.reply_to(
# message,
# 'seems you tried to do an img2img command without sending image'
# )
#
# @bot.message_handler(commands=['redo'])
# async def redo(message):
# await _redo(message)
#
# @bot.callback_query_handler(func=lambda call: True)
# async def callback_query(call):
# msg = json.loads(call.data)
# logging.info(call.data)
# method = msg.get('method')
# match method:
# case 'redo':
# await _redo(call)
# catch all handler for things we dont support
# @bot.message_handler(func=lambda message: True)
# async def echo_message(message):
# if message.text[0] == '/':
# await bot.reply_to(message, UNKNOWN_CMD_TEXT)


@ -0,0 +1,311 @@
import io
import discord
from PIL import Image
import logging
from skynet.constants import *
from skynet.frontend import validate_user_config_request
class SkynetView(discord.ui.View):
def __init__(self, bot):
self.bot = bot
super().__init__(timeout=None)
self.add_item(RedoButton('redo', discord.ButtonStyle.primary, self.bot))
self.add_item(Txt2ImgButton('txt2img', discord.ButtonStyle.primary, self.bot))
self.add_item(Img2ImgButton('img2img', discord.ButtonStyle.primary, self.bot))
self.add_item(StatsButton('stats', discord.ButtonStyle.secondary, self.bot))
self.add_item(DonateButton('donate', discord.ButtonStyle.secondary, self.bot))
self.add_item(ConfigButton('config', discord.ButtonStyle.secondary, self.bot))
self.add_item(HelpButton('help', discord.ButtonStyle.secondary, self.bot))
self.add_item(CoolButton('cool', discord.ButtonStyle.secondary, self.bot))
class Txt2ImgButton(discord.ui.Button):
def __init__(self, label: str, style: discord.ButtonStyle, bot):
self.bot = bot
super().__init__(label=label, style=style)
async def callback(self, interaction):
db_call = self.bot.db_call
work_request = self.bot.work_request
msg = await grab('Enter your prompt:', interaction)
# grab user from msg
user = msg.author
user_row = await db_call('get_or_create_user', user.id)
# init new msg
init_msg = 'started processing txt2img request...'
status_msg = await msg.channel.send(init_msg)
await db_call(
'new_user_request', user.id, msg.id, status_msg.id, status=init_msg)
prompt = msg.content
if len(prompt) == 0:
await status_msg.edit(content=
'Empty text prompt ignored.'
)
await db_call('update_user_request', status_msg.id, 'Empty text prompt ignored.')
return
logging.info(f'mid: {msg.id}')
user_config = {**user_row}
del user_config['id']
params = {
'prompt': prompt,
**user_config
}
await db_call(
'update_user_stats', user.id, 'txt2img', last_prompt=prompt)
success = await work_request(user, status_msg, 'txt2img', params, msg)
if success:
await db_call('increment_generated', user.id)
class Img2ImgButton(discord.ui.Button):
def __init__(self, label: str, style: discord.ButtonStyle, bot):
self.bot = bot
super().__init__(label=label, style=style)
async def callback(self, interaction):
db_call = self.bot.db_call
work_request = self.bot.work_request
ipfs_node = self.bot.ipfs_node
msg = await grab('Attach an Image. Enter your prompt:', interaction)
user = msg.author
user_row = await db_call('get_or_create_user', user.id)
# init new msg
init_msg = 'started processing img2img request...'
status_msg = await msg.channel.send(init_msg)
await db_call(
'new_user_request', user.id, msg.id, status_msg.id, status=init_msg)
# if not msg.content.startswith('/img2img'):
# await msg.reply(
# 'For image to image you need to add /img2img to the beggining of your caption'
# )
# return
prompt = msg.content
if len(prompt) == 0:
await msg.reply('Empty text prompt ignored.')
return
# file_id = message.photo[-1].file_id
# file_path = (await bot.get_file(file_id)).file_path
# image_raw = await bot.download_file(file_path)
#
file = msg.attachments[-1]
file_id = str(file.id)
# file bytes
image_raw = await file.read()
with Image.open(io.BytesIO(image_raw)) as image:
w, h = image.size
if w > 512 or h > 512:
logging.warning(f'user sent img of size {image.size}')
image.thumbnail((512, 512))
logging.warning(f'resized it to {image.size}')
image.save(f'ipfs-docker-staging/image.png', format='PNG')
ipfs_hash = ipfs_node.add('image.png')
ipfs_node.pin(ipfs_hash)
logging.info(f'published input image {ipfs_hash} on ipfs')
logging.info(f'mid: {msg.id}')
user_config = {**user_row}
del user_config['id']
params = {
'prompt': prompt,
**user_config
}
await db_call(
'update_user_stats',
user.id,
'img2img',
last_file=file_id,
last_prompt=prompt,
last_binary=ipfs_hash
)
success = await work_request(
user, status_msg, 'img2img', params, msg,
file_id=file_id,
binary_data=ipfs_hash
)
if success:
await db_call('increment_generated', user.id)
class RedoButton(discord.ui.Button):
def __init__(self, label: str, style: discord.ButtonStyle, bot):
self.bot = bot
super().__init__(label=label, style=style)
async def callback(self, interaction):
db_call = self.bot.db_call
work_request = self.bot.work_request
init_msg = 'started processing redo request...'
await interaction.response.send_message(init_msg)
status_msg = await interaction.original_response()
user = interaction.user
method = await db_call('get_last_method_of', user.id)
prompt = await db_call('get_last_prompt_of', user.id)
file_id = None
binary = ''
if method == 'img2img':
file_id = await db_call('get_last_file_of', user.id)
binary = await db_call('get_last_binary_of', user.id)
if not prompt:
await status_msg.edit(
content='no last prompt found, do a txt2img cmd first!',
view=SkynetView(self.bot)
)
return
user_row = await db_call('get_or_create_user', user.id)
await db_call(
'new_user_request', user.id, interaction.id, status_msg.id, status=init_msg)
user_config = {**user_row}
del user_config['id']
params = {
'prompt': prompt,
**user_config
}
success = await work_request(
user, status_msg, 'redo', params, interaction,
file_id=file_id,
binary_data=binary
)
if success:
await db_call('increment_generated', user.id)
class ConfigButton(discord.ui.Button):
def __init__(self, label: str, style: discord.ButtonStyle, bot):
self.bot = bot
super().__init__(label=label, style=style)
async def callback(self, interaction):
db_call = self.bot.db_call
msg = await grab('What params do you want to change? (format: <param> <value>)', interaction)
user = interaction.user
try:
attr, val, reply_txt = validate_user_config_request(
'/config ' + msg.content)
logging.info(f'user config update: {attr} to {val}')
await db_call('update_user_config', user.id, attr, val)
logging.info('done')
except BaseException as e:
reply_txt = str(e)
finally:
await msg.reply(content=reply_txt, view=SkynetView(self.bot))
class StatsButton(discord.ui.Button):
def __init__(self, label: str, style: discord.ButtonStyle, bot):
self.bot = bot
super().__init__(label=label, style=style)
async def callback(self, interaction):
db_call = self.bot.db_call
user = interaction.user
await db_call('get_or_create_user', user.id)
generated, joined, role = await db_call('get_user_stats', user.id)
stats_str = f'```generated: {generated}\n'
stats_str += f'joined: {joined}\n'
stats_str += f'role: {role}\n```'
await interaction.response.send_message(
content=stats_str, view=SkynetView(self.bot))
class DonateButton(discord.ui.Button):
def __init__(self, label: str, style: discord.ButtonStyle, bot):
self.bot = bot
super().__init__(label=label, style=style)
async def callback(self, interaction):
await interaction.response.send_message(
content=f'```\n{DONATION_INFO}```',
view=SkynetView(self.bot))
class CoolButton(discord.ui.Button):
def __init__(self, label: str, style: discord.ButtonStyle, bot):
self.bot = bot
super().__init__(label=label, style=style)
async def callback(self, interaction):
clean_cool_word = '\n'.join(CLEAN_COOL_WORDS)
await interaction.response.send_message(
content=f'```{clean_cool_word}```',
view=SkynetView(self.bot))
class HelpButton(discord.ui.Button):
def __init__(self, label: str, style: discord.ButtonStyle, bot):
self.bot = bot
super().__init__(label=label, style=style)
async def callback(self, interaction):
msg = await grab('What would you like help with? (a for all)', interaction)
param = msg.content
if param == 'a':
await msg.reply(content=f'```{HELP_TEXT}```', view=SkynetView(self.bot))
else:
if param in HELP_TOPICS:
await msg.reply(content=f'```{HELP_TOPICS[param]}```', view=SkynetView(self.bot))
else:
await msg.reply(content=f'```{HELP_UNKWNOWN_PARAM}```', view=SkynetView(self.bot))
async def grab(prompt, interaction):
def vet(m):
return m.author == interaction.user and m.channel == interaction.channel
await interaction.response.send_message(prompt, ephemeral=True)
message = await interaction.client.wait_for('message', check=vet)
return message
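
The grab() helper above is what makes every button interactive: it sends an ephemeral prompt and then waits for the next message from the same user in the same channel. A minimal sketch of an extra button reusing that pattern (the button itself is illustrative and not part of the commit):

# illustrative only: an additional button wired up like the ones above
class EchoButton(discord.ui.Button):
    def __init__(self, label: str, style: discord.ButtonStyle, bot):
        self.bot = bot
        super().__init__(label=label, style=style)

    async def callback(self, interaction):
        # ask for input via grab(), then echo it back with the standard view attached
        msg = await grab('Say something:', interaction)
        await msg.reply(content=msg.content, view=SkynetView(self.bot))

It would be registered like the other buttons, via self.add_item(...) in SkynetView.__init__.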


@ -0,0 +1,122 @@
#!/usr/bin/python
import json
import logging
import traceback
from datetime import datetime, timezone
from telebot.types import InlineKeyboardButton, InlineKeyboardMarkup
from telebot.async_telebot import ExceptionHandler
from telebot.formatting import hlink
import discord
from skynet.constants import *
def timestamp_pretty():
return datetime.now(timezone.utc).strftime('%H:%M:%S')
def tg_user_pretty(tguser):
if tguser.username:
return f'@{tguser.username}'
else:
return f'{tguser.first_name} id: {tguser.id}'
class SKYExceptionHandler(ExceptionHandler):
def handle(exception):
traceback.print_exc()
def build_redo_menu():
btn_redo = InlineKeyboardButton("Redo", callback_data=json.dumps({'method': 'redo'}))
inline_keyboard = InlineKeyboardMarkup()
inline_keyboard.add(btn_redo)
return inline_keyboard
def prepare_metainfo_caption(user, worker: str, reward: str, meta: dict, embed) -> str:
prompt = meta["prompt"]
if len(prompt) > 256:
prompt = prompt[:256]
gen_str = f'generated by {user.name}\n'
gen_str += f'performed by {worker}\n'
gen_str += f'reward: {reward}\n'
embed.add_field(
name='General Info', value=f'```{gen_str}```', inline=False)
# meta_str = f'__by {user.name}__\n'
# meta_str += f'*performed by {worker}*\n'
# meta_str += f'__**reward: {reward}**__\n'
embed.add_field(name='Prompt', value=f'```{prompt}\n```', inline=False)
# meta_str = f'`prompt:` {prompt}\n'
meta_str = f'seed: {meta["seed"]}\n'
meta_str += f'step: {meta["step"]}\n'
meta_str += f'guidance: {meta["guidance"]}\n'
if meta['strength']:
meta_str += f'strength: {meta["strength"]}\n'
meta_str += f'algo: {meta["model"]}\n'
if meta['upscaler']:
meta_str += f'upscaler: {meta["upscaler"]}\n'
embed.add_field(name='Parameters', value=f'```{meta_str}```', inline=False)
foot_str = f'Made with Skynet v{VERSION}\n'
foot_str += f'JOIN THE SWARM: https://discord.gg/JYM4YPMgK'
embed.set_footer(text=foot_str)
return meta_str
def generate_reply_caption(
user, # discord user
params: dict,
tx_hash: str,
worker: str,
reward: str,
explorer_domain: str
):
explorer_link = discord.Embed(
title='[SKYNET Transaction Explorer]',
url=f'https://{explorer_domain}/v2/explore/transaction/{tx_hash}',
color=discord.Color.blue())
meta_info = prepare_metainfo_caption(user, worker, reward, params, explorer_link)
# why do we have this?
final_msg = '\n'.join([
'Worker finished your task!',
# explorer_link,
f'PARAMETER INFO:\n{meta_info}'
])
final_msg = '\n'.join([
# f'***{explorer_link}***',
f'{meta_info}'
])
logging.info(final_msg)
return final_msg, explorer_link
async def get_global_config(cleos):
return (await cleos.aget_table(
'telos.gpu', 'telos.gpu', 'config'))[0]
async def get_user_nonce(cleos, user: str):
return (await cleos.aget_table(
'telos.gpu', 'telos.gpu', 'users',
index_position=1,
key_type='name',
lower_bound=user,
upper_bound=user
))[0]['nonce']


@ -1,292 +0,0 @@
#!/usr/bin/python
import io
import zlib
import logging
from datetime import datetime
import pynng
from PIL import Image
from trio_asyncio import aio_as_trio
from telebot.types import (
InputFile, InputMediaPhoto, InlineKeyboardButton, InlineKeyboardMarkup
)
from telebot.async_telebot import AsyncTeleBot
from ..constants import *
from . import *
PREFIX = 'tg'
def build_redo_menu():
btn_redo = InlineKeyboardButton("Redo", callback_data=json.dumps({'method': 'redo'}))
inline_keyboard = InlineKeyboardMarkup()
inline_keyboard.add(btn_redo)
return inline_keyboard
def prepare_metainfo_caption(tguser, meta: dict) -> str:
prompt = meta["prompt"]
if len(prompt) > 256:
prompt = prompt[:256]
if tguser.username:
user = f'@{tguser.username}'
else:
user = f'{tguser.first_name} id: {tguser.id}'
meta_str = f'by {user}\n'
meta_str += f'prompt: \"{prompt}\"\n'
meta_str += f'seed: {meta["seed"]}\n'
meta_str += f'step: {meta["step"]}\n'
meta_str += f'guidance: {meta["guidance"]}\n'
if meta['strength']:
meta_str += f'strength: {meta["strength"]}\n'
meta_str += f'algo: \"{meta["algo"]}\"\n'
if meta['upscaler']:
meta_str += f'upscaler: \"{meta["upscaler"]}\"\n'
meta_str += f'sampler: k_euler_ancestral\n'
meta_str += f'skynet v{VERSION}'
return meta_str
async def run_skynet_telegram(
tg_token: str,
key_name: str = 'telegram-frontend',
cert_name: str = 'whitelist/telegram-frontend',
rpc_address: str = DEFAULT_RPC_ADDR
):
logging.basicConfig(level=logging.INFO)
bot = AsyncTeleBot(tg_token)
async with open_skynet_rpc(
'skynet-telegram-0',
rpc_address=rpc_address,
security=True,
cert_name=cert_name,
key_name=key_name
) as rpc_call:
async def _rpc_call(
uid: int,
method: str,
params: dict = {}
):
return await rpc_call(
method, params, uid=f'{PREFIX}+{uid}')
@bot.message_handler(commands=['help'])
async def send_help(message):
splt_msg = message.text.split(' ')
if len(splt_msg) == 1:
await bot.reply_to(message, HELP_TEXT)
else:
param = splt_msg[1]
if param in HELP_TOPICS:
await bot.reply_to(message, HELP_TOPICS[param])
else:
await bot.reply_to(message, HELP_UNKWNOWN_PARAM)
@bot.message_handler(commands=['cool'])
async def send_cool_words(message):
await bot.reply_to(message, '\n'.join(COOL_WORDS))
@bot.message_handler(commands=['txt2img'])
async def send_txt2img(message):
chat = message.chat
prompt = ' '.join(message.text.split(' ')[1:])
if len(prompt) == 0:
await bot.reply_to(message, 'Empty text prompt ignored.')
return
logging.info(f'mid: {message.id}')
resp = await _rpc_call(
message.from_user.id,
'txt2img',
{'prompt': prompt}
)
logging.info(f'resp to {message.id} arrived')
resp_txt = ''
result = MessageToDict(resp.result)
if 'error' in resp.result:
resp_txt = resp.result['message']
else:
logging.info(result['id'])
img_raw = zlib.decompress(bytes.fromhex(result['img']))
logging.info(f'got image of size: {len(img_raw)}')
img = Image.open(io.BytesIO(img_raw))
await bot.send_photo(
GROUP_ID,
caption=prepare_metainfo_caption(message.from_user, result['meta']['meta']),
photo=img,
reply_markup=build_redo_menu()
)
return
await bot.reply_to(message, resp_txt)
@bot.message_handler(func=lambda message: True, content_types=['photo'])
async def send_img2img(message):
chat = message.chat
if not message.caption.startswith('/img2img'):
return
prompt = ' '.join(message.caption.split(' ')[1:])
if len(prompt) == 0:
await bot.reply_to(message, 'Empty text prompt ignored.')
return
file_id = message.photo[-1].file_id
file_path = (await bot.get_file(file_id)).file_path
file_raw = await bot.download_file(file_path)
img = zlib.compress(file_raw)
logging.info(f'mid: {message.id}')
resp = await _rpc_call(
message.from_user.id,
'img2img',
{'prompt': prompt, 'img': img.hex()}
)
logging.info(f'resp to {message.id} arrived')
resp_txt = ''
result = MessageToDict(resp.result)
if 'error' in resp.result:
resp_txt = resp.result['message']
else:
logging.info(result['id'])
img_raw = zlib.decompress(bytes.fromhex(result['img']))
logging.info(f'got image of size: {len(img_raw)}')
img = Image.open(io.BytesIO(img_raw))
await bot.send_media_group(
GROUP_ID,
media=[
InputMediaPhoto(file_id),
InputMediaPhoto(
img,
caption=prepare_metainfo_caption(message.from_user, result['meta']['meta'])
)
]
)
return
await bot.reply_to(message, resp_txt)
@bot.message_handler(commands=['img2img'])
async def redo_txt2img(message):
await bot.reply_to(
message,
'seems you tried to do an img2img command without sending image'
)
async def _redo(message):
resp = await _rpc_call(message.from_user.id, 'redo')
resp_txt = ''
result = MessageToDict(resp.result)
if 'error' in resp.result:
resp_txt = resp.result['message']
else:
logging.info(result['id'])
img_raw = zlib.decompress(bytes.fromhex(result['img']))
logging.info(f'got image of size: {len(img_raw)}')
img = Image.open(io.BytesIO(img_raw))
await bot.send_photo(
GROUP_ID,
caption=prepare_metainfo_caption(message.from_user, result['meta']['meta']),
photo=img,
reply_markup=build_redo_menu()
)
return
await bot.reply_to(message, resp_txt)
@bot.message_handler(commands=['redo'])
async def redo_txt2img(message):
await _redo(message)
@bot.message_handler(commands=['config'])
async def set_config(message):
rpc_params = {}
try:
attr, val, reply_txt = validate_user_config_request(
message.text)
resp = await _rpc_call(
message.from_user.id,
'config', {'attr': attr, 'val': val})
except BaseException as e:
reply_txt = str(e)
finally:
await bot.reply_to(message, reply_txt)
@bot.message_handler(commands=['stats'])
async def user_stats(message):
resp = await _rpc_call(
message.from_user.id,
'stats',
{}
)
stats = resp.result
stats_str = f'generated: {stats["generated"]}\n'
stats_str += f'joined: {stats["joined"]}\n'
stats_str += f'role: {stats["role"]}\n'
await bot.reply_to(
message, stats_str)
@bot.message_handler(commands=['donate'])
async def donation_info(message):
await bot.reply_to(
message, DONATION_INFO)
@bot.message_handler(commands=['say'])
async def say(message):
chat = message.chat
user = message.from_user
if (chat.type == 'group') or (user.id != 383385940):
return
await bot.send_message(GROUP_ID, message.text[4:])
@bot.message_handler(func=lambda message: True)
async def echo_message(message):
if message.text[0] == '/':
await bot.reply_to(message, UNKNOWN_CMD_TEXT)
@bot.callback_query_handler(func=lambda call: True)
async def callback_query(call):
msg = json.loads(call.data)
logging.info(call.data)
method = msg.get('method')
match method:
case 'redo':
await _redo(call)
await aio_as_trio(bot.infinity_polling())


@ -0,0 +1,319 @@
#!/usr/bin/python
import io
import random
import logging
import asyncio
from PIL import Image, UnidentifiedImageError
from json import JSONDecodeError
from decimal import Decimal
from hashlib import sha256
from datetime import datetime
from contextlib import AsyncExitStack
from contextlib import asynccontextmanager as acm
from leap.cleos import CLEOS
from leap.sugar import Name, asset_from_str, collect_stdout
from leap.hyperion import HyperionAPI
from telebot.types import InputMediaPhoto
from telebot.async_telebot import AsyncTeleBot
from skynet.db import open_database_connection
from skynet.ipfs import get_ipfs_file, AsyncIPFSHTTP
from skynet.constants import *
from . import *
from .utils import *
from .handlers import create_handler_context
class SkynetTelegramFrontend:
def __init__(
self,
token: str,
account: str,
permission: str,
node_url: str,
hyperion_url: str,
db_host: str,
db_user: str,
db_pass: str,
ipfs_node: str,
remote_ipfs_node: str | None,
key: str,
explorer_domain: str,
ipfs_domain: str
):
self.token = token
self.account = account
self.permission = permission
self.node_url = node_url
self.hyperion_url = hyperion_url
self.db_host = db_host
self.db_user = db_user
self.db_pass = db_pass
self.remote_ipfs_node = remote_ipfs_node
self.key = key
self.explorer_domain = explorer_domain
self.ipfs_domain = ipfs_domain
self.bot = AsyncTeleBot(token, exception_handler=SKYExceptionHandler)
self.cleos = CLEOS(None, None, url=node_url, remote=node_url)
self.hyperion = HyperionAPI(hyperion_url)
self.ipfs_node = AsyncIPFSHTTP(ipfs_node)
self._async_exit_stack = AsyncExitStack()
async def start(self):
if self.remote_ipfs_node:
await self.ipfs_node.connect(self.remote_ipfs_node)
self.db_call = await self._async_exit_stack.enter_async_context(
open_database_connection(
self.db_user, self.db_pass, self.db_host))
create_handler_context(self)
async def stop(self):
await self._async_exit_stack.aclose()
@acm
async def open(self):
await self.start()
yield self
await self.stop()
async def update_status_message(
self, status_msg, new_text: str, **kwargs
):
await self.db_call(
'update_user_request_by_sid', status_msg.id, new_text)
return await self.bot.edit_message_text(
new_text,
chat_id=status_msg.chat.id,
message_id=status_msg.id,
**kwargs
)
async def append_status_message(
self, status_msg, add_text: str, **kwargs
):
request = await self.db_call('get_user_request_by_sid', status_msg.id)
await self.update_status_message(
status_msg,
request['status'] + add_text,
**kwargs
)
async def work_request(
self,
user,
status_msg,
method: str,
params: dict,
file_id: str | None = None,
binary_data: str = ''
) -> bool:
if params['seed'] == None:
params['seed'] = random.randint(0, 0xFFFFFFFF)
sanitized_params = {}
for key, val in params.items():
if isinstance(val, Decimal):
val = str(val)
sanitized_params[key] = val
body = json.dumps({
'method': 'diffuse',
'params': sanitized_params
})
request_time = datetime.now().isoformat()
await self.update_status_message(
status_msg,
f'processing a \'{method}\' request by {tg_user_pretty(user)}\n'
f'[{timestamp_pretty()}] <i>broadcasting transaction to chain...</i>',
parse_mode='HTML'
)
reward = '20.0000 GPU'
res = await self.cleos.a_push_action(
'telos.gpu',
'enqueue',
{
'user': Name(self.account),
'request_body': body,
'binary_data': binary_data,
'reward': asset_from_str(reward),
'min_verification': 1
},
self.account, self.key, permission=self.permission
)
if 'code' in res or 'statusCode' in res:
logging.error(json.dumps(res, indent=4))
await self.update_status_message(
status_msg,
'skynet has suffered an internal error trying to fill this request')
return False
enqueue_tx_id = res['transaction_id']
enqueue_tx_link = hlink(
'Your request on Skynet Explorer',
f'https://{self.explorer_domain}/v2/explore/transaction/{enqueue_tx_id}'
)
await self.append_status_message(
status_msg,
f' <b>broadcasted!</b>\n'
f'<b>{enqueue_tx_link}</b>\n'
f'[{timestamp_pretty()}] <i>workers are processing request...</i>',
parse_mode='HTML'
)
out = collect_stdout(res)
request_id, nonce = out.split(':')
request_hash = sha256(
(nonce + body + binary_data).encode('utf-8')).hexdigest().upper()
request_id = int(request_id)
logging.info(f'{request_id} enqueued.')
tx_hash = None
ipfs_hash = None
for i in range(60):
try:
submits = await self.hyperion.aget_actions(
account=self.account,
filter='telos.gpu:submit',
sort='desc',
after=request_time
)
actions = [
action
for action in submits['actions']
if action[
'act']['data']['request_hash'] == request_hash
]
if len(actions) > 0:
tx_hash = actions[0]['trx_id']
data = actions[0]['act']['data']
ipfs_hash = data['ipfs_hash']
worker = data['worker']
logging.info('Found matching submit!')
break
except JSONDecodeError:
logging.error(f'network error while getting actions, retry..')
await asyncio.sleep(1)
if not ipfs_hash:
await self.update_status_message(
status_msg,
f'\n[{timestamp_pretty()}] <b>timeout processing request</b>',
parse_mode='HTML'
)
return False
tx_link = hlink(
'Your result on Skynet Explorer',
f'https://{self.explorer_domain}/v2/explore/transaction/{tx_hash}'
)
await self.append_status_message(
status_msg,
f' <b>request processed!</b>\n'
f'<b>{tx_link}</b>\n'
f'[{timestamp_pretty()}] <i>trying to download image...</i>\n',
parse_mode='HTML'
)
caption = generate_reply_caption(
user, params, tx_hash, worker, reward, self.explorer_domain)
# attempt to get the image and send it
results = {}
ipfs_link = f'https://{self.ipfs_domain}/ipfs/{ipfs_hash}'
ipfs_link_legacy = ipfs_link + '/image.png'
async def get_and_set_results(link: str):
res = await get_ipfs_file(link)
logging.info(f'got response from {link}')
if not res or res.status_code != 200:
logging.warning(f'couldn\'t get ipfs binary data at {link}!')
else:
try:
with Image.open(io.BytesIO(res.raw)) as image:
w, h = image.size
if w > TG_MAX_WIDTH or h > TG_MAX_HEIGHT:
logging.warning(f'result is of size {image.size}')
image.thumbnail((TG_MAX_WIDTH, TG_MAX_HEIGHT))
tmp_buf = io.BytesIO()
image.save(tmp_buf, format='PNG')
png_img = tmp_buf.getvalue()
results[link] = png_img
except UnidentifiedImageError:
logging.warning(f'couldn\'t get ipfs binary data at {link}!')
tasks = [
get_and_set_results(ipfs_link),
get_and_set_results(ipfs_link_legacy)
]
await asyncio.gather(*tasks)
png_img = None
if ipfs_link_legacy in results:
png_img = results[ipfs_link_legacy]
if ipfs_link in results:
png_img = results[ipfs_link]
if not png_img:
await self.update_status_message(
status_msg,
caption,
reply_markup=build_redo_menu(),
parse_mode='HTML'
)
return True
logging.info(f'success! sending generated image')
await self.bot.delete_message(
chat_id=status_msg.chat.id, message_id=status_msg.id)
if file_id: # img2img
await self.bot.send_media_group(
status_msg.chat.id,
media=[
InputMediaPhoto(file_id),
InputMediaPhoto(
png_img,
caption=caption,
parse_mode='HTML'
)
],
)
else: # txt2img
await self.bot.send_photo(
status_msg.chat.id,
caption=caption,
photo=png_img,
reply_markup=build_redo_menu(),
parse_mode='HTML'
)
return True
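
Worker submissions are matched back to the enqueue transaction by recomputing the request hash used above: sha256 over nonce + request body + binary data, upper-cased hex, compared against the request_hash field of telos.gpu:submit actions. A stand-alone sketch of that convention, with made-up example values:

import json
from hashlib import sha256

def compute_request_hash(nonce: str, body: str, binary_data: str = '') -> str:
    # same digest the frontend compares against telos.gpu:submit action data
    return sha256((nonce + body + binary_data).encode('utf-8')).hexdigest().upper()

# example values are hypothetical
body = json.dumps({'method': 'diffuse', 'params': {'prompt': 'a red old tractor'}})
print(compute_request_hash('42', body))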


@ -0,0 +1,367 @@
#!/usr/bin/python
import io
import json
import logging
from datetime import datetime, timedelta
from PIL import Image
from telebot.types import CallbackQuery, Message
from skynet.frontend import validate_user_config_request, perform_auto_conf
from skynet.constants import *
def create_handler_context(frontend: 'SkynetTelegramFrontend'):
bot = frontend.bot
cleos = frontend.cleos
db_call = frontend.db_call
work_request = frontend.work_request
ipfs_node = frontend.ipfs_node
# generic / simple handlers
@bot.message_handler(commands=['help'])
async def send_help(message):
splt_msg = message.text.split(' ')
if len(splt_msg) == 1:
await bot.reply_to(message, HELP_TEXT)
else:
param = splt_msg[1]
if param in HELP_TOPICS:
await bot.reply_to(message, HELP_TOPICS[param])
else:
await bot.reply_to(message, HELP_UNKWNOWN_PARAM)
@bot.message_handler(commands=['cool'])
async def send_cool_words(message):
await bot.reply_to(message, '\n'.join(COOL_WORDS))
@bot.message_handler(commands=['queue'])
async def queue(message):
an_hour_ago = datetime.now() - timedelta(hours=1)
queue = await cleos.aget_table(
'telos.gpu', 'telos.gpu', 'queue',
index_position=2,
key_type='i64',
sort='desc',
lower_bound=int(an_hour_ago.timestamp())
)
await bot.reply_to(
message, f'Total requests on skynet queue: {len(queue)}')
@bot.message_handler(commands=['config'])
async def set_config(message):
user = message.from_user.id
try:
attr, val, reply_txt = validate_user_config_request(
message.text)
logging.info(f'user config update: {attr} to {val}')
await db_call('update_user_config', user, attr, val)
logging.info('done')
except BaseException as e:
reply_txt = str(e)
finally:
await bot.reply_to(message, reply_txt)
@bot.message_handler(commands=['stats'])
async def user_stats(message):
user = message.from_user.id
await db_call('get_or_create_user', user)
generated, joined, role = await db_call('get_user_stats', user)
stats_str = f'generated: {generated}\n'
stats_str += f'joined: {joined}\n'
stats_str += f'role: {role}\n'
await bot.reply_to(
message, stats_str)
@bot.message_handler(commands=['donate'])
async def donation_info(message):
await bot.reply_to(
message, DONATION_INFO)
@bot.message_handler(commands=['say'])
async def say(message):
chat = message.chat
user = message.from_user
if (chat.type == 'group') or (user.id != 383385940):
return
await bot.send_message(GROUP_ID, message.text[4:])
# generic txt2img handler
async def _generic_txt2img(message_or_query):
if isinstance(message_or_query, CallbackQuery):
query = message_or_query
message = query.message
user = query.from_user
chat = query.message.chat
else:
message = message_or_query
user = message.from_user
chat = message.chat
if chat.type == 'private':
return
reply_id = None
if chat.type == 'group' and chat.id == GROUP_ID:
reply_id = message.message_id
user_row = await db_call('get_or_create_user', user.id)
# init new msg
init_msg = 'started processing txt2img request...'
status_msg = await bot.reply_to(message, init_msg)
await db_call(
'new_user_request', user.id, message.id, status_msg.id, status=init_msg)
prompt = ' '.join(message.text.split(' ')[1:])
if len(prompt) == 0:
await bot.edit_message_text(
'Empty text prompt ignored.',
chat_id=status_msg.chat.id,
message_id=status_msg.id
)
await db_call('update_user_request', status_msg.id, 'Empty text prompt ignored.')
return
logging.info(f'mid: {message.id}')
user_config = {**user_row}
del user_config['id']
if user_config['autoconf']:
user_config = perform_auto_conf(user_config)
params = {
'prompt': prompt,
**user_config
}
await db_call(
'update_user_stats', user.id, 'txt2img', last_prompt=prompt)
success = await work_request(user, status_msg, 'txt2img', params)
if success:
await db_call('increment_generated', user.id)
# generic img2img handler
async def _generic_img2img(message_or_query):
if isinstance(message_or_query, CallbackQuery):
query = message_or_query
message = query.message
user = query.from_user
chat = query.message.chat
else:
message = message_or_query
user = message.from_user
chat = message.chat
if chat.type == 'private':
return
reply_id = None
if chat.type == 'group' and chat.id == GROUP_ID:
reply_id = message.message_id
user_row = await db_call('get_or_create_user', user.id)
# init new msg
init_msg = 'started processing txt2img request...'
status_msg = await bot.reply_to(message, init_msg)
await db_call(
'new_user_request', user.id, message.id, status_msg.id, status=init_msg)
if not message.caption.startswith('/img2img'):
await bot.reply_to(
message,
'For image to image you need to add /img2img to the beggining of your caption'
)
return
prompt = ' '.join(message.caption.split(' ')[1:])
if len(prompt) == 0:
await bot.reply_to(message, 'Empty text prompt ignored.')
return
file_id = message.photo[-1].file_id
file_path = (await bot.get_file(file_id)).file_path
image_raw = await bot.download_file(file_path)
user_config = {**user_row}
del user_config['id']
if user_config['autoconf']:
user_config = perform_auto_conf(user_config)
with Image.open(io.BytesIO(image_raw)) as image:
w, h = image.size
if w > user_config['width'] or h > user_config['height']:
logging.warning(f'user sent img of size {image.size}')
image.thumbnail(
(user_config['width'], user_config['height']))
logging.warning(f'resized it to {image.size}')
image_loc = 'ipfs-staging/image.png'
image.save(image_loc, format='PNG')
ipfs_info = await ipfs_node.add(image_loc)
ipfs_hash = ipfs_info['Hash']
await ipfs_node.pin(ipfs_hash)
logging.info(f'published input image {ipfs_hash} on ipfs')
logging.info(f'mid: {message.id}')
params = {
'prompt': prompt,
**user_config
}
await db_call(
'update_user_stats',
user.id,
'img2img',
last_file=file_id,
last_prompt=prompt,
last_binary=ipfs_hash
)
success = await work_request(
user, status_msg, 'img2img', params,
file_id=file_id,
binary_data=ipfs_hash
)
if success:
await db_call('increment_generated', user.id)
# generic redo handler
async def _redo(message_or_query):
is_query = False
if isinstance(message_or_query, CallbackQuery):
is_query = True
query = message_or_query
message = query.message
user = query.from_user
chat = query.message.chat
elif isinstance(message_or_query, Message):
message = message_or_query
user = message.from_user
chat = message.chat
if chat.type == 'private':
return
init_msg = 'started processing redo request...'
if is_query:
status_msg = await bot.send_message(chat.id, init_msg)
else:
status_msg = await bot.reply_to(message, init_msg)
method = await db_call('get_last_method_of', user.id)
prompt = await db_call('get_last_prompt_of', user.id)
file_id = None
binary = ''
if method == 'img2img':
file_id = await db_call('get_last_file_of', user.id)
binary = await db_call('get_last_binary_of', user.id)
if not prompt:
await bot.reply_to(
message,
'no last prompt found, do a txt2img cmd first!'
)
return
user_row = await db_call('get_or_create_user', user.id)
await db_call(
'new_user_request', user.id, message.id, status_msg.id, status=init_msg)
user_config = {**user_row}
del user_config['id']
if user_config['autoconf']:
user_config = perform_auto_conf(user_config)
params = {
'prompt': prompt,
**user_config
}
success = await work_request(
user, status_msg, 'redo', params,
file_id=file_id,
binary_data=binary
)
if success:
await db_call('increment_generated', user.id)
# "proxy" handlers just request routers
@bot.message_handler(commands=['txt2img'])
async def send_txt2img(message):
await _generic_txt2img(message)
@bot.message_handler(func=lambda message: True, content_types=[
'photo', 'document'])
async def send_img2img(message):
await _generic_img2img(message)
@bot.message_handler(commands=['img2img'])
async def img2img_missing_image(message):
await bot.reply_to(
message,
'seems you tried to do an img2img command without sending image'
)
@bot.message_handler(commands=['redo'])
async def redo(message):
await _redo(message)
@bot.callback_query_handler(func=lambda call: True)
async def callback_query(call):
msg = json.loads(call.data)
logging.info(call.data)
method = msg.get('method')
match method:
case 'redo':
await _redo(call)
# catch all handler for things we dont support
@bot.message_handler(func=lambda message: True)
async def echo_message(message):
if message.text[0] == '/':
await bot.reply_to(message, UNKNOWN_CMD_TEXT)


@ -0,0 +1,107 @@
#!/usr/bin/python
import json
import logging
import traceback
from datetime import datetime, timezone
from telebot.types import InlineKeyboardButton, InlineKeyboardMarkup
from telebot.async_telebot import ExceptionHandler
from telebot.formatting import hlink
from skynet.constants import *
def timestamp_pretty():
return datetime.now(timezone.utc).strftime('%H:%M:%S')
def tg_user_pretty(tguser):
if tguser.username:
return f'@{tguser.username}'
else:
return f'{tguser.first_name} id: {tguser.id}'
class SKYExceptionHandler(ExceptionHandler):
def handle(exception):
traceback.print_exc()
def build_redo_menu():
btn_redo = InlineKeyboardButton("Redo", callback_data=json.dumps({'method': 'redo'}))
inline_keyboard = InlineKeyboardMarkup()
inline_keyboard.add(btn_redo)
return inline_keyboard
def prepare_metainfo_caption(tguser, worker: str, reward: str, meta: dict) -> str:
prompt = meta["prompt"]
if len(prompt) > 256:
prompt = prompt[:256]
meta_str = f'<u>by {tg_user_pretty(tguser)}</u>\n'
meta_str += f'<i>performed by {worker}</i>\n'
meta_str += f'<b><u>reward: {reward}</u></b>\n'
meta_str += f'<code>prompt:</code> {prompt}\n'
meta_str += f'<code>seed: {meta["seed"]}</code>\n'
meta_str += f'<code>step: {meta["step"]}</code>\n'
meta_str += f'<code>guidance: {meta["guidance"]}</code>\n'
if meta['strength']:
meta_str += f'<code>strength: {meta["strength"]}</code>\n'
meta_str += f'<code>algo: {meta["model"]}</code>\n'
if meta['upscaler']:
meta_str += f'<code>upscaler: {meta["upscaler"]}</code>\n'
meta_str += f'<b><u>Made with Skynet v{VERSION}</u></b>\n'
meta_str += f'<b>JOIN THE SWARM: @skynetgpu</b>'
return meta_str
def generate_reply_caption(
tguser, # telegram user
params: dict,
tx_hash: str,
worker: str,
reward: str,
explorer_domain: str
):
explorer_link = hlink(
'SKYNET Transaction Explorer',
f'https://explorer.{explorer_domain}/v2/explore/transaction/{tx_hash}'
)
meta_info = prepare_metainfo_caption(tguser, worker, reward, params)
final_msg = '\n'.join([
'Worker finished your task!',
explorer_link,
f'PARAMETER INFO:\n{meta_info}'
])
final_msg = '\n'.join([
f'<b><i>{explorer_link}</i></b>',
f'{meta_info}'
])
logging.info(final_msg)
return final_msg
async def get_global_config(cleos):
return (await cleos.aget_table(
'telos.gpu', 'telos.gpu', 'config'))[0]
async def get_user_nonce(cleos, user: str):
return (await cleos.aget_table(
'telos.gpu', 'telos.gpu', 'users',
index_position=1,
key_type='name',
lower_bound=user,
upper_bound=user
))[0]['nonce']


@ -0,0 +1,76 @@
#!/usr/bin/python
import logging
from pathlib import Path
import asks
class IPFSClientException(BaseException):
...
class AsyncIPFSHTTP:
def __init__(self, endpoint: str):
self.endpoint = endpoint
async def _post(self, sub_url: str, *args, **kwargs):
resp = await asks.post(
self.endpoint + sub_url,
*args, **kwargs
)
if resp.status_code != 200:
raise IPFSClientException(resp.text)
return resp.json()
async def add(self, file_path: Path, **kwargs):
files = {
'file': file_path
}
return await self._post(
'/api/v0/add',
files=files,
params=kwargs
)
async def pin(self, cid: str):
return (await self._post(
'/api/v0/pin/add',
params={'arg': cid}
))['Pins']
async def connect(self, multi_addr: str):
return await self._post(
'/api/v0/swarm/connect',
params={'arg': multi_addr}
)
async def peers(self, **kwargs):
return (await self._post(
'/api/v0/swarm/peers',
params=kwargs
))['Peers']
async def get_ipfs_file(ipfs_link: str, timeout: int = 60):
logging.info(f'attempting to get image at {ipfs_link}')
resp = None
for i in range(timeout):
try:
resp = await asks.get(ipfs_link, timeout=3)
except asks.errors.RequestTimeout:
logging.warning('timeout...')
except asks.errors.BadHttpResponse as e:
logging.error(f'ifps gateway exception: \n{e}')
if resp:
logging.info(f'status_code: {resp.status_code}')
else:
logging.error(f'timeout')
return resp
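
A minimal usage sketch for the client above, assuming a local go-ipfs daemon with its API on 127.0.0.1:5001 (the port the dockerised node below publishes) and asks running under trio as in the rest of the code base:

import trio
from skynet.ipfs import AsyncIPFSHTTP

async def publish(path: str) -> str:
    ipfs = AsyncIPFSHTTP('http://127.0.0.1:5001')  # endpoint is an assumption
    info = await ipfs.add(path)                    # daemon's add response: {'Name': ..., 'Hash': ..., 'Size': ...}
    await ipfs.pin(info['Hash'])                   # pin so the CID survives garbage collection
    return info['Hash']

if __name__ == '__main__':
    print(trio.run(publish, 'image.png'))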


@ -0,0 +1,69 @@
#!/usr/bin/python
import sys
import logging
from pathlib import Path
from contextlib import contextmanager as cm
import docker
from docker.types import Mount
@cm
def open_ipfs_node(
name: str = 'skynet-ipfs',
teardown: bool = False,
peers: list[str] = []
):
dclient = docker.from_env()
container = None
try:
container = dclient.containers.get(name)
except docker.errors.NotFound:
data_dir = Path().resolve() / 'ipfs-docker-data'
data_dir.mkdir(parents=True, exist_ok=True)
data_target = '/data/ipfs'
container = dclient.containers.run(
'ipfs/go-ipfs:latest',
name='skynet-ipfs',
ports={
'8080/tcp': 8080,
'4001/tcp': 4001,
'5001/tcp': ('127.0.0.1', 5001)
},
mounts=[
Mount(data_target, str(data_dir), 'bind')
],
detach=True,
remove=True
)
uid, gid = 1000, 1000
if sys.platform != 'win32':
ec, out = container.exec_run(['chown', f'{uid}:{gid}', '-R', data_target])
logging.info(out)
assert ec == 0
for log in container.logs(stream=True):
log = log.decode().rstrip()
logging.info(log)
if 'Daemon is ready' in log:
break
for peer in peers:
ec, out = container.exec_run(
['ipfs', 'swarm', 'connect', peer])
if ec != 0:
logging.error(out)
yield
if teardown and container:
container.stop()
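
A sketch of running the dockerised node for local development; the import path and the peer multiaddr are assumptions made for illustration:

from skynet.ipfs.docker import open_ipfs_node  # import path assumed

if __name__ == '__main__':
    # bring up skynet-ipfs (ports 4001/5001/8080), optionally dialing a known peer
    with open_ipfs_node(teardown=True, peers=['/dns/ipfs.example.org/tcp/4001/p2p/QmPeerId']):
        input('node is ready, press enter to stop it...')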


@ -0,0 +1,128 @@
#!/usr/bin/python
import logging
import traceback
from datetime import datetime, timedelta
import trio
from leap.hyperion import HyperionAPI
from . import AsyncIPFSHTTP
MAX_TIME = timedelta(seconds=20)
class SkynetPinner:
def __init__(
self,
hyperion: HyperionAPI,
ipfs_http: AsyncIPFSHTTP
):
self.hyperion = hyperion
self.ipfs_http = ipfs_http
self._pinned = {}
self._now = datetime.now()
def is_pinned(self, cid: str):
pin_time = self._pinned.get(cid)
return pin_time
def pin_cids(self, cids: list[str]):
for cid in cids:
self._pinned[cid] = self._now
def cleanup_old_cids(self):
cids = list(self._pinned.keys())
for cid in cids:
if (self._now - self._pinned[cid]) > MAX_TIME * 2:
del self._pinned[cid]
async def capture_enqueues(self, after: datetime):
enqueues = await self.hyperion.aget_actions(
account='telos.gpu',
filter='telos.gpu:enqueue',
sort='desc',
after=after.isoformat(),
limit=1000
)
logging.info(f'got {len(enqueues["actions"])} enqueue actions.')
cids = []
for action in enqueues['actions']:
cid = action['act']['data']['binary_data']
if cid and not self.is_pinned(cid):
cids.append(cid)
return cids
async def capture_submits(self, after: datetime):
submits = await self.hyperion.aget_actions(
account='telos.gpu',
filter='telos.gpu:submit',
sort='desc',
after=after.isoformat(),
limit=1000
)
logging.info(f'got {len(submits["actions"])} submits actions.')
cids = []
for action in submits['actions']:
cid = action['act']['data']['ipfs_hash']
if cid and not self.is_pinned(cid):
cids.append(cid)
return cids
async def task_pin(self, cid: str):
logging.info(f'pinning {cid}...')
for _ in range(6):
try:
with trio.move_on_after(5):
pins = await self.ipfs_http.pin(cid)
if cid not in pins:
logging.error(f'error pinning {cid}')
del self._pinned[cid]
else:
logging.info(f'pinned {cid}')
return
except trio.TooSlowError:
logging.error(f'timed out pinning {cid}')
logging.error(f'gave up pinning {cid}')
async def pin_forever(self):
async with trio.open_nursery() as n:
while True:
try:
self._now = datetime.now()
self.cleanup_old_cids()
prev_second = self._now - MAX_TIME
cids = [
*(await self.capture_enqueues(prev_second)),
*(await self.capture_submits(prev_second))
]
self.pin_cids(cids)
for cid in cids:
n.start_soon(self.task_pin, cid)
except OSError as e:
traceback.print_exc()
except KeyboardInterrupt:
break
await trio.sleep(1)
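
The pinner is a stand-alone service: once a second it asks Hyperion for enqueue and submit actions from the last MAX_TIME window and pins any CID it has not pinned recently. A wiring sketch, with both endpoints and the pinner's import path as placeholders:

import trio
from leap.hyperion import HyperionAPI
from skynet.ipfs import AsyncIPFSHTTP
from skynet.ipfs.pinner import SkynetPinner  # module path assumed

async def main():
    hyperion = HyperionAPI('https://hyperion.example.org')  # placeholder endpoint
    ipfs = AsyncIPFSHTTP('http://127.0.0.1:5001')           # placeholder endpoint
    await SkynetPinner(hyperion, ipfs).pin_forever()

if __name__ == '__main__':
    trio.run(main)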


@ -1,33 +0,0 @@
class ModelStore:
def __init__(
self,
max_models: int = 2
):
self.max_models = max_models
self._models = {}
def get(self, model_name: str):
if model_name in self._models:
return self._models[model_name]['pipe']
if len(self._models) == max_models:
least_used = list(self._models.keys())[0]
for model in self._models:
if self._models[least_used]['generated'] > self._models[model]['generated']:
least_used = model
del self._models[least_used]
gc.collect()
pipe = pipeline_for(model_name)
self._models[model_name] = {
'pipe': pipe,
'generated': 0
}
return pipe

145 skynet/nodeos.py 100644

@ -0,0 +1,145 @@
#!/usr/bin/env python3
import json
import time
import logging
from contextlib import contextmanager as cm
import docker
from leap.cleos import CLEOS
from leap.sugar import get_container, Symbol
@cm
def open_nodeos(cleanup: bool = True):
dclient = docker.from_env()
vtestnet = get_container(
dclient,
'guilledk/skynet:leap-4.0.1',
name='skynet-nodeos',
force_unique=True,
detach=True,
network='host')
try:
cleos = CLEOS(
dclient, vtestnet,
url='http://127.0.0.1:42000',
remote='http://127.0.0.1:42000'
)
cleos.start_keosd()
priv, pub = cleos.create_key_pair()
logging.info(f'SUDO KEYS: {(priv, pub)}')
cleos.setup_wallet(priv)
genesis = json.dumps({
"initial_timestamp": '2017-08-29T02:14:00.000',
"initial_key": pub,
"initial_configuration": {
"max_block_net_usage": 1048576,
"target_block_net_usage_pct": 1000,
"max_transaction_net_usage": 1048575,
"base_per_transaction_net_usage": 12,
"net_usage_leeway": 500,
"context_free_discount_net_usage_num": 20,
"context_free_discount_net_usage_den": 100,
"max_block_cpu_usage": 200000,
"target_block_cpu_usage_pct": 1000,
"max_transaction_cpu_usage": 150000,
"min_transaction_cpu_usage": 100,
"max_transaction_lifetime": 3600,
"deferred_trx_expiration_window": 600,
"max_transaction_delay": 3888000,
"max_inline_action_size": 4096,
"max_inline_action_depth": 4,
"max_authority_depth": 6
}
}, indent=4)
ec, out = cleos.run(
['bash', '-c', f'echo \'{genesis}\' > /root/skynet.json'])
assert ec == 0
place_holder = 'EOS5fLreY5Zq5owBhmNJTgQaLqQ4ufzXSTpStQakEyfxNFuUEgNs1=KEY:5JnvSc6pewpHHuUHwvbJopsew6AKwiGnexwDRc2Pj2tbdw6iML9'
sig_provider = f'{pub}=KEY:{priv}'
nodeos_config_ini = '/root/nodeos/config.ini'
ec, out = cleos.run(
['bash', '-c', f'sed -i -e \'s/{place_holder}/{sig_provider}/g\' {nodeos_config_ini}'])
assert ec == 0
cleos.start_nodeos_from_config(
nodeos_config_ini,
data_dir='/root/nodeos/data',
genesis='/root/skynet.json',
state_plugin=True)
time.sleep(0.5)
cleos.wait_blocks(1)
cleos.boot_sequence(token_sym=Symbol('GPU', 4))
priv, pub = cleos.create_key_pair()
cleos.import_key(priv)
cleos.private_keys['telos.gpu'] = priv
logging.info(f'GPU KEYS: {(priv, pub)}')
cleos.new_account('telos.gpu', ram=4200000, key=pub)
for i in range(1, 4):
priv, pub = cleos.create_key_pair()
cleos.import_key(priv)
cleos.private_keys[f'testworker{i}'] = priv
logging.info(f'testworker{i} KEYS: {(priv, pub)}')
cleos.create_account_staked(
'eosio', f'testworker{i}', key=pub)
priv, pub = cleos.create_key_pair()
cleos.import_key(priv)
logging.info(f'TELEGRAM KEYS: {(priv, pub)}')
cleos.create_account_staked(
'eosio', 'telegram', ram=500000, key=pub)
cleos.transfer_token(
'eosio', 'telegram', '1000000.0000 GPU', 'Initial testing funds')
cleos.deploy_contract_from_host(
'telos.gpu',
'tests/contracts/telos.gpu',
verify_hash=False,
create_account=False
)
ec, out = cleos.push_action(
'telos.gpu',
'config',
['eosio.token', '4,GPU'],
f'telos.gpu@active'
)
assert ec == 0
ec, out = cleos.transfer_token(
'telegram', 'telos.gpu', '1000000.0000 GPU', 'Initial testing funds')
assert ec == 0
user_row = cleos.get_table(
'telos.gpu',
'telos.gpu',
'users',
index_position=1,
key_type='name',
lower_bound='telegram',
upper_bound='telegram'
)
assert len(user_row) == 1
yield cleos
finally:
# ec, out = cleos.list_all_keys()
# logging.info(out)
if cleanup:
vtestnet.stop()
vtestnet.remove()
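
A sketch of how a test could lean on this context manager to get a throw-away chain with the telos.gpu contract deployed; the assertion is illustrative, not taken from the repo's test suite:

from skynet.nodeos import open_nodeos

def test_chain_boots():
    with open_nodeos(cleanup=True) as cleos:
        # the boot sequence above pushes a 'config' action, so the singleton
        # table should hold a row once the chain is up
        rows = cleos.get_table('telos.gpu', 'telos.gpu', 'config')
        assert len(rows) == 1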


@ -1,29 +0,0 @@
#!/usr/bin/python
from typing import Optional
from dataclasses import dataclass, asdict
from google.protobuf.json_format import MessageToDict
from .auth import *
from .skynet_pb2 import *
class Struct:
def to_dict(self):
return asdict(self)
@dataclass
class DiffusionParameters(Struct):
algo: str
prompt: str
step: int
width: int
height: int
guidance: float
strength: float
seed: Optional[int]
image: bool # if true indicates a bytestream is next msg
upscaler: Optional[str]


@ -1,65 +0,0 @@
#!/usr/bin/python
import json
import logging
from hashlib import sha256
from collections import OrderedDict
from google.protobuf.json_format import MessageToDict
from OpenSSL.crypto import PKey, X509, verify, sign
from .skynet_pb2 import *
def serialize_msg_deterministic(msg):
descriptors = sorted(
type(msg).DESCRIPTOR.fields_by_name.items(),
key=lambda x: x[0]
)
shasum = sha256()
def hash_dict(d):
data = [
(key, val)
for (key, val) in d.items()
]
for key, val in sorted(data, key=lambda x: x[0]):
if not isinstance(val, dict):
shasum.update(key.encode())
shasum.update(json.dumps(val).encode())
else:
hash_dict(val)
for (field_name, field_descriptor) in descriptors:
if not field_descriptor.message_type:
shasum.update(field_name.encode())
value = getattr(msg, field_name)
if isinstance(value, bytes):
value = value.hex()
shasum.update(json.dumps(value).encode())
continue
if field_descriptor.message_type.name == 'Struct':
hash_dict(MessageToDict(getattr(msg, field_name)))
deterministic_msg = shasum.hexdigest()
return deterministic_msg
def sign_protobuf_msg(msg, key: PKey):
return sign(
key, serialize_msg_deterministic(msg), 'sha256').hex()
def verify_protobuf_msg(msg, cert: X509):
return verify(
cert,
bytes.fromhex(msg.auth.sig),
serialize_msg_deterministic(msg),
'sha256'
)


@ -1,30 +0,0 @@
syntax = "proto3";
package skynet;
import "google/protobuf/struct.proto";
message Auth {
string cert = 1;
string sig = 2;
}
message SkynetRPCRequest {
string uid = 1;
string method = 2;
google.protobuf.Struct params = 3;
optional Auth auth = 4;
}
message SkynetRPCResponse {
google.protobuf.Struct result = 1;
optional Auth auth = 2;
}
message DGPUBusMessage {
string rid = 1;
string nid = 2;
string method = 3;
google.protobuf.Struct params = 4;
optional Auth auth = 5;
}


@ -1,32 +0,0 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: skynet.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf import struct_pb2 as google_dot_protobuf_dot_struct__pb2
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x0cskynet.proto\x12\x06skynet\x1a\x1cgoogle/protobuf/struct.proto\"!\n\x04\x41uth\x12\x0c\n\x04\x63\x65rt\x18\x01 \x01(\t\x12\x0b\n\x03sig\x18\x02 \x01(\t\"\x82\x01\n\x10SkynetRPCRequest\x12\x0b\n\x03uid\x18\x01 \x01(\t\x12\x0e\n\x06method\x18\x02 \x01(\t\x12\'\n\x06params\x18\x03 \x01(\x0b\x32\x17.google.protobuf.Struct\x12\x1f\n\x04\x61uth\x18\x04 \x01(\x0b\x32\x0c.skynet.AuthH\x00\x88\x01\x01\x42\x07\n\x05_auth\"f\n\x11SkynetRPCResponse\x12\'\n\x06result\x18\x01 \x01(\x0b\x32\x17.google.protobuf.Struct\x12\x1f\n\x04\x61uth\x18\x02 \x01(\x0b\x32\x0c.skynet.AuthH\x00\x88\x01\x01\x42\x07\n\x05_auth\"\x8d\x01\n\x0e\x44GPUBusMessage\x12\x0b\n\x03rid\x18\x01 \x01(\t\x12\x0b\n\x03nid\x18\x02 \x01(\t\x12\x0e\n\x06method\x18\x03 \x01(\t\x12\'\n\x06params\x18\x04 \x01(\x0b\x32\x17.google.protobuf.Struct\x12\x1f\n\x04\x61uth\x18\x05 \x01(\x0b\x32\x0c.skynet.AuthH\x00\x88\x01\x01\x42\x07\n\x05_authb\x06proto3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'skynet_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
_AUTH._serialized_start=54
_AUTH._serialized_end=87
_SKYNETRPCREQUEST._serialized_start=90
_SKYNETRPCREQUEST._serialized_end=220
_SKYNETRPCRESPONSE._serialized_start=222
_SKYNETRPCRESPONSE._serialized_end=324
_DGPUBUSMESSAGE._serialized_start=327
_DGPUBUSMESSAGE._serialized_end=468
# @@protoc_insertion_point(module_scope)


@ -1,148 +0,0 @@
# piker: trading gear for hackers
# Copyright (C) Guillermo Rodriguez (in stewardship for piker0)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""
Built-in (extension) types.
"""
import sys
import json
from typing import Optional, Union
from pprint import pformat
import msgspec
class Struct(msgspec.Struct):
'''
A "human friendlier" (aka repl buddy) struct subtype.
'''
def to_dict(self) -> dict:
return {
f: getattr(self, f)
for f in self.__struct_fields__
}
def __repr__(self):
# only turn on pprint when we detect a python REPL
# at runtime B)
if (
hasattr(sys, 'ps1')
# TODO: check if we're in pdb
):
return self.pformat()
return super().__repr__()
def pformat(self) -> str:
return f'Struct({pformat(self.to_dict())})'
def copy(
self,
update: Optional[dict] = None,
) -> msgspec.Struct:
'''
Validate-typecast all self defined fields, return a copy of us
with all such fields.
This is kinda like the default behaviour in `pydantic.BaseModel`.
'''
if update:
for k, v in update.items():
setattr(self, k, v)
# roundtrip serialize to validate
return msgspec.msgpack.Decoder(
type=type(self)
).decode(
msgspec.msgpack.Encoder().encode(self)
)
def typecast(
self,
# fields: Optional[list[str]] = None,
) -> None:
for fname, ftype in self.__annotations__.items():
setattr(self, fname, ftype(getattr(self, fname)))
# proto
from OpenSSL.crypto import PKey, X509, verify, sign
class AuthenticatedStruct(Struct, kw_only=True):
cert: Optional[str] = None
sig: Optional[str] = None
def to_unsigned_dict(self) -> dict:
self_dict = self.to_dict()
if 'sig' in self_dict:
del self_dict['sig']
if 'cert' in self_dict:
del self_dict['cert']
return self_dict
def unsigned_to_bytes(self) -> bytes:
return json.dumps(
self.to_unsigned_dict()).encode()
def sign(self, key: PKey, cert: str):
self.cert = cert
self.sig = sign(
key, self.unsigned_to_bytes(), 'sha256').hex()
def verify(self, cert: X509):
if not self.sig:
raise ValueError('Tried to verify unsigned request')
return verify(
cert, bytes.fromhex(self.sig), self.unsigned_to_bytes(), 'sha256')
class SkynetRPCRequest(AuthenticatedStruct):
uid: Union[str, int] # user unique id
method: str # rpc method name
params: dict # variable params
class SkynetRPCResponse(AuthenticatedStruct):
result: dict
class ImageGenRequest(Struct):
prompt: str
step: int
width: int
height: int
guidance: int
seed: Optional[int]
algo: str
upscaler: Optional[str]
class DGPUBusRequest(AuthenticatedStruct):
rid: str # req id
nid: str # node id
task: str
params: dict
class DGPUBusResponse(AuthenticatedStruct):
rid: str # req id
nid: str # node id
params: dict

164 skynet/utils.py 100644 → 100755

@ -1,9 +1,15 @@
 #!/usr/bin/python

+import io
+import os
+import sys
+import time
 import random
+import logging

 from typing import Optional
 from pathlib import Path

+import asks
 import torch
 import numpy as np

@ -11,14 +17,18 @ import numpy as np
 from PIL import Image
 from basicsr.archs.rrdbnet_arch import RRDBNet
 from diffusers import (
-    StableDiffusionPipeline,
-    StableDiffusionImg2ImgPipeline,
+    DiffusionPipeline,
     EulerAncestralDiscreteScheduler
 )
 from realesrgan import RealESRGANer
 from huggingface_hub import login
+import trio

-from .constants import ALGOS
+from .constants import MODELS
+
+
+def time_ms():
+    return int(time.time() * 1000)


 def convert_from_cv2_to_image(img: np.ndarray) -> Image:

@ -31,41 +41,96 @ def convert_from_image_to_cv2(img: Image) -> np.ndarray:
     return np.asarray(img)


-def pipeline_for(algo: str, mem_fraction: float = 1.0, image=False):
+def convert_from_bytes_to_img(raw: bytes) -> Image:
+    return Image.open(io.BytesIO(raw))
+
+
+def convert_from_img_to_bytes(image: Image, fmt='PNG') -> bytes:
+    byte_arr = io.BytesIO()
+    image.save(byte_arr, format=fmt)
+    return byte_arr.getvalue()
+
+
+def crop_image(image: Image, max_w: int, max_h: int) -> Image:
+    w, h = image.size
+    if w > max_w or h > max_h:
+        image.thumbnail((max_w, max_h))
+
+    return image.convert('RGB')
+
+
+def pipeline_for(
+    model: str,
+    mem_fraction: float = 1.0,
+    image: bool = False,
+    cache_dir: str | None = None
+) -> DiffusionPipeline:
     assert torch.cuda.is_available()
     torch.cuda.empty_cache()
-    torch.cuda.set_per_process_memory_fraction(mem_fraction)
     torch.backends.cuda.matmul.allow_tf32 = True
     torch.backends.cudnn.allow_tf32 = True

+    # full determinism
+    # https://huggingface.co/docs/diffusers/using-diffusers/reproducibility#deterministic-algorithms
+    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8"
+    torch.backends.cudnn.benchmark = False
+    torch.use_deterministic_algorithms(True)
+
+    model_info = MODELS[model]
+
+    req_mem = model_info['mem']
+    mem_gb = torch.cuda.mem_get_info()[1] / (10**9)
+    mem_gb *= mem_fraction
+    over_mem = mem_gb < req_mem
+    if over_mem:
+        logging.warn(f'model requires {req_mem} but card has {mem_gb}, model will run slower..')
+
+    shortname = model_info['short']
+
     params = {
+        'safety_checker': None,
         'torch_dtype': torch.float16,
-        'safety_checker': None
+        'cache_dir': cache_dir,
+        'variant': 'fp16'
     }

-    if algo == 'stable':
-        params['revision'] = 'fp16'
+    match shortname:
+        case 'stable':
+            params['revision'] = 'fp16'
+
+    torch.cuda.set_per_process_memory_fraction(mem_fraction)

-    if image:
-        pipe_class = StableDiffusionImg2ImgPipeline
-    else:
-        pipe_class = StableDiffusionPipeline
-
-    pipe = pipe_class.from_pretrained(
-        ALGOS[algo], **params)
+    pipe = DiffusionPipeline.from_pretrained(
+        model, **params)

     pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(
         pipe.scheduler.config)

+    pipe.enable_xformers_memory_efficient_attention()
+
+    if over_mem:
         if not image:
             pipe.enable_vae_slicing()
+            pipe.enable_vae_tiling()
+
+        pipe.enable_model_cpu_offload()

-    return pipe.to('cuda')
+    else:
+        if sys.version_info[1] < 11:
+            # torch.compile only supported on python < 3.11
+            pipe.unet = torch.compile(
+                pipe.unet, mode='reduce-overhead', fullgraph=True)
+
+        pipe = pipe.to('cuda')
+
+    return pipe


 def txt2img(
     hf_token: str,
-    model: str = 'midj',
+    model: str = 'prompthero/openjourney',
     prompt: str = 'a red old tractor in a sunny wheat field',
     output: str = 'output.png',
     width: int = 512, height: int = 512,

@ -73,12 +138,6 @ def txt2img(
     steps: int = 28,
     seed: Optional[int] = None
 ):
-    assert torch.cuda.is_available()
-    torch.cuda.empty_cache()
-    torch.cuda.set_per_process_memory_fraction(1.0)
-    torch.backends.cuda.matmul.allow_tf32 = True
-    torch.backends.cudnn.allow_tf32 = True
-
     login(token=hf_token)

     pipe = pipeline_for(model)

@ -97,7 +156,7 @ def txt2img(

 def img2img(
     hf_token: str,
-    model: str = 'midj',
+    model: str = 'prompthero/openjourney',
     prompt: str = 'a red old tractor in a sunny wheat field',
     img_path: str = 'input.png',
     output: str = 'output.png',

@ -106,16 +165,11 @ def img2img(
     steps: int = 28,
     seed: Optional[int] = None
 ):
-    assert torch.cuda.is_available()
-    torch.cuda.empty_cache()
-    torch.cuda.set_per_process_memory_fraction(1.0)
-    torch.backends.cuda.matmul.allow_tf32 = True
-    torch.backends.cudnn.allow_tf32 = True
-
     login(token=hf_token)

     pipe = pipeline_for(model, image=True)

-    input_img = Image.open(img_path).convert('RGB')
+    with open(img_path, 'rb') as img_file:
+        input_img = convert_from_bytes_and_crop(img_file.read(), 512, 512)

     seed = seed if seed else random.randint(0, 2 ** 64)
     prompt = prompt

@ -130,20 +184,8 @ def img2img(
     image.save(output)


-def upscale(
-    img_path: str = 'input.png',
-    output: str = 'output.png',
-    model_path: str = 'weights/RealESRGAN_x4plus.pth'
-):
-    assert torch.cuda.is_available()
-    torch.cuda.empty_cache()
-    torch.cuda.set_per_process_memory_fraction(1.0)
-    torch.backends.cuda.matmul.allow_tf32 = True
-    torch.backends.cudnn.allow_tf32 = True
-
-    input_img = Image.open(img_path).convert('RGB')
-
-    upscaler = RealESRGANer(
+def init_upscaler(model_path: str = 'weights/RealESRGAN_x4plus.pth'):
+    return RealESRGANer(
         scale=4,
         model_path=model_path,
         dni_weight=None,

@ -155,12 +197,42 @ def upscale(
             num_grow_ch=32,
             scale=4
         ),
-        half=True)
+        half=True
+    )
+
+
+def upscale(
+    img_path: str = 'input.png',
+    output: str = 'output.png',
+    model_path: str = 'weights/RealESRGAN_x4plus.pth'
+):
+    input_img = Image.open(img_path).convert('RGB')
+
+    upscaler = init_upscaler(model_path=model_path)

     up_img, _ = upscaler.enhance(
         convert_from_image_to_cv2(input_img), outscale=4)

     image = convert_from_cv2_to_image(up_img)
     image.save(output)
+
+
+async def download_upscaler():
+    print('downloading upscaler...')
+    weights_path = Path('weights')
+    weights_path.mkdir(exist_ok=True)
+    upscaler_url = 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth'
+    save_path = weights_path / 'RealESRGAN_x4plus.pth'
+    response = await asks.get(upscaler_url)
+    with open(save_path, 'wb') as f:
+        f.write(response.content)
+    print('done')
+
+
+def download_all_models(hf_token: str, hf_home: str):
+    assert torch.cuda.is_available()
+
+    trio.run(download_upscaler)
+
+    login(token=hf_token)
+    for model in MODELS:
+        print(f'DOWNLOADING {model.upper()}')
+        pipeline_for(model, cache_dir=hf_home)
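
A rough usage sketch of the reworked helpers above, assuming `MODELS` is keyed by the full HF repo name (for example 'prompthero/openjourney') and carries the 'mem' and 'short' fields that the new `pipeline_for` reads; the token value is a placeholder:

# Sketch only: one-shot generation through txt2img(), then a reusable pipeline.
from skynet.utils import pipeline_for, txt2img

txt2img(
    hf_token='hf_xxx',                  # placeholder token
    model='prompthero/openjourney',     # must be a key in MODELS
    prompt='a red old tractor in a sunny wheat field',
    output='tractor.png',
    width=512, height=512,
    steps=28,
    seed=420
)

# build the pipeline once and reuse it across prompts; the over_mem fallbacks
# and torch.compile selection happen inside pipeline_for()
pipe = pipeline_for('prompthero/openjourney', mem_fraction=0.9)
image = pipe('a cyberpunk city at dusk', num_inference_steps=28).images[0]
image.save('city.png')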

View File

@ -1,129 +1,24 @@
 #!/usr/bin/python

-import os
-import json
-import time
-import random
-import string
-import logging
-
-from functools import partial
-from pathlib import Path
-
-import trio
 import pytest
-import psycopg2
-import trio_asyncio
-
-from docker.types import Mount, DeviceRequest
-from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT

-from skynet.constants import *
-from skynet.brain import run_skynet
+from skynet.db import open_new_database
+from skynet.ipfs import AsyncIPFSHTTP
+from skynet.ipfs.docker import open_ipfs_node
+from skynet.nodeos import open_nodeos


 @pytest.fixture(scope='session')
-def postgres_db(dockerctl):
-    rpassword = ''.join(
-        random.choice(string.ascii_lowercase)
-        for i in range(12))
-    password = ''.join(
-        random.choice(string.ascii_lowercase)
-        for i in range(12))
-
-    with dockerctl.run(
-        'postgres',
-        name='skynet-test-postgres',
-        ports={'5432/tcp': None},
-        environment={
-            'POSTGRES_PASSWORD': rpassword
-        }
-    ) as containers:
-        container = containers[0]
-        # ip = container.attrs['NetworkSettings']['IPAddress']
-        port = container.ports['5432/tcp'][0]['HostPort']
-        host = f'localhost:{port}'
-
-        for log in container.logs(stream=True):
-            log = log.decode().rstrip()
-            logging.info(log)
-            if ('database system is ready to accept connections' in log or
-                'database system is shut down' in log):
-                break
-
-        # why print the system is ready to accept connections when its not
-        # postgres? wtf
-        time.sleep(1)
-        logging.info('creating skynet db...')
-
-        conn = psycopg2.connect(
-            user='postgres',
-            password=rpassword,
-            host='localhost',
-            port=port
-        )
-        logging.info('connected...')
-        conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)
-        with conn.cursor() as cursor:
-            cursor.execute(
-                f'CREATE USER {DB_USER} WITH PASSWORD \'{password}\'')
-            cursor.execute(
-                f'CREATE DATABASE {DB_NAME}')
-            cursor.execute(
-                f'GRANT ALL PRIVILEGES ON DATABASE {DB_NAME} TO {DB_USER}')
-
-        conn.close()
-        logging.info('done.')
-        yield container, password, host
-
-
-@pytest.fixture
-async def skynet_running(postgres_db):
-    db_container, db_pass, db_host = postgres_db
-
-    async with run_skynet(
-        db_pass=db_pass,
-        db_host=db_host
-    ):
-        yield
-
-
-@pytest.fixture
-def dgpu_workers(request, dockerctl, skynet_running):
-    devices = [DeviceRequest(capabilities=[['gpu']])]
-    mounts = [Mount(
-        '/skynet', str(Path().resolve()), type='bind')]
-
-    num_containers, initial_algos = request.param
-
-    cmds = []
-    for i in range(num_containers):
-        cmd = f'''
-        pip install -e . && \
-        skynet run dgpu \
-        --algos=\'{json.dumps(initial_algos)}\' \
-        --uid=dgpu-{i}
-        '''
-        cmds.append(['bash', '-c', cmd])
-
-    logging.info(f'launching: \n{cmd}')
-
-    with dockerctl.run(
-        DOCKER_RUNTIME_CUDA,
-        name='skynet-test-runtime-cuda',
-        commands=cmds,
-        environment={
-            'HF_TOKEN': os.environ['HF_TOKEN'],
-            'HF_HOME': '/skynet/hf_home'
-        },
-        network='host',
-        mounts=mounts,
-        device_requests=devices,
-        num=num_containers
-    ) as containers:
-        yield containers
-
-        #for i, container in enumerate(containers):
-        #    logging.info(f'container {i} logs:')
-        #    logging.info(container.logs().decode())
+def ipfs_client():
+    with open_ipfs_node(teardown=True):
+        yield AsyncIPFSHTTP('http://127.0.0.1:5001')
+
+
+@pytest.fixture(scope='session')
+def postgres_db():
+    with open_new_database() as db_params:
+        yield db_params
+
+
+@pytest.fixture(scope='session')
+def cleos():
+    with open_nodeos() as cli:
+        yield cli

View File

@ -0,0 +1,416 @@
{
"____comment": "This file was generated with eosio-abigen. DO NOT EDIT ",
"version": "eosio::abi/1.2",
"types": [],
"structs": [
{
"name": "account",
"base": "",
"fields": [
{
"name": "user",
"type": "name"
},
{
"name": "balance",
"type": "asset"
},
{
"name": "nonce",
"type": "uint64"
}
]
},
{
"name": "card",
"base": "",
"fields": [
{
"name": "id",
"type": "uint64"
},
{
"name": "owner",
"type": "name"
},
{
"name": "card_name",
"type": "string"
},
{
"name": "version",
"type": "string"
},
{
"name": "total_memory",
"type": "uint64"
},
{
"name": "mp_count",
"type": "uint32"
},
{
"name": "extra",
"type": "string"
}
]
},
{
"name": "clean",
"base": "",
"fields": []
},
{
"name": "config",
"base": "",
"fields": [
{
"name": "token_contract",
"type": "name"
},
{
"name": "token_symbol",
"type": "symbol"
}
]
},
{
"name": "dequeue",
"base": "",
"fields": [
{
"name": "user",
"type": "name"
},
{
"name": "request_id",
"type": "uint64"
}
]
},
{
"name": "enqueue",
"base": "",
"fields": [
{
"name": "user",
"type": "name"
},
{
"name": "request_body",
"type": "string"
},
{
"name": "binary_data",
"type": "string"
},
{
"name": "reward",
"type": "asset"
},
{
"name": "min_verification",
"type": "uint32"
}
]
},
{
"name": "global_configuration_struct",
"base": "",
"fields": [
{
"name": "token_contract",
"type": "name"
},
{
"name": "token_symbol",
"type": "symbol"
}
]
},
{
"name": "submit",
"base": "",
"fields": [
{
"name": "worker",
"type": "name"
},
{
"name": "request_id",
"type": "uint64"
},
{
"name": "request_hash",
"type": "checksum256"
},
{
"name": "result_hash",
"type": "checksum256"
},
{
"name": "ipfs_hash",
"type": "string"
}
]
},
{
"name": "withdraw",
"base": "",
"fields": [
{
"name": "user",
"type": "name"
},
{
"name": "quantity",
"type": "asset"
}
]
},
{
"name": "work_request_struct",
"base": "",
"fields": [
{
"name": "id",
"type": "uint64"
},
{
"name": "user",
"type": "name"
},
{
"name": "reward",
"type": "asset"
},
{
"name": "min_verification",
"type": "uint32"
},
{
"name": "nonce",
"type": "uint64"
},
{
"name": "body",
"type": "string"
},
{
"name": "binary_data",
"type": "string"
},
{
"name": "timestamp",
"type": "time_point_sec"
}
]
},
{
"name": "work_result_struct",
"base": "",
"fields": [
{
"name": "id",
"type": "uint64"
},
{
"name": "request_id",
"type": "uint64"
},
{
"name": "user",
"type": "name"
},
{
"name": "worker",
"type": "name"
},
{
"name": "result_hash",
"type": "checksum256"
},
{
"name": "ipfs_hash",
"type": "string"
},
{
"name": "submited",
"type": "time_point_sec"
}
]
},
{
"name": "workbegin",
"base": "",
"fields": [
{
"name": "worker",
"type": "name"
},
{
"name": "request_id",
"type": "uint64"
},
{
"name": "max_workers",
"type": "uint32"
}
]
},
{
"name": "workcancel",
"base": "",
"fields": [
{
"name": "worker",
"type": "name"
},
{
"name": "request_id",
"type": "uint64"
},
{
"name": "reason",
"type": "string"
}
]
},
{
"name": "worker",
"base": "",
"fields": [
{
"name": "account",
"type": "name"
},
{
"name": "joined",
"type": "time_point_sec"
},
{
"name": "left",
"type": "time_point_sec"
},
{
"name": "url",
"type": "string"
}
]
},
{
"name": "worker_status_struct",
"base": "",
"fields": [
{
"name": "worker",
"type": "name"
},
{
"name": "status",
"type": "string"
},
{
"name": "started",
"type": "time_point_sec"
}
]
}
],
"actions": [
{
"name": "clean",
"type": "clean",
"ricardian_contract": ""
},
{
"name": "config",
"type": "config",
"ricardian_contract": ""
},
{
"name": "dequeue",
"type": "dequeue",
"ricardian_contract": ""
},
{
"name": "enqueue",
"type": "enqueue",
"ricardian_contract": ""
},
{
"name": "submit",
"type": "submit",
"ricardian_contract": ""
},
{
"name": "withdraw",
"type": "withdraw",
"ricardian_contract": ""
},
{
"name": "workbegin",
"type": "workbegin",
"ricardian_contract": ""
},
{
"name": "workcancel",
"type": "workcancel",
"ricardian_contract": ""
}
],
"tables": [
{
"name": "cards",
"type": "card",
"index_type": "i64",
"key_names": [],
"key_types": []
},
{
"name": "config",
"type": "global_configuration_struct",
"index_type": "i64",
"key_names": [],
"key_types": []
},
{
"name": "queue",
"type": "work_request_struct",
"index_type": "i64",
"key_names": [],
"key_types": []
},
{
"name": "results",
"type": "work_result_struct",
"index_type": "i64",
"key_names": [],
"key_types": []
},
{
"name": "status",
"type": "worker_status_struct",
"index_type": "i64",
"key_names": [],
"key_types": []
},
{
"name": "users",
"type": "account",
"index_type": "i64",
"key_names": [],
"key_types": []
},
{
"name": "workers",
"type": "worker",
"index_type": "i64",
"key_names": [],
"key_types": []
}
],
"ricardian_clauses": [],
"variants": [],
"action_results": []
}
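
For reference, the positional arguments pushed from Python map one-to-one onto the `enqueue` fields defined above (user, request_body, binary_data, reward, min_verification); a hedged sketch using the same py-leap `cleos` handle the tests below rely on:

# Sketch: enqueue a diffusion request; argument order follows the ABI struct.
import json

req_body = json.dumps({
    'method': 'diffuse',
    'params': {'prompt': 'skynet terminator dystopic', 'step': 28}
})

ec, out = cleos.push_action(
    'telos.gpu', 'enqueue',
    ['telegram', req_body, '', '20.0000 GPU', 1],  # user, body, binary, reward, min_verification
    'telegram@active'
)
assert ec == 0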

Binary file not shown.

View File

@ -0,0 +1,106 @@
#!/usr/bin/env python3
import time
import json
from hashlib import sha256
from functools import partial
import trio
import requests
from skynet.constants import DEFAULT_IPFS_REMOTE
from skynet.dgpu import open_dgpu_node
from leap.sugar import collect_stdout
def test_enqueue_work(cleos):
user = 'telegram'
req = json.dumps({
'method': 'diffuse',
'params': {
'algo': 'midj',
'prompt': 'skynet terminator dystopic',
'width': 512,
'height': 512,
'guidance': 10,
'step': 28,
'seed': 420,
'upscaler': 'x4'
}
})
binary = ''
ec, out = cleos.push_action(
'telos.gpu', 'enqueue', [user, req, binary, '20.0000 GPU', 1], f'{user}@active'
)
assert ec == 0
queue = cleos.get_table('telos.gpu', 'telos.gpu', 'queue')
assert len(queue) == 1
req_on_chain = queue[0]
assert req_on_chain['user'] == user
assert req_on_chain['body'] == req
assert req_on_chain['binary_data'] == binary
trio.run(
partial(
open_dgpu_node,
f'testworker1',
'active',
cleos,
DEFAULT_IPFS_REMOTE,
cleos.private_keys['testworker1'],
initial_algos=['midj']
)
)
queue = cleos.get_table('telos.gpu', 'telos.gpu', 'queue')
assert len(queue) == 0
def test_enqueue_dequeue(cleos):
user = 'telegram'
req = json.dumps({
'method': 'diffuse',
'params': {
'algo': 'midj',
'prompt': 'skynet terminator dystopic',
'width': 512,
'height': 512,
'guidance': 10,
'step': 28,
'seed': 420,
'upscaler': 'x4'
}
})
binary = ''
ec, out = cleos.push_action(
'telos.gpu', 'enqueue', [user, req, binary, '20.0000 GPU', 1], f'{user}@active'
)
assert ec == 0
request_id, _ = collect_stdout(out).split(':')
request_id = int(request_id)
queue = cleos.get_table('telos.gpu', 'telos.gpu', 'queue')
assert len(queue) == 1
ec, out = cleos.push_action(
'telos.gpu', 'dequeue', [user, request_id], f'{user}@active'
)
assert ec == 0
queue = cleos.get_table('telos.gpu', 'telos.gpu', 'queue')
assert len(queue) == 0

View File

@ -1,379 +0,0 @@
#!/usr/bin/python
import io
import time
import json
import zlib
import logging
from typing import Optional
from hashlib import sha256
from functools import partial
import trio
import pytest
import trio_asyncio
from PIL import Image
from google.protobuf.json_format import MessageToDict
from skynet.brain import SkynetDGPUComputeError
from skynet.constants import *
from skynet.frontend import open_skynet_rpc
async def wait_for_dgpus(rpc, amount: int, timeout: float = 30.0):
gpu_ready = False
start_time = time.time()
current_time = time.time()
while not gpu_ready and (current_time - start_time) < timeout:
res = await rpc('dgpu_workers')
if res.result['ok'] >= amount:
break
await trio.sleep(1)
current_time = time.time()
assert (current_time - start_time) < timeout
_images = set()
async def check_request_img(
i: int,
uid: str = '1',
width: int = 512,
height: int = 512,
expect_unique = True,
upscaler: Optional[str] = None
):
global _images
async with open_skynet_rpc(
uid,
security=True,
cert_name='whitelist/testing',
key_name='testing'
) as rpc_call:
res = await rpc_call(
'txt2img', {
'prompt': 'red old tractor in a sunny wheat field',
'step': 28,
'width': width, 'height': height,
'guidance': 7.5,
'seed': None,
'algo': list(ALGOS.keys())[i],
'upscaler': upscaler
})
if 'error' in res.result:
raise SkynetDGPUComputeError(MessageToDict(res.result))
if upscaler == 'x4':
width *= 4
height *= 4
img_raw = zlib.decompress(bytes.fromhex(res.result['img']))
img_sha = sha256(img_raw).hexdigest()
img = Image.frombytes(
'RGB', (width, height), img_raw)
if expect_unique and img_sha in _images:
raise ValueError('Duplicated image sha: {img_sha}')
_images.add(img_sha)
logging.info(f'img sha256: {img_sha} size: {len(img_raw)}')
assert len(img_raw) > 100000
return img
@pytest.mark.parametrize(
'dgpu_workers', [(1, ['midj'])], indirect=True)
async def test_dgpu_worker_compute_error(dgpu_workers):
'''Attempt to generate a huge image and check we get the right error,
then generate a smaller image to show gpu worker recovery
'''
async with open_skynet_rpc(
'test-ctx',
security=True,
cert_name='whitelist/testing',
key_name='testing'
) as test_rpc:
await wait_for_dgpus(test_rpc, 1)
with pytest.raises(SkynetDGPUComputeError) as e:
await check_request_img(0, width=4096, height=4096)
logging.info(e)
await check_request_img(0)
@pytest.mark.parametrize(
'dgpu_workers', [(1, ['midj', 'stable'])], indirect=True)
async def test_dgpu_workers(dgpu_workers):
'''Generate two images in a single dgpu worker using
two different models.
'''
async with open_skynet_rpc(
'test-ctx',
security=True,
cert_name='whitelist/testing',
key_name='testing'
) as test_rpc:
await wait_for_dgpus(test_rpc, 1)
await check_request_img(0)
await check_request_img(1)
@pytest.mark.parametrize(
'dgpu_workers', [(1, ['midj'])], indirect=True)
async def test_dgpu_worker_upscale(dgpu_workers):
'''Generate two images in a single dgpu worker using
two different models.
'''
async with open_skynet_rpc(
'test-ctx',
security=True,
cert_name='whitelist/testing',
key_name='testing'
) as test_rpc:
await wait_for_dgpus(test_rpc, 1)
logging.error('UPSCALE')
img = await check_request_img(0, upscaler='x4')
assert img.size == (2048, 2048)
@pytest.mark.parametrize(
'dgpu_workers', [(2, ['midj'])], indirect=True)
async def test_dgpu_workers_two(dgpu_workers):
'''Generate two images in two separate dgpu workers
'''
async with open_skynet_rpc(
'test-ctx',
security=True,
cert_name='whitelist/testing',
key_name='testing'
) as test_rpc:
await wait_for_dgpus(test_rpc, 2)
async with trio.open_nursery() as n:
n.start_soon(check_request_img, 0)
n.start_soon(check_request_img, 0)
@pytest.mark.parametrize(
'dgpu_workers', [(1, ['midj'])], indirect=True)
async def test_dgpu_worker_algo_swap(dgpu_workers):
'''Generate an image using a non default model
'''
async with open_skynet_rpc(
'test-ctx',
security=True,
cert_name='whitelist/testing',
key_name='testing'
) as test_rpc:
await wait_for_dgpus(test_rpc, 1)
await check_request_img(5)
@pytest.mark.parametrize(
'dgpu_workers', [(3, ['midj'])], indirect=True)
async def test_dgpu_rotation_next_worker(dgpu_workers):
'''Connect three dgpu workers, disconnect and check next_worker
rotation happens correctly
'''
async with open_skynet_rpc(
'test-ctx',
security=True,
cert_name='whitelist/testing',
key_name='testing'
) as test_rpc:
await wait_for_dgpus(test_rpc, 3)
res = await test_rpc('dgpu_next')
assert 'ok' in res.result
assert res.result['ok'] == 0
await check_request_img(0)
res = await test_rpc('dgpu_next')
assert 'ok' in res.result
assert res.result['ok'] == 1
await check_request_img(0)
res = await test_rpc('dgpu_next')
assert 'ok' in res.result
assert res.result['ok'] == 2
await check_request_img(0)
res = await test_rpc('dgpu_next')
assert 'ok' in res.result
assert res.result['ok'] == 0
@pytest.mark.parametrize(
'dgpu_workers', [(3, ['midj'])], indirect=True)
async def test_dgpu_rotation_next_worker_disconnect(dgpu_workers):
'''Connect three dgpu workers, disconnect the first one and check
next_worker rotation happens correctly
'''
async with open_skynet_rpc(
'test-ctx',
security=True,
cert_name='whitelist/testing',
key_name='testing'
) as test_rpc:
await wait_for_dgpus(test_rpc, 3)
await trio.sleep(3)
# stop worker who's turn is next
for _ in range(2):
ec, out = dgpu_workers[0].exec_run(['pkill', '-INT', '-f', 'skynet'])
assert ec == 0
dgpu_workers[0].wait()
res = await test_rpc('dgpu_workers')
assert 'ok' in res.result
assert res.result['ok'] == 2
async with trio.open_nursery() as n:
n.start_soon(check_request_img, 0)
n.start_soon(check_request_img, 0)
async def test_dgpu_no_ack_node_disconnect(skynet_running):
'''Mock a node that connects, gets a request but fails to
acknowledge it, then check skynet correctly drops the node
'''
async with open_skynet_rpc(
'test-ctx',
security=True,
cert_name='whitelist/testing',
key_name='testing'
) as rpc_call:
res = await rpc_call('dgpu_online')
assert 'ok' in res.result
await wait_for_dgpus(rpc_call, 1)
with pytest.raises(SkynetDGPUComputeError) as e:
await check_request_img(0)
assert 'dgpu failed to acknowledge request' in str(e)
res = await rpc_call('dgpu_workers')
assert 'ok' in res.result
assert res.result['ok'] == 0
@pytest.mark.parametrize(
'dgpu_workers', [(1, ['midj'])], indirect=True)
async def test_dgpu_timeout_while_processing(dgpu_workers):
'''Stop node while processing request to cause timeout and
then check skynet correctly drops the node.
'''
async with open_skynet_rpc(
'test-ctx',
security=True,
cert_name='whitelist/testing',
key_name='testing'
) as test_rpc:
await wait_for_dgpus(test_rpc, 1)
async def check_request_img_raises():
with pytest.raises(SkynetDGPUComputeError) as e:
await check_request_img(0)
assert 'timeout while processing request' in str(e)
async with trio.open_nursery() as n:
n.start_soon(check_request_img_raises)
await trio.sleep(1)
ec, out = dgpu_workers[0].exec_run(
['pkill', '-TERM', '-f', 'skynet'])
assert ec == 0
@pytest.mark.parametrize(
'dgpu_workers', [(1, ['midj'])], indirect=True)
async def test_dgpu_heartbeat(dgpu_workers):
'''
'''
async with open_skynet_rpc(
'test-ctx',
security=True,
cert_name='whitelist/testing',
key_name='testing'
) as test_rpc:
await wait_for_dgpus(test_rpc, 1)
await trio.sleep(120)
@pytest.mark.parametrize(
'dgpu_workers', [(1, ['midj'])], indirect=True)
async def test_dgpu_img2img(dgpu_workers):
async with open_skynet_rpc(
'1',
security=True,
cert_name='whitelist/testing',
key_name='testing'
) as rpc_call:
await wait_for_dgpus(rpc_call, 1)
res = await rpc_call(
'txt2img', {
'prompt': 'red old tractor in a sunny wheat field',
'step': 28,
'width': 512, 'height': 512,
'guidance': 7.5,
'seed': None,
'algo': list(ALGOS.keys())[0],
'upscaler': None
})
if 'error' in res.result:
raise SkynetDGPUComputeError(MessageToDict(res.result))
img_raw = res.result['img']
img = zlib.decompress(bytes.fromhex(img_raw))
logging.info(img[:10])
img = Image.open(io.BytesIO(img))
img.save('txt2img.png')
res = await rpc_call(
'img2img', {
'prompt': 'red sports car in a sunny wheat field',
'step': 28,
'img': img_raw,
'guidance': 12,
'seed': None,
'algo': list(ALGOS.keys())[0],
'upscaler': 'x4'
})
if 'error' in res.result:
raise SkynetDGPUComputeError(MessageToDict(res.result))
img_raw = res.result['img']
img = zlib.decompress(bytes.fromhex(img_raw))
logging.info(img[:10])
img = Image.open(io.BytesIO(img))
img.save('img2img.png')

View File

@ -0,0 +1,26 @@
#!/usr/bin/python
from pathlib import Path
async def test_connection(ipfs_client):
await ipfs_client.connect(
'/ip4/169.197.140.154/tcp/4001/p2p/12D3KooWKWogLFNEcNNMKnzU7Snrnuj84RZdMBg3sLiQSQc51oEv')
peers = await ipfs_client.peers()
assert '12D3KooWKWogLFNEcNNMKnzU7Snrnuj84RZdMBg3sLiQSQc51oEv' in [p['Peer'] for p in peers]
async def test_add_and_pin_file(ipfs_client):
test_file = Path('hello_world.txt')
with open(test_file, 'w+') as file:
file.write('Hello Skynet!')
file_info = await ipfs_client.add(test_file)
file_cid = file_info['Hash']
pin_resp = await ipfs_client.pin(file_cid)
assert file_cid in pin_resp
test_file.unlink()

View File

@ -1,70 +0,0 @@
#!/usr/bin/python
import logging
import trio
import pynng
import pytest
import trio_asyncio
from skynet.brain import run_skynet
from skynet.structs import *
from skynet.frontend import open_skynet_rpc
async def test_skynet(skynet_running):
...
async def test_skynet_attempt_insecure(skynet_running):
with pytest.raises(pynng.exceptions.NNGException) as e:
async with open_skynet_rpc('bad-actor'):
...
assert str(e.value) == 'Connection shutdown'
async def test_skynet_dgpu_connection_simple(skynet_running):
async with open_skynet_rpc(
'dgpu-0',
security=True,
cert_name='whitelist/testing',
key_name='testing'
) as rpc_call:
# check 0 nodes are connected
res = await rpc_call('dgpu_workers')
assert 'ok' in res.result
assert res.result['ok'] == 0
# check next worker is None
res = await rpc_call('dgpu_next')
assert 'ok' in res.result
assert res.result['ok'] == None
# connect 1 dgpu
res = await rpc_call('dgpu_online')
assert 'ok' in res.result
# check 1 node is connected
res = await rpc_call('dgpu_workers')
assert 'ok' in res.result
assert res.result['ok'] == 1
# check next worker is 0
res = await rpc_call('dgpu_next')
assert 'ok' in res.result
assert res.result['ok'] == 0
# disconnect 1 dgpu
res = await rpc_call('dgpu_offline')
assert 'ok' in res.result
# check 0 nodes are connected
res = await rpc_call('dgpu_workers')
assert 'ok' in res.result
assert res.result['ok'] == 0
# check next worker is None
res = await rpc_call('dgpu_next')
assert 'ok' in res.result
assert res.result['ok'] == None