| | Deepspeed-Windows | ROCm-docker |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 20 | 394 |
| Growth | - | 0.5% |
| Activity | 7.6 | 5.4 |
| Latest commit | 3 months ago | 13 days ago |
| Language | C++ | Shell |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Deepspeed-Windows
-
AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source
I just went through this over the weekend. If you're running on Windows and want to use DeepSpeed, you still have to use CUDA 12.1, because DeepSpeed 13.1 is the latest version that works with 12.1. There's no DeepSpeed build for Windows that works with CUDA 12.3.
I tried to get it working this weekend, but it was a huge PITA, so I switched to putting everything into WSL2: Arch in there, with PyTorch etc. in containers so I could flip versions easily.
I'm still working on that part; halfway into it my WSL2 completely broke (the 9P networking stopped working) and I had to reinstall Windows.
https://github.com/S95Sedan/Deepspeed-Windows
ROCm-docker
-
AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source
https://rocm.docs.amd.com/projects/install-on-linux/en/lates... links to ROCm/ROCm-docker (https://github.com/ROCm/ROCm-docker), which is the source of the docker.io/rocm/rocm-terminal image (https://hub.docker.com/r/rocm/rocm-terminal):
```shell
docker run -it --device=/dev/kfd --device=/dev/dri --group-add video rocm/rocm-terminal
```
-
Stable Diffusion PR optimizes VRAM, generate 576x1280 images with 6 GB VRAM
Not sure about the 6600, but there is a guide for Linux at least:
https://m.youtube.com/watch?v=d_CgaHyA_n4&feature=emb_logo
And this may be relevant as well (possibly); I kept the link open:
https://github.com/RadeonOpenCompute/ROCm-docker/issues/38
-
It's working perfectly under Linux
As for the Docker image, I suppose you could build the image (https://hub.docker.com/r/rocm/pytorch) yourself from the sources (https://github.com/RadeonOpenCompute/ROCm-docker#building-images), which seems to be quite a bit of work. Better yet, you could just use an older tag of the upstream image, e.g. rocm4.1.1_ubuntu18.04_py3.6_pytorch instead of rocm4.2_ubuntu18.04_py3.6_caffe2 or latest. Just make sure your container's ROCm version matches your host ROCm version.
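The version-matching advice above can be sketched as a quick check. This is illustrative only: `rocm_version` is a hypothetical helper, and the host version is hardcoded here rather than read from the host install (e.g. `/opt/rocm/.info/version`).

```python
import re

def rocm_version(image_tag: str) -> tuple:
    """Extract the ROCm version from an image tag such as
    'rocm4.1.1_ubuntu18.04_py3.6_pytorch' (hypothetical helper)."""
    m = re.match(r"rocm(\d+(?:\.\d+)*)", image_tag)
    if not m:
        raise ValueError(f"no ROCm version in tag: {image_tag}")
    return tuple(int(part) for part in m.group(1).split("."))

# Assumed host ROCm version; on a real system you would read it
# from the host installation instead of hardcoding it.
host = (4, 1, 1)

tag = "rocm4.1.1_ubuntu18.04_py3.6_pytorch"
assert rocm_version(tag) == host  # container matches host, safe to use
```

The point is simply that the leading `rocmX.Y.Z` part of the tag is the container's ROCm version, and it should equal what the host has installed.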
What are some alternatives?
ncnn - ncnn is a high-performance neural network inference framework optimized for the mobile platform
awesome-kubernetes - A curated list for awesome kubernetes sources :ship::tada:
ROCm - ROCm Website [Moved to: https://github.com/ROCm/ROCm.github.io]
AiDungeon2-Docker-ROCm - Runs an AIDungeon2 fork in Docker on AMD ROCm hardware.
Sunshine - Self-hosted game stream host for Moonlight.
ZLUDA - CUDA on AMD GPUs
hipDNN - A thin wrapper around miOpen and cuDNN
stable-diffusion - Go to lstein/stable-diffusion for all the best stuff and a stable release. This repository is my testing ground and it's very likely that I've done something that will break it.
docker-elk - The Elastic stack (ELK) powered by Docker and Compose.
Dokku - A docker-powered PaaS that helps you build and manage the lifecycle of applications
Docker-OSX - Run macOS VM in a Docker! Run near native OSX-KVM in Docker! X11 Forwarding! CI/CD for OS X Security Research! Docker mac Containers.
docker-mailserver - Production-ready fullstack but simple mail server (SMTP, IMAP, LDAP, Antispam, Antivirus, etc.) running inside a container.