Stable-Diffusion vs text-generation-inference

| | Stable-Diffusion | text-generation-inference |
|---|---|---|
| Mentions | 30 | 29 |
| Stars | 1,760 | 7,938 |
| Growth | - | 6.9% |
| Activity | 9.8 | 9.6 |
| Latest commit | 4 days ago | 8 days ago |
| Language | Jupyter Notebook | Python |
| License | GNU General Public License v3.0 only | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Stable-Diffusion
- Scalable Load Balancing With the Cloud GPU Service Salad: Tutorial With Whisper Transcriber Gradio App
- FLaNK AI-April 22, 2024
- OneTrainer Fine Tuning vs Kohya SS DreamBooth & Huge Research of OneTrainer’s Masked Training
So stay subscribed and turn on notification bells so you don't miss it: https://www.youtube.com/SECourses
- Finding the Best Training Hyperparameters / Configuration Is Neither Cheap Nor Easy
You can use an A6000 GPU on MassedCompute with our template for only 31 cents per hour. Follow the instructions here (still WIP): https://github.com/FurkanGozukara/Stable-Diffusion/blob/main/Tutorials/OneTrainer-Master-SD-1_5-SDXL-Windows-Cloud-Tutorial.md
- Compared Effect Of Image Captioning For SDXL Fine-tuning / DreamBooth Training for a Single Person, 10.3 GB VRAM via OneTrainer
The tutorial will be on our channel: https://www.youtube.com/SECourses
- A New Gold Tutorial For RunPod & Linux Users: How To Use Storage Network Volume In RunPod & Latest Version Of Automatic1111
Patreon exclusive posts index
- SUPIR Full Tutorial + 1 Click 12GB VRAM Windows & RunPod / Linux Installer + Batch Upscale + Comparison With Magnific
- Beware When Buying M2 NVMe SSDs: Netac NV7000, Kioxia Exceria Plus G2, Kingston and Sandisk Compared
The write-speed & cache-testing Python script used ⤵️ https://github.com/FurkanGozukara/Stable-Diffusion/blob/main/CustomPythonScripts/gen_file.py
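This is not the linked gen_file.py, just a minimal sketch of the technique (chunk size, file size, and path are arbitrary choices): write incompressible data in fixed-size bursts and log per-burst throughput, so the speed drop when the drive's SLC cache fills up becomes visible.

```python
import os
import time

CHUNK_MB = 256           # size of each write burst
TOTAL_GB = 32            # enough data to exhaust a typical SLC cache
PATH = "testfile.bin"    # created on the drive under test

chunk = os.urandom(CHUNK_MB * 1024 * 1024)  # incompressible data defeats compression tricks

with open(PATH, "wb", buffering=0) as f:
    for i in range(TOTAL_GB * 1024 // CHUNK_MB):
        start = time.perf_counter()
        f.write(chunk)
        os.fsync(f.fileno())  # force the burst to the drive, not the OS page cache
        elapsed = time.perf_counter() - start
        print(f"chunk {i:4d}: {CHUNK_MB / elapsed:8.1f} MB/s")

os.remove(PATH)
```

A sustained plateau followed by a sharp drop in the printed MB/s marks the cache boundary the post compares across drives.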
- Viral Paper Tested MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
- 56 Stable Diffusion And Related Generative AI Tutorials Organized List
Our GitHub repo of Stable Diffusion and other tutorials, with 1,200+ stars ⤵️ https://github.com/FurkanGozukara/Stable-Diffusion
text-generation-inference
- FLaNK AI-April 22, 2024
- Zephyr 141B, a Mixtral 8x22B fine-tune, is now available in Hugging Chat
I wanted to write that the TGI inference engine is no longer open source, but they have reverted the license back to Apache 2.0 for the new version, TGI v2.0: https://github.com/huggingface/text-generation-inference/rel...
Good news!
- Hugging Face reverts the license back to Apache 2.0
- HuggingFace text-generation-inference is reverting to Apache 2.0 License
- FLaNK Stack 05 Feb 2024
- Is there any open source app to load a model and expose an API like OpenAI's?
- AI Code assistant for about 50-70 users
Setting up a server for multiple users is very different from setting up an LLM for yourself. A safe bet would be to just use TGI, which supports continuous batching and is very easy to run via Docker on your server. https://github.com/huggingface/text-generation-inference
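A minimal sketch of that setup, with the model ID, port, and volume path as placeholder choices rather than anything the post specifies: launch the TGI container, then point every user's frontend at the one endpoint and let continuous batching interleave their requests on the GPU.

```python
# Start the server first (model ID, port, and volume are example values):
#   docker run --gpus all --shm-size 1g -p 8080:80 -v $PWD/data:/data \
#     ghcr.io/huggingface/text-generation-inference:latest \
#     --model-id mistralai/Mistral-7B-Instruct-v0.2

from huggingface_hub import InferenceClient

# All 50-70 users hit the same endpoint; TGI's continuous batching
# interleaves their requests instead of queueing them one by one.
client = InferenceClient("http://localhost:8080")

reply = client.text_generation(
    "Write a Python function that reverses a string.",
    max_new_tokens=200,
)
print(reply)
```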
- LocalPilot: Open-source GitHub Copilot on your MacBook
Okay, I actually got a local copilot set up. You will need these four things.
1) CodeLlama 13B or another FIM model: https://huggingface.co/codellama/CodeLlama-13b-hf. You want "Fill in Middle" models because you're looking at context on both sides of your cursor (a sketch of the FIM prompt follows this list).
2) HuggingFace llm-ls: https://github.com/huggingface/llm-ls A large language model Language Server (is this making sense yet?).
3) HuggingFace inference framework: https://github.com/huggingface/text-generation-inference At least when I tested, you couldn't use something like llama.cpp or exllama with llm-ls, so you need to break out the heavy-duty badboy HuggingFace inference server. Just configure and run it, then configure and run llm-ls.
4) Okay, I mean you need an editor. I just tried nvim, and this was a few weeks ago, so there may be better support now. My experience was that it was full, honest-to-god copilot. The CodeLlama models are known to be quite good for their size. The FIM part is great. Boilerplate works so much easier with the surrounding context. I'd like to see more models released that can work this way.
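To make the FIM part concrete, here is a rough sketch of the kind of request llm-ls issues, assuming a TGI server already serving CodeLlama on localhost:8080; the <PRE>/<SUF>/<MID> markers are CodeLlama's documented infilling prompt format, and the code around the "cursor" is made up.

```python
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080")  # TGI serving CodeLlama

# Code on either side of the cursor (hypothetical editor state)
prefix = "def fibonacci(n):\n    "
suffix = "\n\nprint(fibonacci(10))"

# CodeLlama's infilling prompt: the model fills in the <MID> span
prompt = f"<PRE> {prefix} <SUF>{suffix} <MID>"

completion = client.text_generation(
    prompt,
    max_new_tokens=128,
    stop_sequences=["<EOT>"],  # CodeLlama emits <EOT> when the middle is complete
)
print(prefix + completion + suffix)
```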
- Mistral 7B Paper on ArXiv
A simple microservice would be https://github.com/huggingface/text-generation-inference.
Works flawlessly in Docker on my Windows machine, which is extremely shocking.
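For the microservice use case, the raw HTTP API is all there is to it; a minimal sketch against TGI's /generate endpoint, with port and prompt as placeholders:

```python
import requests

# TGI exposes a small JSON API; /generate returns the whole completion at once
resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "What is the capital of France?",
        "parameters": {"max_new_tokens": 32},
    },
)
resp.raise_for_status()
print(resp.json()["generated_text"])
```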
- best way to serve llama V2 (llama.cpp VS triton VS HF text generation inference)
I am wondering what is the best / most cost-efficient way to serve Llama V2:
- llama.cpp (is it production ready or just for playing around?)
- Triton Inference Server
- HF text generation inference
What are some alternatives?
sd-dynamic-thresholding - Dynamic Thresholding (CFG Scale Fix) for Stable Diffusion (StableSwarmUI, ComfyUI, and Auto WebUI)
llama-cpp-python - Python bindings for llama.cpp
Fooocus - Focus on prompting and generating
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
multidiffusion-upscaler-for-automatic1111 - Tiled Diffusion and VAE optimize, licensed under CC BY-NC-SA 4.0
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
SUPIR - SUPIR aims at developing Practical Algorithms for Photo-Realistic Image Restoration In the Wild
basaran - Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.
caption-upsampling - This repository implements the idea of "caption upsampling" from DALL-E 3 with Zephyr-7B and gathers results with SDXL.
FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.
CushyStudio - 🛋 The AI and Generative Art platform for everyone
vllm - A high-throughput and memory-efficient inference and serving engine for LLMs