| | Stable-Diffusion | WizardLM |
|---|---|---|
| Mentions | 30 | 38 |
| Stars | 1,760 | 7,531 |
| Growth | - | - |
| Activity | 9.8 | 9.4 |
| Latest commit | 4 days ago | 8 months ago |
| Language | Jupyter Notebook | Python |
| License | GNU General Public License v3.0 only | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
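The activity score described above can be pictured as a percentile ranking of recency-weighted commit counts. The sketch below is purely illustrative — the site's actual weighting of recent vs. older commits is not published, and the half-life, function names, and sample data here are all assumptions:

```python
# Hypothetical sketch of a percentile-style activity score (0-10),
# where recent commits count more than older ones. The real metric's
# weighting scheme is not published; this is only an illustration.

def weighted_commits(commit_ages_weeks, half_life_weeks=26):
    # Exponential decay: a commit loses half its weight every half_life_weeks.
    return sum(0.5 ** (age / half_life_weeks) for age in commit_ages_weeks)

def activity_score(project_ages, all_projects_ages):
    # Percentile rank of this project's weighted commit count, scaled to 0-10.
    mine = weighted_commits(project_ages)
    scores = sorted(weighted_commits(p) for p in all_projects_ages)
    rank = sum(1 for s in scores if s <= mine)
    return round(10 * rank / len(scores), 1)

# Example: projects with mostly recent commits rank near the top.
tracked = [[1, 2, 3], [100, 120], [5, 40, 80], [200], [2, 2, 4, 6]]
print(activity_score([1, 2, 3], tracked))  # → 8.0
```

Under this reading, a 9.0+ score simply means the project's weighted commit count sits in the top decile of all tracked projects.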
Stable-Diffusion
- Scalable Load Balancing Having Cloud GPU Service Salad Tutorial With Whisper Transcriber Gradio APP
- FLaNK AI-April 22, 2024
- OneTrainer Fine Tuning vs Kohya SS DreamBooth & Huge Research of OneTrainer's Masked Training
Stay subscribed and turn on notifications so you don't miss new videos: https://www.youtube.com/SECourses
- Finding the Best Training Hyperparameters / Configuration Is Neither Cheap Nor Easy
You can use an A6000 GPU on MassedCompute with our template for only 31 cents per hour. Follow the instructions here (still WIP): https://github.com/FurkanGozukara/Stable-Diffusion/blob/main/Tutorials/OneTrainer-Master-SD-1_5-SDXL-Windows-Cloud-Tutorial.md
- Compared Effect Of Image Captioning For SDXL Fine-tuning / DreamBooth Training for a Single Person, 10.3 GB VRAM via OneTrainer
The tutorial will be on our channel: https://www.youtube.com/SECourses
- A New Gold Tutorial For RunPod & Linux Users: How To Use Storage Network Volume In RunPod & Latest Version Of Automatic1111
Patreon exclusive posts index
- SUPIR Full Tutorial + 1 Click 12GB VRAM Windows & RunPod / Linux Installer + Batch Upscale + Comparison With Magnific
- Beware When Buying M2 NVMe SSDs: Netac NV7000, Kioxia Exceria Plus G2, Kingston and Sandisk Compared
Writing speed & cache testing Python script used: https://github.com/FurkanGozukara/Stable-Diffusion/blob/main/CustomPythonScripts/gen_file.py
- Viral Paper Tested MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
- 56 Stable Diffusion And Related Generative AI Tutorials Organized List
Our 1,200+ star GitHub repo of Stable Diffusion and other tutorials: https://github.com/FurkanGozukara/Stable-Diffusion
WizardLM
- FLaNK AI-April 22, 2024
- Refact LLM: New 1.6B code model reaches 32% HumanEval and is SOTA for the size
This is interesting work, and a good contribution, but there is no need to mislead people.
[1] https://github.com/nlpxucan/WizardLM
- Continue with LocalAI: An alternative to GitHub's Copilot that runs everything locally
If you pair this with the latest WizardCoder models, which perform noticeably better than the standard Salesforce Codegen2 and Codegen2.5, you have a pretty solid alternative to GitHub Copilot that runs completely locally.
- WizardCoder context?
- The world's most-powerful AI model suddenly got 'lazier' and 'dumber.' A radical redesign of OpenAI's GPT-4 could be behind the decline in performance.
- Official WizardLM-13B-V1.1 Released! Train with Only 1K Data! Can Achieve 86.32% on AlpacaEval!
(We will update the demo links in our GitHub.)
- GPT-4 API general availability
In terms of speed, we're talking about 140 t/s for 7B models and 40 t/s for 33B models on a 3090/4090 now. [1] (1 token ≈ 0.75 words.) It's quite zippy. llama.cpp performs close on Nvidia GPUs now (but they don't have a handy chart), and you can get decent performance on 13B models on M1/M2 Macs.
You can take a look at a list of evals here: https://llm-tracker.info/books/evals/page/list-of-evals - for general usage, I think home-rolled evals like llm-jeopardy [2] and local-llm-comparison [3] by hobbyists are more useful than most of the benchmark rankings.
That being said, personally I mostly use GPT-4 for code assistance, so that's what I'm most interested in, and the latest code assistants are scoring quite well: https://github.com/abacaj/code-eval - a recent replit-3b fine-tune tops the HumanEval results for open models (as a point of reference, GPT-3.5 gets 60.4 on pass@1 and 68.9 on pass@10 [4]). I've only just started playing around with it since the replit model tooling is not as good as the llamas' (doc here: https://llm-tracker.info/books/howto-guides/page/replit-mode...).
I'm interested in potentially applying Reflexion or some of the other techniques that have been tried to further increase coding abilities. (InterCode in particular has caught my eye: https://intercode-benchmark.github.io/)
[1] https://github.com/turboderp/exllama#results-so-far
[2] https://github.com/aigoopy/llm-jeopardy
[3] https://github.com/Troyanovsky/Local-LLM-comparison/tree/mai...
[4] https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder
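The tokens-per-second figures in the comment above translate to rough text-output rates via the quoted 1 token ≈ 0.75 word rule of thumb; a quick sketch of the arithmetic:

```python
# Convert generation speed (tokens/sec) to an approximate words-per-minute
# rate, using the ~0.75 words-per-token rule of thumb for English text.
WORDS_PER_TOKEN = 0.75

def tokens_to_words_per_min(tokens_per_sec):
    return tokens_per_sec * WORDS_PER_TOKEN * 60

# Figures quoted for a 3090/4090 with exllama:
print(tokens_to_words_per_min(140))  # 7B model  → 6300.0 words/min
print(tokens_to_words_per_min(40))   # 33B model → 1800.0 words/min
```

Even the slower 33B rate is far above human reading speed, which is why the commenter calls it "quite zippy".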
- WizardLM-13B-V1.0-Uncensored
You talking about this? https://github.com/nlpxucan/WizardLM
- What 7B LLM to use
The smallest model that is close to competent at code is WizardCoder 15B: https://github.com/nlpxucan/WizardLM/
- 16-Jun-2023
WizardCoder: Empowering Code Large Language Models with Evol-Instruct (https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder)
What are some alternatives?
sd-dynamic-thresholding - Dynamic Thresholding (CFG Scale Fix) for Stable Diffusion (StableSwarmUI, ComfyUI, and Auto WebUI)
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
Fooocus - Focus on prompting and generating
llm-humaneval-benchmarks
multidiffusion-upscaler-for-automatic1111 - Tiled Diffusion and VAE optimize, licensed under CC BY-NC-SA 4.0
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
SUPIR - SUPIR aims at developing Practical Algorithms for Photo-Realistic Image Restoration In the Wild
airoboros - Customizable implementation of the self-instruct paper.
caption-upsampling - This repository implements the idea of "caption upsampling" from DALL-E 3 with Zephyr-7B and gathers results with SDXL.
promptfoo - Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
CushyStudio - 🛋 The AI and Generative Art platform for everyone
can-ai-code - Self-evaluating interview for AI coders