Top 10 C++ stable-diffusion Projects
-
LocalAI
🤖 The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many other model architectures, and can generate text, audio, video, and images, with voice-cloning capabilities.
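Because LocalAI mirrors the OpenAI REST API, switching an existing client usually means changing only the base URL. A minimal sketch of the request shape (the port, 8080, and the model alias here are assumptions about a particular deployment, not part of this list):

```python
# Sketch of the request an OpenAI-compatible client would send to a
# LocalAI server. The port (8080 is LocalAI's default) and the model
# alias are assumptions -- adjust them to your deployment.
import json

LOCALAI_URL = "http://localhost:8080/v1/chat/completions"  # same route as OpenAI

payload = {
    "model": "gpt-4",  # LocalAI maps this alias to a locally installed model
    "messages": [{"role": "user", "content": "Hello!"}],
}

# The serialized body is identical to what would be sent to
# api.openai.com, which is what "drop-in replacement" means in practice.
body = json.dumps(payload)
print(body)
```

The same body also works with the official OpenAI client libraries by pointing their base URL at the local server.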
-
FastDeploy
⚡️ An easy-to-use and fast deep learning model deployment toolkit for ☁️ cloud, 📱 mobile, and 📹 edge. Covers 20+ mainstream image, video, text, and audio scenarios and 150+ SOTA models, with end-to-end optimization and multi-platform, multi-framework support.
-
OnnxStream
Lightweight inference library for ONNX files, written in C++. It can run SDXL on a Raspberry Pi Zero 2, as well as Mistral 7B on desktops and servers.
-
cortex
Drop-in, local AI alternative to the OpenAI stack. Multi-engine (llama.cpp, TensorRT-LLM). Powers 👋 Jan (by janhq)
Project mention: LocalAI: Self-hosted OpenAI alternative reaches 2.14.0 | news.ycombinator.com | 2024-05-03
Project mention: I've open sourced my Flutter plugin to run on-device LLMs on any platform. TestFlight builds available now. | /r/FlutterDev | 2023-12-08
I did start with integrating SD with this repo: https://github.com/leejet/stable-diffusion.cpp
Project mention: Show HN: OnnxStream running TinyLlama and Mistral 7B, with CUDA support | news.ycombinator.com | 2024-01-14
Jan incorporates a lightweight, built-in inference server called Nitro. Nitro supports both llama.cpp and NVIDIA's TensorRT-LLM engines, which means many open LLMs in the GGUF format are supported. Jan's Model Hub is designed for easy installation of pre-configured models, but it also lets you install virtually any model from Hugging Face, or even your own.
Project mention: Stable Diffusion implemented by ncnn framework based on C++, supported txt2img and img2img! | /r/StableDiffusion | 2023-06-08
Project mention: I ported Stable Diffusion onto Xbox Series X and S. | /r/StableDiffusion | 2023-06-10
Here are the details: Running Unpaint on the Xbox Series consoles · axodox/unpaint Wiki (github.com)
C++ stable-diffusion related posts
-
Open-source project ZLUDA lets CUDA apps run on AMD GPUs
-
Show HN: OnnxStream running TinyLlama and Mistral 7B, with CUDA support
-
Running Stable Diffusion in 260MB of RAM
Index
What are some of the best open-source stable-diffusion projects in C++? This list will help you:
# | Project | Stars
---|---|---
1 | LocalAI | 20,346 |
2 | openvino | 5,996 |
3 | FastDeploy | 2,742 |
4 | stable-diffusion.cpp | 2,647 |
5 | OnnxStream | 1,754 |
6 | cortex | 1,635 |
7 | Stable-Diffusion-NCNN | 940 |
8 | unpaint | 260 |
9 | SDImageGenerator | 24 |
10 | diffusion-expert | 22 |