Top 12 C++ ONNX Projects
-
ncnn
ncnn is a high-performance neural network inference framework optimized for the mobile platform
-
FastDeploy
⚡️ An easy-to-use and fast deep learning model deployment toolkit for ☁️ cloud, 📱 mobile, and 📹 edge, covering 20+ mainstream image, video, text, and audio scenarios and 150+ SOTA models with end-to-end optimization and multi-platform, multi-framework support.
-
OnnxStream
Lightweight inference library for ONNX files, written in C++. It can run SDXL on a RPI Zero 2 but also Mistral 7B on desktops and servers.
-
deepC
Vendor-independent TinyML deep learning library, compiler, and inference framework for microcomputers and microcontrollers
-
vs-mlrt
Efficient CPU/GPU/Vulkan ML Runtimes for VapourSynth (with built-in support for waifu2x, DPIR, RealESRGANv2/v3, Real-CUGAN, RIFE, SCUNet and more!)
-
Onnx2Text
Converts an ONNX ML model protobuf from/to text, or tensor from/to text/CSV/raw data. (Windows command line tool)
Project mention: AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source | news.ycombinator.com | 2024-02-12
ncnn uses Vulkan for GPU acceleration; I've seen it used in a few projects to get AMD hardware support.
https://github.com/Tencent/ncnn
Project mention: New exponent functions that make SiLU and SoftMax 2x faster, at full acc | news.ycombinator.com | 2024-05-15
Project mention: Introducing Cellulose - an ONNX model visualizer with hardware runtime support annotations | /r/tensorflow | 2023-05-23[1] - We use onnx-tensorrt for this TensorRT compatibility checks.
Project mention: Show HN: OnnxStream running TinyLlama and Mistral 7B, with CUDA support | news.ycombinator.com | 2024-01-14
Project mention: Stable Diffusion implemented by ncnn framework based on C++, supported txt2img and img2img! | /r/StableDiffusion | 2023-06-08
If you happen to start with an ONNX model that you still want to optimize, then you can use the official ONNX optimizer tool https://github.com/onnx/optimizer.
Project mention: [D] Run Pytorch model inference on Microcontroller | /r/MachineLearning | 2023-11-14
DeepC, the open-source version of DeepSea. Very little activity; looks abandoned.
or whatever you want; you need to write the code yourself, though. https://github.com/AmusementClub/vs-mlrt
C++ ONNX related posts
-
New exponent functions that make SiLU and SoftMax 2x faster, at full acc
-
Show HN: OnnxStream running TinyLlama and Mistral 7B, with CUDA support
-
Oracle-samples/sd4j: Stable Diffusion pipeline in Java using ONNX Runtime
-
ONNX runtime: Cross-platform accelerated machine learning
-
Running Stable Diffusion in 260MB of RAM
-
www.saashub.com | 18 May 2024
Index
What are some of the best open-source ONNX projects in C++? This list will help you:
| # | Project | Stars |
|---|---------|------:|
| 1 | ncnn | 19,352 |
| 2 | onnxruntime | 12,894 |
| 3 | onnx-simplifier | 3,585 |
| 4 | onnx-tensorrt | 2,772 |
| 5 | FastDeploy | 2,742 |
| 6 | OnnxStream | 1,754 |
| 7 | hls4ml | 1,118 |
| 8 | Stable-Diffusion-NCNN | 943 |
| 9 | optimizer | 603 |
| 10 | deepC | 526 |
| 11 | vs-mlrt | 236 |
| 12 | Onnx2Text | 15 |