ort vs tract

| | ort | tract |
|---|---|---|
| Mentions | 7 | 20 |
| Stars | 629 | 2,078 |
| Growth | 18.8% | 2.5% |
| Activity | 9.4 | 9.8 |
| Last commit | 6 days ago | 8 days ago |
| Language | Rust | Rust |
| License | Apache License 2.0 | Apache 2.0/MIT |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
ort
-
AI Inference now available in Supabase Edge Functions
To solve this, we built a native extension in Edge Runtime that enables using ONNX Runtime via its Rust interface. This was made possible thanks to an excellent Rust wrapper called ort.
-
AI Inference Now Available in Supabase Edge Functions
hey hn, supabase ceo here
As the post points out, this comes in 2 parts:
1. Embeddings models for RAG workloads (specifically pgvector). Available today.
2. Large Language Models for GenAI workloads. This will be progressively rolled out as we get our hands on more GPUs.
We've always had a focus on architectures that can run anywhere (especially important for local dev and self-hosting). In that light, we've found that the Ollama[0] tooling is really unbeatable. I heard one of our engineers describe it as "Docker for models", which I think is apt.
To support models that work best with GPUs, we're running them with Fly GPUs - pretty much this: https://fly.io/blog/scaling-llm-ollama (and then we stitch a native API around it). The plan is that you will be able to "BYO" model server and point the Edge Runtime towards it using simple env vars / config.
We've also made improvements for CPU models. We built a native extension in Edge Runtime that enables using ONNX runtime via the Rust interface. This was made possible thanks to an excellent Rust wrapper, Ort[1]. We have the models stored on disk, so there is no downloading, cold-boot, etc.
The thing I like most about this setup is that you can now use Edge Functions as background workers for your Postgres database, offloading heavy compute for generating embeddings. For example, you can trigger the worker when a user inserts some text, and the worker will asynchronously create the embedding and store it back in your database.
I'll be around if there are any questions.
[0] ollama.com
[1] Ort: https://github.com/pykeio/ort
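For readers who haven't used ort, here is a minimal sketch of the flow described above: load an ONNX model already stored on disk and run one inference. It targets the crate's 1.x-era API (the API changed substantially in the 2.0 line), and the model path, input shape, and zeroed token IDs are illustrative assumptions rather than Supabase's actual code:

```rust
// Assumes ort = "1.16" and ndarray = "0.15" in Cargo.toml.
use ndarray::{Array, CowArray};
use ort::{Environment, GraphOptimizationLevel, SessionBuilder, Value};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // One shared ONNX Runtime environment for the process.
    let environment = Environment::builder()
        .with_name("embeddings")
        .build()?
        .into_arc();

    // Load a model from local disk - no download or cold boot, as in the post.
    // "model.onnx" is a placeholder path.
    let session = SessionBuilder::new(&environment)?
        .with_optimization_level(GraphOptimizationLevel::Level3)?
        .with_model_from_file("model.onnx")?;

    // Placeholder input: a real embeddings model expects token IDs produced
    // by its tokenizer, not zeros.
    let input_ids = CowArray::from(Array::from_elem((1, 128), 0_i64).into_dyn());
    let outputs = session.run(vec![Value::from_array(session.allocator(), &input_ids)?])?;

    // Read the first output tensor (e.g. the embedding) back as f32.
    let embedding = outputs[0].try_extract::<f32>()?;
    println!("output shape: {:?}", embedding.view().shape());
    Ok(())
}
```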
-
Moving from Typescript and Langchain to Rust and Loops
In the quest for more efficient solutions, the ONNX runtime emerged as a beacon of performance. The decision to transition from TypeScript to Rust was an unconventional yet pivotal one. Driven by Rust's robust parallel processing via Rayon and its seamless integration with ONNX through the ort crate, Repo-Query went from sluggish processing to, I have to say it, blazing-fast performance.
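As a rough illustration of the Rayon side of that claim, parallelizing embedding generation over repository chunks can be as simple as swapping iter for par_iter. Here embed is a hypothetical stand-in for whatever ort session call does the real work:

```rust
// Assumes rayon = "1" in Cargo.toml.
use rayon::prelude::*;

// Hypothetical stand-in for an ONNX embedding call made through ort;
// it returns a dummy 384-dimensional vector for the sake of the sketch.
fn embed(_chunk: &str) -> Vec<f32> {
    vec![0.0; 384]
}

// Embed every chunk of a repository in parallel across all CPU cores.
fn embed_all(chunks: &[String]) -> Vec<Vec<f32>> {
    chunks.par_iter().map(|chunk| embed(chunk)).collect()
}

fn main() {
    let chunks = vec!["fn main() {}".to_string(), "struct Foo;".to_string()];
    let embeddings = embed_all(&chunks);
    println!("embedded {} chunks", embeddings.len());
}
```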
-
How to create YOLOv8-based object detection web service using Python, Julia, Node.js, JavaScript, Go and Rust
ort - Rust bindings for ONNX Runtime.
-
Do you use Rust in your professional career?
Our main model in Rust is a deep neural network, run via ONNX through the ort Rust bindings. The application area is process automation.
-
onnxruntime
You could try ort (https://github.com/pykeio/ort). It looks like it's under active development and supports GPU inference.
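On the GPU point: with ort's 1.x API you opt into execution providers when building the environment; providers are registered in order, with a CPU fallback. A hedged sketch (CUDA options left at defaults, placeholder model path):

```rust
// Assumes ort = "1.16" with its `cuda` cargo feature enabled.
use ort::{Environment, ExecutionProvider, SessionBuilder};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Request CUDA; ort falls back to CPU execution if it can't be registered.
    let environment = Environment::builder()
        .with_name("gpu-inference")
        .with_execution_providers([ExecutionProvider::CUDA(Default::default())])
        .build()?
        .into_arc();

    // Sessions created from this environment will use the GPU when available.
    let _session = SessionBuilder::new(&environment)?
        .with_model_from_file("model.onnx")?; // placeholder path
    Ok(())
}
```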
-
Deep Learning in Rust: Burn 0.4.0 released and plans for 2023
I wouldn't try to distribute your ML models with the typical frameworks, especially not with Python. Have you looked into ONNX? For example: https://github.com/pykeio/ort
tract
-
Are there any ML crates that would compile to WASM?
Tract is the most well-known ML crate in Rust, and I believe it can compile to WASM - https://github.com/sonos/tract/. Burn may also be useful - https://github.com/burn-rs/burn.
-
[Discussion] What crates would you like to see?
tract!!
-
tract vs burn - a user-suggested alternative
-
Machine Learning Inference Server in Rust?
We use tract for inference, integrated into our runtime and services.
-
onnxruntime
-
Rust Native ML Frameworks?
-
Neural networks - what crates to use?
Not for training, but for inference this looks nice: https://github.com/sonos/tract
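To give a feel for tract's inference API, here is a condensed version of the pattern from its README: load, type, optimize, and run an ONNX model. The model path and input shape are placeholders:

```rust
// Assumes tract-onnx = "0.21" in Cargo.toml.
use tract_onnx::prelude::*;

fn main() -> TractResult<()> {
    let model = tract_onnx::onnx()
        // Load the model from disk ("model.onnx" is a placeholder path).
        .model_for_path("model.onnx")?
        // Declare the input shape so tract can fully optimize the graph.
        .with_input_fact(0, f32::fact([1, 3, 224, 224]).into())?
        .into_optimized()?
        // Freeze the model into a runnable plan.
        .into_runnable()?;

    // Dummy image-shaped input; real code would fill in pixel data.
    let input: Tensor = tract_ndarray::Array4::<f32>::zeros((1, 3, 224, 224)).into();
    let result = model.run(tvec!(input.into()))?;

    // View the first output as f32.
    let output = result[0].to_array_view::<f32>()?;
    println!("output shape: {:?}", output.shape());
    Ok(())
}
```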
-
Brain.js: GPU Accelerated Neural Networks in JavaScript
There's also tract, from sonos[0]. 100% Rust.
I'm currently trying to use it to do speech recognition with a variant of the Conformer architecture (exported to ONNX).
The final goal is to do it in WASM client-side.
[0] https://github.com/sonos/tract
-
Serving ML at the Speed of Rust
As the article notes, there isn't any official Rust-native support for any common frameworks.
tract (https://github.com/sonos/tract) seems like the most mature for ONNX (for which TF/PT export is good nowadays), and recently it successfully implemented BERT.
-
Run deep neural network models from scratch
There are some DL libraries written in Rust: https://github.com/sonos/tract and https://docs.rs/neuronika/latest/neuronika/index.html. The second one could be used for training, I think.
What are some alternatives?
onnxruntime-rs - Rust wrapper for Microsoft's ONNX Runtime (version 1.8)
yolov8_onnx_go - YOLOv8 Inference using Go
MTuner - MTuner is a C/C++ memory profiler and memory leak finder for Windows, PlayStation 4 and 3, Android and other platforms
onnxruntime-php - Run ONNX models in PHP
wonnx - A WebGPU-accelerated ONNX inference run-time written 100% in Rust, ready for native and the web
yolov8_onnx_javascript - YOLOv8 inference using JavaScript
ncurses-rs - A low-level ncurses wrapper for Rust
langchainjs - 🦜🔗 Build context-aware reasoning applications 🦜🔗
linfa - A Rust machine learning framework.
yolov8_onnx_julia - YOLOv8 inference using Julia
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.