| | ort | supabase |
|---|---|---|
| Mentions | 7 | 772 |
| Stars | 629 | 67,176 |
| Growth | 18.8% | 3.9% |
| Activity | 9.4 | 10.0 |
| Latest Commit | 6 days ago | 3 days ago |
| Language | Rust | TypeScript |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ort
-
AI Inference now available in Supabase Edge Functions
To solve this, we built a native extension in Edge Runtime that enables using ONNX Runtime via the Rust interface. This was made possible thanks to an excellent Rust wrapper called Ort.
-
AI Inference Now Available in Supabase Edge Functions
hey hn, supabase ceo here
As the post points out, this comes in 2 parts:
1. Embeddings models for RAG workloads (specifically pgvector). Available today.
2. Large Language Models for GenAI workloads. This will be progressively rolled out as we get our hands on more GPUs.
We've always had a focus on architectures that can run anywhere (especially important for local dev and self-hosting). In that light, we've found that the Ollama[0] tooling is really unbeatable. I heard one of our engineers explain it like "docker for models" which I think is apt.
To support models that work best with GPUs, we're running them with Fly GPUs - pretty much this: https://fly.io/blog/scaling-llm-ollama (and then we stitch a native API around it). The plan is that you will be able to "BYO" model server and point the Edge Runtime towards it using simple env vars / config.
We've also made improvements for CPU models. We built a native extension in Edge Runtime that enables using ONNX runtime via the Rust interface. This was made possible thanks to an excellent Rust wrapper, Ort[1]. We have the models stored on disk, so there is no downloading, cold-boot, etc.
The thing I like most about this setup is that you can now use Edge Functions like background workers for your Postgres database, offloading heavy compute for generating embeddings. For example, you can trigger the worker when a user inserts some text, and the worker will asynchronously create the embedding and store it back in your database.
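The worker pattern described here can be sketched roughly as follows. Note that `embed` and `store` are hypothetical stand-ins for illustration, not Supabase APIs: a real Edge Function would call the ONNX-backed inference runtime and write to Postgres via a client.

```typescript
// Sketch of the background-worker pattern: a database webhook fires on
// insert, the worker computes an embedding, and writes it back.
// `embed` and `store` are illustrative stand-ins, not Supabase APIs.

type InsertPayload = { record: { id: number; content: string } };

// Toy stand-in for the embedding model (a real worker would run an
// ONNX model); returns a normalized fixed-size numeric vector.
function embed(text: string): number[] {
  const dim = 8;
  const v = new Array(dim).fill(0);
  for (let i = 0; i < text.length; i++) {
    v[i % dim] += text.charCodeAt(i);
  }
  const norm = Math.sqrt(v.reduce((s, x) => s + x * x, 0)) || 1;
  return v.map((x) => x / norm);
}

// The worker: parse the webhook payload, embed the text, persist it.
async function handleInsert(
  payload: InsertPayload,
  store: (id: number, embedding: number[]) => Promise<void>,
): Promise<void> {
  const { id, content } = payload.record;
  await store(id, embed(content));
}
```

In the real setup the trigger comes from Postgres (for example a database webhook on insert), and `store` would be an `UPDATE ... SET embedding = ...` against a pgvector column.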
I'll be around if there are any questions.
[0] ollama.com
[1] Ort: https://github.com/pykeio/ort
-
Moving from Typescript and Langchain to Rust and Loops
In the quest for more efficient solutions, the ONNX runtime emerged as a beacon of performance. The decision to transition from TypeScript to Rust was an unconventional yet pivotal one. Driven by Rust's robust parallel processing capabilities using Rayon and seamless integration with ONNX through the ort crate, Repo-Query unlocked a realm of unparalleled efficiency. The result? A transformation from sluggish processing to, I have to say it, blazing-fast performance.
-
How to create YOLOv8-based object detection web service using Python, Julia, Node.js, JavaScript, Go and Rust
ort - ONNX Runtime library.
-
Do you use Rust in your professional career?
Our main model in Rust is a deep neural network, using ONNX via the ort Rust bindings. We use it for some particular process-automation applications.
-
onnxruntime
You could try ort: https://github.com/pykeio/ort. It looks like it's in active development and supports GPU inference.
-
Deep Learning in Rust: Burn 0.4.0 released and plans for 2023
I wouldn't try to distribute your ML models with the typical frameworks, especially not with Python. Have you looked into ONNX? For example: https://github.com/pykeio/ort
supabase
-
Wasp x Supabase: Smokin’ Hot Full-Stack Combo 🌶️ 🔥
It was a great experience using Supabase's rock-solid PostgreSQL database for this app. The DX around that product is phenomenal: viewing and managing the DB data is a lifesaver when you don't want to craft your own admin panel from scratch.
-
How I migrated from Firebase to Supabase
I didn't really give much thought as to which backend I would use. I already had 2 projects in Supabase (BOXCUT & MineWork), but a few projects in Firebase too. At the time I was more concerned with actually building the product.
-
How to get free Postgres
Sign up for Supabase: Head over to Supabase and sign up. Create a new workspace and project with your preferred names.
-
Creating a Pokémon guessing game using Supabase, Drizzle, and Next.js in just 2 hours!
Setting up Supabase Create a new Supabase project, and get the connection string for the database from settings > database.
-
How To Make An Insanely Fast AI App (Supabase, LLAMA 3 and Groq)
Supabase (start for free)
-
Building a self-creating website with Supabase and AI
Built with Supabase, Astro, Unreal Speech, Stable Diffusion, Replicate, Metropolitan Museum of Art
-
How I built a Markdown Rendered Blog using Supabase and Chakra UI
Supabase will be used for storing article data in the database and the cover image of the article in storage. Chakra UI will be used to provide style to the elements. By using both, we can build the blog with ease.
-
I got #1 Product of the Day on Product Hunt without Spending a Dollar
For AutoRepurpose, I opted for Supabase as the backbone of the backend. It has reliably supported Penelope AI, which garnered over 15k users in 2022 without any issues.
-
AI Inference now available in Supabase Edge Functions
Semantic search demo
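At the core of a semantic search demo like this is ranking stored embeddings by similarity to a query embedding; in production that ranking is what a pgvector distance query does in SQL, but the idea can be sketched in plain TypeScript (all names here are illustrative):

```typescript
// Rank documents by cosine similarity to a query embedding.
// pgvector does this server-side; this shows the idea in miniature.

type Doc = { id: number; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Return the k documents most similar to the query embedding.
function topK(query: number[], docs: Doc[], k: number): Doc[] {
  return [...docs]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```

With embeddings stored in a pgvector column, the equivalent ranking is expressed as an `ORDER BY embedding <=> query` clause rather than application code.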
-
Creating an OG image using React and Netlify Edge Functions
1. Create a new Supabase project: Visit Supabase and create a new project.
What are some alternatives?
onnxruntime-rs - Rust wrapper for Microsoft's ONNX Runtime (version 1.8)
Appwrite - Your backend, minus the hassle.
yolov8_onnx_go - YOLOv8 Inference using Go
pocketbase - Open Source realtime backend in 1 file
onnxruntime-php - Run ONNX models in PHP
nhost - The Open Source Firebase Alternative with GraphQL.
yolov8_onnx_javascript - YOLOv8 inference using Javascript
neon - Neon: Serverless Postgres. We separated storage and compute to offer autoscaling, code-like database branching, and scale to zero.
langchainjs - 🦜🔗 Build context-aware reasoning applications 🦜🔗
next-auth - Authentication for the Web.
yolov8_onnx_julia - YOLOv8 inference using Julia
Hasura - Blazing fast, instant realtime GraphQL APIs on your DB with fine grained access control, also trigger webhooks on database events.