example-chroma-vector-embeddings
Example project for using Chroma to store and query vector embeddings
Chroma is an open-source embedding database designed to store and query vector embeddings efficiently, enhancing Large Language Models (LLMs) by providing relevant context to user inquiries. In this tutorial, I will explain how to use Chroma in persistent server mode using a custom embedding model within an example Python project. The companion code repository for this blog post is available on GitHub.
Create a new directory for the example project, then clone the Chroma repository into the root of that directory:
With the repository cloned, you can build and start Chroma using its Docker setup. This sets up Chroma and runs it as a server with uvicorn, making port 8000 accessible outside the net Docker network. It also mounts a persistent Docker volume for Chroma's database, found at chroma/chroma from your project's root.
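A sketch of this startup step, assuming the docker-compose.yml that ships with the Chroma repository (the exact service names and ports are defined there, not here):

```shell
# Run from the cloned chroma/ directory.
cd chroma

# Build the image and start the Chroma server in the background.
# The repo's compose file exposes the server on port 8000 and
# mounts a local directory as the persistent database volume.
docker compose up -d --build
```

Once the container is up, the Chroma HTTP API should be reachable at http://localhost:8000, and data written to it will survive container restarts via the mounted volume.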