| | dspy | MLflow |
|---|---|---|
| Mentions | 25 | 57 |
| Stars | 12,030 | 17,475 |
| Growth | 13.0% | 1.1% |
| Activity | 9.9 | 9.9 |
| Latest commit | 3 days ago | 7 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dspy
-
Computer Vision Meetup: Develop a Legal Search Application from Scratch using Milvus and DSPy!
Legal practitioners often need to find specific cases and clauses across thousands of dense documents. While traditional keyword-based search techniques are useful, they fail to fully capture semantic content of queries and case files. Vector search engines and large language models provide an intriguing alternative. In this talk, I will show you how to build a legal search application using the DSPy framework and the Milvus vector search engine.
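The gap between keyword matching and semantic retrieval that the talk describes can be illustrated with a toy nearest-neighbor search over embeddings. This is only a sketch: the vectors, document names, and `search` function below are made up, and in a real system the embeddings would come from an embedding model and the lookup would be served by a vector engine such as Milvus.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "embeddings": stand-ins for vectors produced by an embedding model.
docs = {
    "case_a": [0.9, 0.1, 0.0],  # roughly "breach of contract"
    "case_b": [0.1, 0.8, 0.1],  # roughly "patent infringement"
}

def search(query_vec, k=1):
    """Return the k documents whose vectors are closest to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]
```

A query vector near `case_a`'s region of the space retrieves `case_a` even if the query shares no literal keywords with the document, which is the property keyword search lacks.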
-
Pydantic Logfire
I’ve observed that Pydantic - which we’ve used for years in our API stack - has become very popular in LLM applications, for its type-adjacent features. It serves as a foundational technology for prompting libraries like [DSPy](https://github.com/stanfordnlp/dspy) which are abstracting “up the stack” of LLM apps. (some opinions there)
Operating AI apps surfaces a big challenge: debugging probabilistic code paths requires more than the usual introspective abilities, and in an environment where function calls can have very real monetary cost, we have to be able to see what is happening at runtime. See LangChain’s hosted solution (can’t recall the name), which lets an operator see prompts and responses “on the wire”. (It just occurred to me that LangChain and Pydantic have a lot in common here, in approach.)
Having a coupling between Pydantic - which is *just about* the data layer itself - and an observability tool seems very interesting to me, and having this come from the folks who built it does not seem unreasonable. WRT open source and monetization, I would be lying if I said I wasn’t a little worried - given the recent few months - but I am choosing to see this in a positive light, given this team’s “believability weight” (to overuse Dalio) and history of delivering solid and really useful tooling.
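The “type-adjacent” role Pydantic plays in LLM apps is essentially validating and coercing a model’s loosely typed output into a declared schema. A minimal sketch, assuming Pydantic is installed; the `Citation` model and its fields are invented for illustration:

```python
from pydantic import BaseModel, ValidationError

class Citation(BaseModel):
    """Schema we want an LLM's structured output to satisfy."""
    case_name: str
    year: int

# LLM output typically arrives as strings; Pydantic coerces and validates.
raw = {"case_name": "Smith v. Jones", "year": "1999"}
cite = Citation(**raw)  # year is coerced from "1999" to the int 1999

try:
    Citation(case_name="X", year="not a year")
except ValidationError:
    rejected = True  # malformed model output is caught at the data layer
```

This is the kind of boundary check that makes Pydantic a natural foundation for libraries like DSPy, and a natural hook for an observability tool to attach to.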
- Ask HN: Most efficient way to fine-tune an LLM in 2024?
-
Princeton group open sources "SWE-agent", with 12.3% fix rate for GitHub issues
DSPy is the best tool for optimizing prompts [0]: https://github.com/stanfordnlp/dspy
Think of it as a meta-prompt optimizer: it uses an LLM to optimize your prompts, which in turn optimize your LLM.
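The "search over prompts, keep what scores best" idea behind that comment can be shown with a toy loop. This is not DSPy's actual algorithm (DSPy's optimizers bootstrap demonstrations, propose instructions, and more); the function names and scoring stub below are invented:

```python
def optimize_prompt(candidates, dev_set, score):
    """Try each candidate prompt on a small dev set and keep the best.
    `score(prompt, example)` stands in for 'run the LLM and grade the answer'."""
    best, best_score = None, float("-inf")
    for prompt in candidates:
        avg = sum(score(prompt, ex) for ex in dev_set) / len(dev_set)
        if avg > best_score:
            best, best_score = prompt, avg
    return best, best_score

# Stub grader: reward prompts that ask for step-by-step reasoning.
dev = ["q1", "q2"]
scorer = lambda prompt, ex: 1.0 if "step" in prompt else 0.0
best, s = optimize_prompt(["Answer:", "Think step by step:"], dev, scorer)
```

In the real setting, the candidate prompts themselves are generated by an LLM and the grader runs the target model on held-out examples.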
-
Winner of the SF Mistral AI Hackathon: Automated Test Driven Prompting
Isn’t this just a very naive implementation of what DSPy does?
https://github.com/stanfordnlp/dspy
I don’t understand what is exceptional here.
-
Show HN: Fructose, LLM calls as strongly typed functions
Have you done any comparison with DSPy ? (https://github.com/stanfordnlp/dspy)
Feels very similar to DSPy, except you don’t have optimizations yet. But I like your API and the programming model you are enforcing through this.
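The "LLM calls as strongly typed functions" idea can be sketched with a decorator that reads a function's signature and coerces the model's text reply to the annotated return type. This is not Fructose's implementation; `typed_llm_call` and `fake_llm` are invented, and a real version would call a model API where the stub sits:

```python
import inspect

def typed_llm_call(fn):
    """Treat a function's signature as the contract for an LLM call:
    the docstring becomes the prompt, the return annotation the output type."""
    ret = inspect.signature(fn).return_annotation
    def wrapper(*args, **kwargs):
        reply = fake_llm(fn.__doc__, *args, **kwargs)  # stand-in for an API call
        return ret(reply)  # enforce the declared return type
    return wrapper

def fake_llm(prompt, *args, **kwargs):
    return "42"  # models return text; the wrapper makes it typed

@typed_llm_call
def answer_count(question: str) -> int:
    """Return how many items the question implies."""

result = answer_count("How many?")
```

The caller sees an ordinary typed Python function; the untyped text boundary is hidden inside the wrapper.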
-
AI Prompt Engineering Is Dead
I'm interested in hearing if anyone has used DSPy (https://github.com/stanfordnlp/dspy) just for prompt optimization for GPT-3.5 or GPT-4. Was it worth the effort and much better than manual prompt iteration? Was the optimized prompt some weird incantation? Any other insights?
-
Ask HN: Are you using a GPT to prompt-engineer another GPT?
You should check out x.com/lateinteraction's DSPy — which is like an optimizer for prompts — https://github.com/stanfordnlp/dspy
- SuperDuperDB - how to use it to talk to your documents locally using llama 7B or Mistral 7B?
- FLaNK Stack Weekly for 12 September 2023
MLflow
-
Observations on MLOps–A Fragmented Mosaic of Mismatched Expectations
How can this be? The current state of practice in AI/ML work requires adaptivity, which is uncommon in classical computational fields. There are myriad tools that capture the work across the many instances of the AI/ML lifecycle. The idea that any one tool could sufficiently capture the dynamic work is unrealistic. Take, for example, an experiment tracking tool like W&B or MLflow; some form of experiment tracking is necessary in typical model training lifecycles. Such a tool requires some notion of a dataset. However, a tool focusing on experiment tracking is orthogonal to the needs of analyzing model performance at the data sample level, which is critical to understanding the failure modes of models. The way one does this depends on the type of data and the AI/ML task at hand. In other words, MLOps is inherently an intricate mosaic, as the capabilities and best practices of AI/ML work evolve.
-
My Favorite DevTools to Build AI/ML Applications!
MLflow is an open-source platform for managing the end-to-end machine learning lifecycle. It includes features for experiment tracking, model versioning, and deployment, enabling developers to track and compare experiments, package models into reproducible runs, and manage model deployment across multiple environments.
-
Exploring Open-Source Alternatives to Landing AI for Robust MLOps
Platforms such as MLflow monitor the development stages of machine learning models. In parallel, Data Version Control (DVC) brings version control system-like functions to the realm of data sets and models.
-
cascade alternatives - clearml and MLflow
3 projects | 1 Nov 2023
-
ELI5: Difference between OpenLLM, LangChain, MLflow
MLFlow - http://mlflow.org
- Explain to me how websites like DALL-E, ChatGPT, thispersondoesntexist process the user data so quickly
- [D] What licensed software do you use for machine learning experimentation tracking?
-
Exploring MLOps Tools and Frameworks: Enhancing Machine Learning Operations
MLflow:
-
Options for configuration of python libraries - Stack Overflow
In search of a tool that needs comparable configuration, I looked into mlflow and found this: https://github.com/mlflow/mlflow/blob/master/mlflow/environment_variables.py There they define a class _EnvironmentVariable and create many objects from it, one for each variable they need. The get method of this class is in principle a decorated os.getenv. Maybe that is something I can take as orientation.
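The pattern the answer points at, one declared object per environment variable with a typed `get`, can be imitated in a few lines. The class name, variable name, and default below are made up (mlflow's real class is the private `_EnvironmentVariable`):

```python
import os

class EnvVar:
    """One object per configuration variable: a name, a type, and a default.
    `get` is essentially a typed os.getenv with a fallback."""
    def __init__(self, name, type_, default):
        self.name, self.type, self.default = name, type_, default

    def get(self):
        raw = os.getenv(self.name)
        return self.default if raw is None else self.type(raw)

# Declared once at module level; read wherever the library needs it.
MAX_RETRIES = EnvVar("MYLIB_MAX_RETRIES", int, 3)
```

Centralizing all variables in one module, as mlflow does, makes the library's full configuration surface discoverable in a single file.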
-
[D] Is there a tool to keep track of my ML experiments?
I have been using DVC and MLflow since the days when DVC had only data tracking and MLflow only model tracking. I can say both are awesome now, and maybe the only factor I would like to mention is that, IMO, MLflow is a bit harder to learn, while DVC is practically just git.
What are some alternatives?
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
clearml - ClearML - Auto-Magical CI/CD to streamline your AI workload. Experiment Management, Data Management, Pipeline, Orchestration, Scheduling & Serving in one MLOps/LLMOps solution
open-interpreter - A natural language interface for computers
Sacred - Sacred is a tool to help you configure, organize, log and reproduce experiments developed at IDSIA.
playground - Play with neural networks!
zenml - ZenML 🙏: Build portable, production-ready MLOps pipelines. https://zenml.io.
FastMJPG - FastMJPG is a command line tool for capturing, sending, receiving, rendering, piping, and recording MJPG video with extremely low latency. It is optimized for running on constrained hardware and battery powered devices.
guildai - Experiment tracking, ML developer tools
prompt-engine-py - A utility library for creating and maintaining prompts for Large Language Models
dvc - 🦉 ML Experiments and Data Management with Git
AgentOoba - An autonomous AI agent extension for Oobabooga's web ui
tensorflow - An Open Source Machine Learning Framework for Everyone