3) Hallucination is probably the biggest problem we solve for. For hallucination evals, we typically see our users combine groundedness (does the context support the LLM response) with context relevance (is the retrieved context relevant to the query). There's also a bunch more covering the evaluations you mentioned (moderation models, sentiment, usefulness, etc.), and it's pretty easy to add custom evals.
Also - my hot take is that gpt-3.5 is good enough for evals (and sometimes better than gpt-4) if you give the LLM enough instructions on how to do the eval, as in the sketch below.
website: https://www.trulens.org/
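For concreteness, here is a minimal sketch of those two evals as LLM-as-judge calls using the plain OpenAI SDK (TruLens packages the same idea as reusable feedback functions). The prompts, 0-10 score scale, and model choice are all illustrative assumptions, not TruLens's actual implementation:

```python
# Minimal LLM-as-judge sketch for the two RAG evals discussed above:
# groundedness (does the context support the response) and context
# relevance (is the retrieved context relevant to the query).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GROUNDEDNESS_PROMPT = """You are a strict evaluator. Given a CONTEXT and a
RESPONSE, rate from 0 to 10 how fully every claim in the RESPONSE is
supported by the CONTEXT. Unsupported claims must lower the score.
Reply with only the integer score.

CONTEXT:
{context}

RESPONSE:
{response}"""

CONTEXT_RELEVANCE_PROMPT = """You are a strict evaluator. Given a QUERY and a
retrieved CONTEXT, rate from 0 to 10 how relevant the CONTEXT is to
answering the QUERY. Reply with only the integer score.

QUERY:
{query}

CONTEXT:
{context}"""


def judge(prompt: str) -> int:
    # Detailed instructions in the prompt are what make a smaller model
    # like gpt-3.5 workable as a grader (per the comment above).
    out = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return int(out.choices[0].message.content.strip())


def eval_rag_turn(query: str, context: str, response: str) -> dict:
    return {
        "groundedness": judge(
            GROUNDEDNESS_PROMPT.format(context=context, response=response)
        ),
        "context_relevance": judge(
            CONTEXT_RELEVANCE_PROMPT.format(query=query, context=context)
        ),
    }
```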
We struggled with this ourselves while building LLM-based products and then open-sourced our observability/monitoring tool [1]. Many use it to track RAG and agents in production, run custom evals on the production traces (focused on hallucination), and track how metrics differ across releases or customers. Feel free to DM if there is something specific you are looking to solve, happy to help.
[1] https://github.com/langfuse/langfuse
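A minimal sketch of what that kind of tracking can look like with the Langfuse Python SDK. The method names assume the v2-style low-level client, and all inputs, outputs, and score values are placeholders, so treat this as illustrative rather than canonical:

```python
# Trace a RAG call and attach a custom eval score in Langfuse
# (v2-style low-level client assumed; check the docs for the current API).
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY from env

trace = langfuse.trace(name="rag-query", user_id="customer-123")

# Record retrieval and generation as steps on the trace.
trace.span(name="retrieval", input={"query": "..."}, output={"chunks": ["..."]})
trace.generation(
    name="answer",
    model="gpt-3.5-turbo",
    input=[{"role": "user", "content": "..."}],
    output="...",
)

# Attach a custom eval result (e.g. the hallucination judge above) so
# scores can be compared across releases or customers in the UI.
trace.score(name="groundedness", value=0.8)

langfuse.flush()  # events are sent asynchronously; flush before exit
```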
Here's the note I have on that: “For chatbot interfaces, the emerging approach is to have another agent simulate the user (as opposed to a more classic approach based on token-prediction probabilities on chat transcripts, which I think you're referencing), then still use a model for grading. Only place I've seen this so far: https://github.com/Forethought-Technologies/AutoChain/blob/m... ” - AI Startup Founder
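A rough sketch of that simulated-user setup: one model plays the user, the chatbot under test responds, and a separate call grades the transcript. The persona, turn count, and rubric here are made-up assumptions for illustration, not AutoChain's actual implementation:

```python
# Simulated-user eval sketch: user-simulator model <-> chatbot under
# test, followed by a model-graded review of the transcript.
from openai import OpenAI

client = OpenAI()

USER_SIM_SYSTEM = (
    "You are simulating a customer trying to get a refund. "
    "Stay in character and reply with a single short message."
)


def chatbot(messages: list[dict]) -> str:
    # Stand-in for the chatbot under test.
    out = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    return out.choices[0].message.content


def simulate_conversation(turns: int = 3) -> list[tuple[str, str]]:
    bot_messages = [{"role": "system", "content": "You are a support agent."}]
    sim_messages = [{"role": "system", "content": USER_SIM_SYSTEM}]
    transcript = []
    for _ in range(turns):
        # From the simulator's perspective, its own lines are
        # "assistant" turns and the bot's replies are "user" turns.
        user_msg = client.chat.completions.create(
            model="gpt-3.5-turbo", messages=sim_messages
        ).choices[0].message.content
        sim_messages.append({"role": "assistant", "content": user_msg})
        bot_messages.append({"role": "user", "content": user_msg})
        transcript.append(("user", user_msg))

        bot_msg = chatbot(bot_messages)
        bot_messages.append({"role": "assistant", "content": bot_msg})
        sim_messages.append({"role": "user", "content": bot_msg})
        transcript.append(("bot", bot_msg))
    return transcript


def grade(transcript: list[tuple[str, str]]) -> str:
    text = "\n".join(f"{who}: {msg}" for who, msg in transcript)
    prompt = (
        "Grade the assistant in this support transcript from 1-5 on "
        "helpfulness and explain briefly:\n\n" + text
    )
    out = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return out.choices[0].message.content


print(grade(simulate_conversation()))
```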