| | langkit | gateway |
|---|---|---|
| Mentions | 5 | 7 |
| Stars | 743 | 4,830 |
| Growth | 7.1% | 12.8% |
| Activity | 8.5 | 9.8 |
| Latest commit | 3 days ago | 4 days ago |
| Language | Jupyter Notebook | TypeScript |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
langkit
- FLaNK Stack Weekly 22 January 2024
- LangKit: An open-source toolkit for monitoring LLMs
- Ask HN: How are you improving your use of LLMs in production?
Would love to hear feedback and thoughts on how people approach monitoring in production in real-world applications in general! It's an area that I think not enough people talk about when operating LLMs.
We spent a lot of time working with various companies on GenAI use cases before LLMs were a thing, and we captured what we learned in our library, LangKit. It's designed to be generic and pluggable into many different systems, including langchain: https://github.com/whylabs/langkit/. It goes beyond prompt engineering and aims to provide automated ways to monitor LLMs once deployed. Happy to answer any questions here!
- LangKit: An open-source toolkit for monitoring Large Language Models (LLMs)
- LangKit: An open-source text metrics toolkit for monitoring LLM
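The kind of pluggable prompt/response monitoring described above can be sketched with a few simple text metrics. This is an illustrative sketch, not LangKit's actual API: the function and metric names below are hypothetical.

```python
# Illustrative sketch of per-interaction text metrics for LLM monitoring,
# in the spirit of LangKit's out-of-the-box metrics.
# Names are hypothetical, not LangKit's API.
import re

def text_metrics(prompt: str, response: str) -> dict:
    """Compute a few simple monitoring metrics for one LLM interaction."""
    refusal_markers = ("i can't", "i cannot", "as an ai")
    return {
        "prompt.char_count": len(prompt),
        "prompt.word_count": len(prompt.split()),
        "response.char_count": len(response),
        "response.word_count": len(response.split()),
        # Crude proxy for refusal detection; real toolkits use richer models.
        "response.refusal_flag": any(m in response.lower() for m in refusal_markers),
        # A high ratio of non-alphanumeric characters can flag garbled output.
        "response.symbol_ratio": len(re.findall(r"[^\w\s]", response)) / max(len(response), 1),
    }

metrics = text_metrics("What is the capital of France?",
                       "The capital of France is Paris.")
```

In production these metrics would be logged per request and aggregated over time, so drift in prompt length or a spike in refusal rate surfaces as a monitorable signal rather than an anecdote.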
gateway
- Adding a streaming run function to the Assistants API
- FLaNK Stack Weekly 22 January 2024
- We open sourced our AI gateway written in TS
Portkey's AI Gateway is the interface between your app and hosted LLMs. It streamlines API requests to OpenAI, Anthropic, Mistral, Llama 2, Anyscale, Google Gemini, and more with a unified API.
- Show HN: A lightweight AI gateway to 100 models, in TS
- Open Source AI Gateway
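The unified-API idea is that the request shape stays OpenAI-compatible and only a provider header changes. A minimal sketch, assuming a gateway running on localhost:8787 (the default noted in Portkey's README) and its `x-portkey-provider` header; the `build_request` helper itself is illustrative, not the project's API:

```python
# Sketch of a unified request config for a local AI gateway.
# The x-portkey-provider header and localhost:8787 default follow Portkey's
# README; build_request is an illustrative helper, not the project's API.
import json

GATEWAY_URL = "http://localhost:8787/v1/chat/completions"

def build_request(provider: str, api_key: str, model: str, user_message: str) -> dict:
    """Describe an HTTP request whose body is OpenAI-compatible regardless
    of which upstream provider handles it."""
    return {
        "url": GATEWAY_URL,
        "headers": {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
            "x-portkey-provider": provider,  # the only per-provider difference
        },
        "body": json.dumps({
            "model": model,  # model id passed through to the chosen provider
            "messages": [{"role": "user", "content": user_message}],
        }),
    }

# Same URL and body shape; only the provider header differs.
openai_req = build_request("openai", "sk-example", "gpt-4", "Hello")
anthropic_req = build_request("anthropic", "sk-example", "claude-3", "Hello")
```

Because the gateway normalizes provider differences behind one endpoint, swapping providers becomes a one-line change in application code.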
What are some alternatives?
evals - Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
rebuff - LLM Prompt Injection Detector
promptfoo - Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
fill - Generative fill in 3D.
Mermaid - Edit, preview and share mermaid charts/diagrams. New implementation of the live editor.
llmflows - LLMFlows - Simple, Explicit and Transparent LLM Apps
fugue - A unified interface for distributed computing. Fugue executes SQL, Python, Pandas, and Polars code on Spark, Dask and Ray without any rewrites.
VulnerableApp-facade - Probably the most modern lightweight distributed farm of vulnerable applications, built to handle a wide range of vulnerabilities across tech stacks.
promptfoo - Test your prompts. Evaluate and compare LLM outputs, catch regressions, and improve prompt quality. [Moved to: https://github.com/promptfoo/promptfoo]
agenta - The all-in-one LLM developer platform: prompt management, evaluation, human feedback, and deployment all in one place.
renovate - Home of the Renovate CLI: Cross-platform Dependency Automation by Mend.io
OpenPipe - Turn expensive prompts into cheap fine-tuned models