ml-safety

Open-source projects categorized as ml-safety

Top 7 ml-safety Open-Source Projects

  • giskard

    🐢 Open-Source Evaluation & Testing for LLMs and ML models (a short usage sketch follows the list below)

  • Project mention: Show HN: Evaluate LLM-based RAG Applications with automated test set generation | news.ycombinator.com | 2024-04-11
  • natural-adv-examples

    A Harder ImageNet Test Set (CVPR 2021); an evaluation sketch follows the list below

  • langtest

    Deliver safe & effective language models

  • Project mention: LangTest: Deliver Safe & Effective Language Models | /r/Python | 2023-11-04
  • ethics

    Aligning AI With Shared Human Values (ICLR 2021)

  • ModelNet40-C

    Repo for "Benchmarking Robustness of 3D Point Cloud Recognition against Common Corruptions" https://arxiv.org/abs/2201.12296

  • awesome-ai-safety

    📚 A curated list of papers & technical articles on AI Quality & Safety

  • Project mention: Ask HN: Who is hiring? (October 2023) | news.ycombinator.com | 2023-10-02

    Giskard - Testing framework for ML models | Multiple roles | Full-time | France | https://giskard.ai/

    We are building the first collaborative & open-source Quality Assurance platform for all ML models - including Large Language Models.

    Founded in 2021 in Paris by ex-Dataiku engineers, we are an emerging player in the fast-growing market of AI Quality & Safety.

    Giskard helps Data Scientists & ML Engineering teams collaborate to evaluate, test & monitor AI models. We help organizations increase the efficiency of their AI development workflow, eliminate the risks of AI bias, and ensure robust, reliable & ethical AI models. Our open-source platform is used by dozens of ML teams across industries, at both enterprise companies & startups.

    In 2022, we raised our first round of 1.5 million euros, led by Elaia, with participation from Bessemer and notable angel investors including the CTO of Hugging Face. To learn more about this fundraising and how it will accelerate our growth, read the announcement: https://www.giskard.ai/knowledge/news-fundraising-2022

    In 2023, we received a strategic investment from the European Commission to build a SaaS platform to automate compliance with the upcoming EU AI regulation. You can read more here: https://www.giskard.ai/knowledge/1-000-github-stars-3meu-and...

    We are assembling a team of champions: Software Engineers, Machine Learning researchers, and Data Scientists to build our AI Quality platform and expand it to new types of AI models and industries. We have a culture of continuous learning & quality, and we help each other achieve high standards & goals!

    We aim to grow from 15 to 25 people in the next 12 months. We're hiring the following roles:

  • adversarial-reinforcement-learning

    Reading list for adversarial perspective and robustness in deep reinforcement learning.

  • Project mention: Safety in Deep Reinforcement Learning | /r/programming | 2023-12-06
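
As promised above, here is a minimal usage sketch for giskard. It assumes the giskard 2.x Python API (giskard.Model, giskard.Dataset, giskard.scan) and a toy scikit-learn classifier; treat it as an illustration under those assumptions, not the project's official quickstart.

    # Wrap a toy scikit-learn classifier and run giskard's automated scan,
    # which probes the model for robustness, performance, and bias issues.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    import giskard

    # Toy tabular data: predict loan approval from two numeric features.
    df = pd.DataFrame({
        "income":   [20, 35, 50, 65, 80, 95, 110, 125],
        "age":      [22, 30, 41, 35, 52, 47, 60, 58],
        "approved": ["no", "no", "no", "yes", "yes", "yes", "yes", "yes"],
    })
    features = ["income", "age"]
    clf = LogisticRegression().fit(df[features], df["approved"])

    # Wrap the model and data so the scanner can probe them.
    model = giskard.Model(
        model=lambda d: clf.predict_proba(d[features]),
        model_type="classification",
        classification_labels=list(clf.classes_),  # matches predict_proba column order
        feature_names=features,
    )
    dataset = giskard.Dataset(df=df, target="approved")

    # Run the automated scan and export the findings as an HTML report.
    results = giskard.scan(model, dataset)
    results.to_html("giskard_scan.html")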
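
A second hedged sketch, this time for natural-adv-examples: the usual ImageNet-A protocol loads the downloaded imagenet-a/ folder with torchvision's ImageFolder and scores a pretrained classifier after restricting its 1000-way logits to the 200 ImageNet-A classes. INDICES_IN_1K below is a placeholder for the class-index mapping that ships with the repo.

    # Top-1 accuracy of a pretrained ResNet-50 on ImageNet-A (sketch).
    import torch
    from torchvision import datasets, models, transforms

    # Placeholder: the 200 ImageNet class indices corresponding to the
    # ImageNet-A class folders in sorted order (provided by the repo).
    INDICES_IN_1K = [...]

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    dataset = datasets.ImageFolder("imagenet-a", transform=preprocess)
    loader = torch.utils.data.DataLoader(dataset, batch_size=64, num_workers=4)

    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()

    correct, total = 0, 0
    with torch.no_grad():
        for images, targets in loader:
            # Restrict logits to the 200 ImageNet-A classes before taking argmax.
            logits = model(images)[:, INDICES_IN_1K]
            correct += (logits.argmax(dim=1) == targets).sum().item()
            total += targets.numel()

    print(f"ImageNet-A top-1 accuracy: {correct / total:.1%}")
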
NOTE: The open source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months or since we started tracking (Dec 2020).

Index

What are some of the best open-source ml-safety projects? This list will help you:

# Project Stars
1 giskard 3,340
2 natural-adv-examples 576
3 langtest 459
4 ethics 207
5 ModelNet40-C 203
6 awesome-ai-safety 140
7 adversarial-reinforcement-learning 77
