Mistral website was just updated

This page summarizes the projects mentioned and recommended in the original post on /r/LocalLLaMA

  • mistral-src

Reference implementation of the Mistral AI 7B v0.1 model.

  • vllm

    A high-throughput and memory-efficient inference and serving engine for LLMs

  • Official support in vLLM: https://github.com/vllm-project/vllm/pull/2011/files

  • llama.cpp

    LLM inference in C/C++

  • We will have to wait until llama.cpp adds support for Mixtral: https://github.com/ggerganov/llama.cpp/issues/4381

NOTE: The number of mentions on this list reflects mentions in common posts plus user-suggested alternatives; a higher number therefore indicates a more popular project.


Related posts

  • AI leaderboards are no longer useful. It's time to switch to Pareto curves

    1 project | news.ycombinator.com | 30 Apr 2024
  • Mistral 7B vs. Mixtral 8x7B

    1 project | dev.to | 26 Mar 2024
  • How to have your own ChatGPT on your machine (and make him discussed with himself)

    1 project | dev.to | 24 Jan 2024
  • VLLM Sacrifices Accuracy for Speed

    1 project | news.ycombinator.com | 23 Jan 2024
  • How to Serve LLM Completions in Production

    1 project | dev.to | 18 Jan 2024