mistral-src vs ReAct

| | mistral-src | ReAct |
|---|---|---|
| Mentions | 9 | 1 |
| Stars | 8,732 | 1,597 |
| Growth | 4.1% | - |
| Activity | 7.3 | 4.8 |
| Last commit | about 2 months ago | 3 months ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mistral-src
- Mistral 7B vs. Mixtral 8x7B
Mistral AI, a French startup, has released two impressive large language models (LLMs): Mistral 7B and Mixtral 8x7B. These models push the boundaries of performance and introduce architectural innovations aimed at optimizing inference speed and computational efficiency.
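Mistral 7B's best-known efficiency trick is sliding-window attention, where each token attends only to a fixed window of recent tokens rather than the whole prefix. A minimal sketch of that mask (illustrative shapes only, not Mistral's actual code):

```python
import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    # Standard causal mask: position i may see positions j <= i ...
    i = torch.arange(seq_len).unsqueeze(1)
    j = torch.arange(seq_len).unsqueeze(0)
    causal = j <= i
    # ... restricted to the `window` most recent positions.
    recent = j > i - window
    return causal & recent  # True where attention is allowed

mask = sliding_window_mask(seq_len=8, window=3)
print(mask.int())  # banded lower-triangular pattern
```

Because each token looks back at most `window` positions, per-token attention cost stays constant as the sequence grows instead of scaling with its full length.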
- How to have your own ChatGPT on your machine (and make it talk to itself)
However, some models are publicly available. That is the case for Mistral, a fast and efficient French model which seems to outperform GPT-4 on some tasks. And it is under the Apache 2.0 license.
- How to Serve LLM Completions in Production
I recommend starting with either Llama 2 or Mistral. You need to download the pretrained weights and convert them into GGUF format before they can be used with llama.cpp.
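As a sketch of what serving looks like once a GGUF file exists, using the llama-cpp-python bindings (the model path and prompt here are placeholders, not part of the original post):

```python
from llama_cpp import Llama

# Load a GGUF model produced by llama.cpp's conversion/quantization tooling.
llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=4096)

# Request a completion; the result mirrors an OpenAI-style response dict.
out = llm("Q: What is the GGUF format used for? A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```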
- Stuff we figured out about AI in 2023
> Instead, it turns out a few hundred lines of Python is genuinely enough to train a basic version!
Actually, it's not just a basic version. Llama 1/2's model.py is 500 lines: https://github.com/facebookresearch/llama/blob/main/llama/mo...
Mistral (rumored to be a Llama fork) is 369 lines: https://github.com/mistralai/mistral-src/blob/main/mistral/m...
And both of these are SOTA open-source models.
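To see why those files can be so short: the whole model is essentially an embedding, a stack of near-identical blocks, and an output head. A simplified sketch of one such block (no causal mask, RoPE, KV cache, or grouped-query attention, and LayerNorm standing in for RMSNorm), not the repos' actual code:

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, dim: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)                                  # pre-norm attention
        x = x + self.attn(h, h, h, need_weights=False)[0]  # residual connection
        return x + self.mlp(self.norm2(x))                 # pre-norm feed-forward
```

Stack a few dozen of these and you have the bulk of the model definition; the remaining lines are tokenization, weight loading, and sampling.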
- How Open is Generative AI? Part 2
MistralAI, a French startup, developed Mistral, a 7.3-billion-parameter LLM for various applications. While the company is committed to open-sourcing its technology under Apache 2.0, the details of Mistral's training dataset remain undisclosed. The Mistral Instruct model was fine-tuned using publicly available instruction datasets from the Hugging Face repository, though specifics about the licenses and potential constraints are not detailed. Recently, MistralAI released Mixtral 8x7B, a model based on a sparse mixture-of-experts (SMoE) architecture, consisting of several specialized expert models (likely eight, as suggested by its name) that are activated as needed.
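A minimal sketch of the SMoE idea described above: a learned router scores the experts for each token, and only the top-k experts actually run. The sizes below (eight experts, top-2 routing) follow what is publicly reported for Mixtral, but the code is an illustration, not Mistral's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, dim: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        # Keep only the top-k experts per token; renormalize their weights.
        weights, idx = self.router(x).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                sel = idx[:, k] == e  # tokens whose k-th choice is expert e
                if sel.any():
                    out[sel] += weights[sel, k, None] * expert(x[sel])
        return out
```

This is why an 8x7B model can be cheap at inference time: all experts' weights are held in memory, but each token only pays the compute cost of its two selected experts.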
- Mistral website was just updated
- Mistral AI – open-source models
- Mistral 8x7B 32k model [magnet]
- Ask HN: Why the LLaMA code base is so short
I was getting into LLMs and picked up some projects. I tried to dive into the code to see what the secret sauce is.
But the code is so short that there is almost nothing to read.
https://github.com/facebookresearch/llama
I then proceeded to check https://github.com/mistralai/mistral-src and surprisingly it's the same.
What exactly are these codebases? It feels like you just download the models.
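One way to answer the question: these repos are mostly a model definition plus weight loading and a sampling loop like the sketch below (`model` and its interface are hypothetical stand-ins, not either repo's actual API). The weights, downloaded separately, are the real secret sauce.

```python
import torch

@torch.no_grad()
def generate(model, tokens: list[int], max_new: int) -> list[int]:
    # `model` maps a (1, seq_len) token tensor to (1, seq_len, vocab) logits.
    for _ in range(max_new):
        logits = model(torch.tensor([tokens]))
        next_id = int(logits[0, -1].argmax())  # greedy: pick the likeliest token
        tokens.append(next_id)
    return tokens
```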
ReAct
What are some alternatives?
lida - Automatic Generation of Visualizations and Infographics using Large Language Models
ragas - Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
EasyEdit - An Easy-to-use Knowledge Editing Framework for LLMs.
vllm - A high-throughput and memory-efficient inference and serving engine for LLMs
LLM-Training-Puzzles - What would you do with 1000 H100s...
llama - Inference code for Llama models
AutoCog - Automaton & Cognition
text-generation-webui-colab - A colab gradio web UI for running Large Language Models
llm-search - Querying local documents, powered by LLM
chameleon-llm - Codes for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models".
FastLoRAChat - Instruct-tune LLaMA on consumer hardware with shareGPT data