| | mistral-src | llama-mistral |
|---|---|---|
| Mentions | 9 | 5 |
| Stars | 8,732 | 374 |
| Growth | 4.1% | - |
| Activity | 7.3 | 8.4 |
| Latest commit | about 2 months ago | 5 months ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mistral-src
- Mistral 7B vs. Mixtral 8x7B
Mistral AI, a French startup, has released two impressive large language models (LLMs): Mistral 7B and Mixtral 8x7B. These models push the boundaries of performance and introduce architectural innovations aimed at optimizing inference speed and computational efficiency.
- How to have your own ChatGPT on your machine (and make it talk to itself)
However, some models are publicly available. That's the case for Mistral, a fast and efficient French model which seems to outperform GPT-4 on some tasks. And it is under the Apache 2.0 license.
- How to Serve LLM Completions in Production
I recommend starting either with llama2 or Mistral. You need to download the pretrained weights and convert them into GGUF format before they can be used with llama.cpp.
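As a hedged illustration of the last step: once the weights are converted (llama.cpp ships a converter script for this; its name has varied across versions), the resulting GGUF file can be served from Python via the llama-cpp-python bindings. The model path below is a hypothetical converter output, not a file the post names.

```python
from llama_cpp import Llama

# Load a converted GGUF file; the filename is a hypothetical example of
# what the llama.cpp converter/quantizer produces for Mistral 7B.
llm = Llama(model_path="./mistral-7b-v0.1.Q4_K_M.gguf", n_ctx=4096)

# Request a completion; llama-cpp-python returns an OpenAI-style dict.
out = llm("Q: What is the GGUF format used for? A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```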
- Stuff we figured out about AI in 2023
> Instead, it turns out a few hundred lines of Python is genuinely enough to train a basic version!
Actually, it's not just a basic version. Llama 1/2's model.py is 500 lines: https://github.com/facebookresearch/llama/blob/main/llama/mo...
Mistral (which is rumored to have forked llama) is 369 lines: https://github.com/mistralai/mistral-src/blob/main/mistral/m...
And both of these are SOTA open-source models.
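One reason those files stay so short: a decoder-only LLM is mostly one block repeated N times, plus tokenizer and weight-loading glue. Below is a toy PyTorch sketch of such a block (pre-norm attention plus a SwiGLU feed-forward, as in the Llama/Mistral family). It is not the actual model.py and omits rotary embeddings, the KV cache, and grouped-query/sliding-window attention that the real files implement.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    """RMS normalization, as used in the Llama/Mistral family."""
    def __init__(self, dim: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        return self.weight * x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)

class DecoderBlock(nn.Module):
    """Toy pre-norm decoder block: causal self-attention + SwiGLU MLP."""
    def __init__(self, dim: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.attn_norm = RMSNorm(dim)
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim, bias=False)
        self.ffn_norm = RMSNorm(dim)
        hidden = 4 * dim
        self.w1 = nn.Linear(dim, hidden, bias=False)  # gate projection
        self.w3 = nn.Linear(dim, hidden, bias=False)  # up projection
        self.w2 = nn.Linear(hidden, dim, bias=False)  # down projection

    def forward(self, x):
        b, t, d = x.shape
        # Attention sublayer with residual connection.
        q, k, v = self.qkv(self.attn_norm(x)).chunk(3, dim=-1)
        q, k, v = (z.view(b, t, self.n_heads, -1).transpose(1, 2) for z in (q, k, v))
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        x = x + self.proj(attn.transpose(1, 2).reshape(b, t, d))
        # SwiGLU feed-forward sublayer with residual connection.
        h = self.ffn_norm(x)
        return x + self.w2(F.silu(self.w1(h)) * self.w3(h))

x = torch.randn(1, 8, 64)
print(DecoderBlock(64, 4)(x).shape)  # torch.Size([1, 8, 64])
```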
- How Open is Generative AI? Part 2
MistralAI, a French startup, developed a 7.3-billion-parameter LLM named Mistral for various applications. Although the company is committed to open-sourcing its technology under Apache 2.0, the details of Mistral's training dataset remain undisclosed. The Mistral Instruct model was fine-tuned on publicly available instruction datasets from the Hugging Face repository, though specifics about their licenses and potential constraints are not given. Recently, MistralAI released Mixtral 8x7B, a model based on the sparse mixture-of-experts (SMoE) architecture, consisting of several specialized experts (likely eight, as suggested by its name) that are activated as needed; a sketch of that routing idea follows below.
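As a rough sketch of what "activated as needed" means, assuming the commonly reported top-2-of-8 routing (names and sizes below are illustrative, not Mixtral's actual code): a learned gate scores each token against every expert, only the top-k experts run on that token, and their outputs are mixed by the renormalized gate weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Toy sparse mixture-of-experts layer: each token is routed to its
    top-k experts, and their outputs are combined using the renormalized
    gate weights. Mixtral is reported to use 8 experts with top-2 routing."""
    def __init__(self, dim: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )

    def forward(self, x):                            # x: (tokens, dim)
        scores = self.gate(x)                        # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)         # renormalize over chosen experts
        out = torch.zeros_like(x)
        # Naive routing loop for clarity; real implementations batch per expert.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

x = torch.randn(5, 32)
print(SparseMoE(32)(x).shape)  # torch.Size([5, 32])
```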
- Mistral website was just updated
- Mistral AI β open-source models
- Mistral 8x7B 32k model [magnet]
- Ask HN: Why the LLaMA code base is so short
I was getting into LLMs and picked up some projects. I tried to dive into the code to see what the secret sauce is.
But the code is so short that there is nothing to really read.
https://github.com/facebookresearch/llama
I then proceeded to check https://github.com/mistralai/mistral-src and surprisingly it's the same.
What exactly are those codebases? It feels like they just download the models.
llama-mistral
- Inference code for Mistral and Mixtral hacked up
- French AI startup Mistral secures €2B valuation
No. Without the inference code, the best we can have are guesses at its implementation, so the benchmark figures we can get could be quite wrong. It does seem better than Llama2-70B in my tests, which rely on the work done by Dmytro Dzhulgakov[0] and DiscoResearch[1].
But the point of releasing on BitTorrent is to spark effervescence in hobbyist research and early attempts at MoE quantization, which are already ongoing[2]. They are benefiting from the community.
[0]: https://github.com/dzhulgakov/llama-mistral
[1]: https://huggingface.co/DiscoResearch/mixtral-7b-8expert
[2]: https://github.com/TimDettmers/bitsandbytes/tree/sparse_moe
- Code to run Mistral - mixtral-8x7b-32kseqlen
- New Mistral models just dropped (magnet links)
Someone made this. https://github.com/dzhulgakov/llama-mistral
- Mistral 8x7B 32k model [magnet]
If anyone can help with running this, it would be appreciated. Resources so far:
- https://github.com/dzhulgakov/llama-mistral
What are some alternatives?
ReAct - [ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models
llama.cpp - LLM inference in C/C++
lida - Automatic Generation of Visualizations and Infographics using Large Language Models
megablocks-public
ragas - Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
vllm - A high-throughput and memory-efficient inference and serving engine for LLMs
llama - Inference code for Llama models
text-generation-webui-colab - A colab gradio web UI for running Large Language Models
chameleon-llm - Codes for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models".
autogen - A programming framework for agentic AI. Discord: https://aka.ms/autogen-dc. Roadmap: https://aka.ms/autogen-roadmap