mistral-src vs megablocks-public

| | mistral-src | megablocks-public |
|---|---|---|
| Mentions | 9 | 5 |
| Stars | 8,732 | 857 |
| Growth | 4.1% | 0.1% |
| Activity | 7.3 | 9.0 |
| Latest commit | about 2 months ago | 5 months ago |
| Language | Jupyter Notebook | |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mistral-src
-
Mistral 7B vs. Mixtral 8x7B
Mistral AI, a French startup, has released two impressive large language models (LLMs): Mistral 7B and Mixtral 8x7B. These models push the boundaries of performance and introduce architectural innovations aimed at optimizing inference speed and computational efficiency.
-
How to have your own ChatGPT on your machine (and make it talk to itself)
However, some models are publicly available. That's the case for Mistral, a fast and efficient French model which seems to outperform GPT-4 on some tasks. And it is under the Apache 2.0 license.
-
How to Serve LLM Completions in Production
I recommend starting with either Llama 2 or Mistral. You need to download the pretrained weights and convert them into GGUF format before they can be used with llama.cpp.
-
Stuff we figured out about AI in 2023
> Instead, it turns out a few hundred lines of Python is genuinely enough to train a basic version!
Actually, it's not just a basic version. Llama 1/2's model.py is 500 lines: https://github.com/facebookresearch/llama/blob/main/llama/mo...
Mistral (rumored to be a Llama fork) is 369 lines: https://github.com/mistralai/mistral-src/blob/main/mistral/m...
Both of these are SOTA open-source models.
-
How Open is Generative AI? Part 2
MistralAI, a French startup, developed a 7.3-billion-parameter LLM named Mistral for various applications. Although the company is committed to open-sourcing its technology under Apache 2.0, the training dataset details for Mistral remain undisclosed. The Mistral Instruct model was fine-tuned using publicly available instruction datasets from the Hugging Face repository, though specifics about the licenses and potential constraints are not detailed. Recently, MistralAI released Mixtral 8x7B, a model based on the sparse mixture-of-experts (SMoE) architecture, consisting of several specialized expert networks (likely eight, as suggested by its name) that are activated as needed.
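The sparse mixture-of-experts routing described above can be sketched in a few lines: a gating function scores all experts for a token, only the top-k are actually run, and their outputs are mixed by the renormalized gate weights. This is an illustrative toy, not Mixtral's or megablocks' actual implementation; the "experts" here are plain random linear maps, and all sizes and names are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_experts, top_k = 8, 8, 2

# Toy "experts": each is just a random linear map (illustrative only).
experts = [rng.standard_normal((dim, dim)) for _ in range(n_experts)]
gate_w = rng.standard_normal((dim, n_experts))

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def smoe_forward(x):
    """Route one token through its top-k experts and mix the results."""
    scores = x @ gate_w                # one gating score per expert
    top = np.argsort(scores)[-top_k:]  # indices of the k best-scoring experts
    weights = softmax(scores[top])     # renormalize over the chosen experts only
    # Only the selected experts are evaluated; the rest cost nothing.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = smoe_forward(rng.standard_normal(dim))
print(y.shape)  # (8,)
```

The point of the design is that per-token compute scales with k, not with the total number of experts, which is how a model with 8x7B parameters can run at roughly the cost of a much smaller dense model.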
- Mistral website was just updated
- Mistral AI - open-source models
- Mistral 8x7B 32k model [magnet]
-
Ask HN: Why the LLaMA code base is so short
I was getting into LLMs and picked up some projects. I tried to dive into the code to see what the secret sauce is.
But the code is so short that there is almost nothing to read.
https://github.com/facebookresearch/llama
I then proceeded to check https://github.com/mistralai/mistral-src and, surprisingly, it's the same.
What exactly are those codebases? It feels like they just download the models.
megablocks-public
-
Mistral releases 8x7 MoE model via torrent
Stark contrast with Google's "all demo no model" approach from earlier this week! Seems to be trained off Stanford's Megablocks: https://github.com/mistralai/megablocks-public
- Megablocks-Public
-
New Mistral models just dropped (magnet links)
Repo: https://github.com/mistralai/megablocks-public
-
Mistral 8x7B 32k model [magnet]
https://github.com/mistralai/megablocks-public
Oddly absent: an over-rehearsed professional release video talking about a revolution in AI.
If people are wondering why there is so much AI activity right around now, it's because the biggest deep learning conference (NeurIPS) is next week.
https://twitter.com/karpathy/status/1733181701361451130
What are some alternatives?
ReAct - [ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models
bliss - BLISS - a Benchmark for Language Induction from Small Sets
lida - Automatic Generation of Visualizations and Infographics using Large Language Models
llama-mistral - Inference code for Mistral and Mixtral hacked up into original Llama implementation
ragas - Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
slint - Slint is a declarative GUI toolkit to build native user interfaces for Rust, C++, or JavaScript apps.
vllm - A high-throughput and memory-efficient inference and serving engine for LLMs
llama - Inference code for Llama models
text-generation-webui-colab - A colab gradio web UI for running Large Language Models
chameleon-llm - Codes for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models".
autogen - A programming framework for agentic AI. Discord: https://aka.ms/autogen-dc. Roadmap: https://aka.ms/autogen-roadmap