| | mistral-src | text-generation-webui-colab |
|---|---|---|
| Mentions | 9 | 4 |
| Stars | 8,732 | 2,037 |
| Growth | 4.1% | - |
| Last commit | about 2 months ago | 5 months ago |
| Activity | 7.3 | 8.7 |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | Apache License 2.0 | The Unlicense |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mistral-src
-
Mistral 7B vs. Mixtral 8x7B
Mistral AI, a French startup, has released two impressive large language models (LLMs): Mistral 7B and Mixtral 8x7B. These models push the boundaries of performance and introduce architectural innovations aimed at optimizing inference speed and computational efficiency.
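One of the efficiency tricks Mistral 7B is known for is sliding-window attention, where each token attends only to a fixed window of recent tokens instead of the whole prefix. A minimal illustrative sketch of such a mask (not the repo's actual implementation; window and sequence sizes here are toy values):

```python
def sliding_window_mask(n: int, window: int) -> list[list[int]]:
    """Causal attention mask where token i may attend only to the
    `window` most recent tokens (itself included). Cuts attention
    cost from O(n^2) toward O(n * window) on long sequences."""
    return [[1 if 0 <= i - j < window else 0 for j in range(n)]
            for i in range(n)]

# Token 4 with window=2 sees only tokens 3 and 4, never the future.
mask = sliding_window_mask(5, 2)
```

The real model combines this with a rolling KV cache so memory stays bounded by the window size rather than the sequence length.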
-
How to have your own ChatGPT on your machine (and make it talk to itself)
However, some models are publicly available. That is the case for Mistral, a fast and efficient French model that seems to outperform GPT-4 on some tasks. And it is under the Apache 2.0 license 😊.
-
How to Serve LLM Completions in Production
I recommend starting with either Llama 2 or Mistral. You need to download the pretrained weights and convert them into GGUF format before they can be used with llama.cpp.
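The conversion itself is done with the scripts bundled in the llama.cpp repository; one property of the resulting GGUF container worth knowing is that every file starts with the four-byte magic `GGUF`, which makes a cheap post-conversion sanity check possible (an illustrative sketch, not part of llama.cpp):

```python
GGUF_MAGIC = b"GGUF"  # first four bytes of every GGUF file

def looks_like_gguf(header: bytes) -> bool:
    """Quick sanity check on a converted file before handing it to
    llama.cpp: a valid GGUF file begins with the magic b'GGUF',
    followed by a little-endian format version."""
    return header[:4] == GGUF_MAGIC

# Usage: read the first few bytes of the converted file and check them.
# with open("model.gguf", "rb") as f:
#     assert looks_like_gguf(f.read(8))
```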
-
Stuff we figured out about AI in 2023
> Instead, it turns out a few hundred lines of Python is genuinely enough to train a basic version!
Actually, it's not just a basic version. Llama 1/2's model.py is 500 lines: https://github.com/facebookresearch/llama/blob/main/llama/mo...
Mistral (is rumored to have) forked Llama, and its model file is 369 lines: https://github.com/mistralai/mistral-src/blob/main/mistral/m...
Both of these are SOTA open-source models.
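Part of why those model files stay so short is that the core computation, scaled dot-product attention, only takes a handful of lines; the rest is mostly stacking layers and loading weights. A pure-Python sketch of one attention head (illustrative, not taken from either repo):

```python
import math

def attention(q, k, v):
    """Scaled dot-product attention for a single head.
    q, k, v are lists of vectors (lists of floats). Scores are
    scaled by sqrt(d), softmaxed per query row, then used to take
    a weighted average of the value vectors."""
    d = len(q[0])
    out = []
    for qr in q:
        scores = [sum(a * b for a, b in zip(qr, kr)) / math.sqrt(d) for kr in k]
        m = max(scores)                      # subtract max for stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * vr[j] for w, vr in zip(weights, v))
                    for j in range(len(v[0]))])
    return out
```

Real implementations do the same thing batched over heads with tensor ops, which is why 369 lines suffice for a SOTA model.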
-
How Open is Generative AI? Part 2
MistralAI, a French startup, developed a 7.3-billion-parameter LLM named Mistral for various applications. Although the company is committed to open-sourcing its technology under Apache 2.0, the details of Mistral's training dataset remain undisclosed. The Mistral Instruct model was fine-tuned using publicly available instruction datasets from the Hugging Face repository, though specifics about the licenses and potential constraints are not detailed. Recently, MistralAI released Mixtral 8x7B, a model based on the sparse mixture-of-experts (SMoE) architecture, consisting of several specialized expert networks (likely eight, as suggested by its name) activated as needed.
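The "activated as needed" part is the key to SMoE efficiency: a small router scores all experts, but only the top-k (top-2 in Mixtral's case) actually run for a given token, and their outputs are combined with renormalized router weights. A toy sketch of that routing step (illustrative only; real experts are feed-forward networks, not scalar functions):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def smoe_forward(x, experts, router_scores, k=2):
    """Sparse mixture-of-experts step: pick the top-k experts by
    router score, run only those, and mix their outputs weighted by
    softmax over the selected scores. Compute cost scales with k,
    not with the total number of experts."""
    topk = sorted(range(len(experts)),
                  key=lambda i: router_scores[i], reverse=True)[:k]
    weights = softmax([router_scores[i] for i in topk])
    return sum(w * experts[i](x) for w, i in zip(weights, topk))

# Eight toy "experts", each just scaling its input by a different factor.
experts = [lambda x, s=s: s * x for s in range(1, 9)]
out = smoe_forward(10.0, experts,
                   router_scores=[0.1, 0.9, 0.2, 0.05, 0.0, 0.3, 0.0, 0.8])
```

With eight experts and top-2 routing, only a quarter of the expert parameters are touched per token, which is how Mixtral gets near-47B-parameter capacity at roughly 13B-parameter inference cost.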
- Mistral website was just updated
- Mistral AI – open-source models
- Mistral 8x7B 32k model [magnet]
-
Ask HN: Why the LLaMA code base is so short
I was getting into LLMs and picked up some projects. I tried to dive into the code to see what the secret sauce is.
But the code is so short that there is almost nothing to read.
https://github.com/facebookresearch/llama
I then proceeded to check https://github.com/mistralai/mistral-src and, surprisingly, it's the same.
What exactly are these codebases? It feels like they just download the models.
text-generation-webui-colab
- Text-Generation-Webui-Colab
-
what uncensored roleplay chatbot does to a mf
-
EdgeGPT on colab
Hello people, I just wanted to say that if you wanted to use EdgeGPT but couldn't, now you can using my colab (it works without cookies). I merged camenduru's colab with this one and added some tweaks; it has an easy way to download extensions and all the models that don't require changing webui files.
- What am I doing wrong? OPT-2.7b
What are some alternatives?
ReAct - [ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
lida - Automatic Generation of Visualizations and Infographics using Large Language Models
Local-LLM-Langchain - Load local LLMs effortlessly in a Jupyter notebook for testing alongside Langchain or other agents. Contains Oobabooga and KoboldAI versions of the Langchain notebooks, with examples.
ragas - Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
gpt-j-fine-tuning-example - Fine-tuning 6-Billion GPT-J (& other models) with LoRA and 8-bit compression
vllm - A high-throughput and memory-efficient inference and serving engine for LLMs
KoAlpaca - KoAlpaca: an open-source language model that understands Korean instructions
llama - Inference code for Llama models
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
chameleon-llm - Codes for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models".
comfyui-colab - ComfyUI colab templates and new nodes