| | llmperf-leaderboard | llama.cpp |
|---|---|---|
| Mentions | 6 | 788 |
| Stars | 396 | 59,389 |
| Growth | 4.5% | - |
| Activity | 5.6 | 10.0 |
| Latest commit | 5 months ago | about 16 hours ago |
| Language | - | C++ |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llmperf-leaderboard
- Groq CEO: 'We No Longer Sell Hardware'
> time to first token != tokens per second
I said "and extremely low latency" because I know they are different. TTFT is consistently lower for Groq than for any other provider. Here's some benchmarks: https://github.com/ray-project/llmperf-leaderboard#70b-model...
- LLMPerf Leaderboard
- Groq
I don't know about GPT 3.5 specifically, but on this independent benchmark (LLMPerf) Groq's time to first token is also lowest:
https://github.com/ray-project/llmperf-leaderboard?tab=readm...
- Brave Leo now uses Mixtral 8x7B as default
Not so sure about that. Check out https://github.com/ray-project/llmperf-leaderboard
And try Mixtral on chat.groq.com
- Nvidia Unveils RTX 5880 Graphics Card with 14,080 CUDA Cores and 48GB VRAM
> independent performance confirmations.
Groq has just been added to the LLMPerf Leaderboard:
https://github.com/ray-project/llmperf-leaderboard#output-to...
(Disclosure: I work for Groq)
llama.cpp
- IBM Granite: A Family of Open Foundation Models for Code Intelligence
If you can compile stuff, then looking at llama.cpp (which Ollama uses) is also interesting: https://github.com/ggerganov/llama.cpp
The server is here: https://github.com/ggerganov/llama.cpp/tree/master/examples/...
And you can search for any GGUF model on Hugging Face.
- Ask HN: Affordable hardware for running local large language models?
Yes, Metal seems to allow a maximum of 1/2 of the RAM for one process, and 3/4 of the RAM allocated to the GPU overall. There’s a kernel hack to fix it, but that comes with the usual system integrity caveats. https://github.com/ggerganov/llama.cpp/discussions/2182
- Xmake: A modern C/C++ build tool
- Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
- Llama.cpp Bfloat16 Support
- Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
- GGML Flash Attention support merged into llama.cpp
- Phi-3 Weights Released
Well, https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
What are some alternatives?
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
gpt4all - gpt4all: run open-source LLMs anywhere
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
ggml - Tensor library for machine learning
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
rust-gpu - 🐉 Making Rust a first-class language and ecosystem for GPU shaders 🚧
ChatGLM-6B - ChatGLM-6B: An Open Bilingual Dialogue Language Model
safetensors - Simple, safe way to store and distribute tensors
AutoGPT - AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
alpaca-lora - Instruct-tune LLaMA on consumer hardware