C++ llama

Open-source C++ projects categorized as llama

Top 10 C++ llama Projects

  • llama.cpp

    LLM inference in C/C++

  • Project mention: New exponent functions that make SiLU and SoftMax 2x faster, at full acc | news.ycombinator.com | 2024-05-15
  • LocalAI

    :robot: The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware with no GPU required. Runs GGUF, transformers, diffusers, and many more model architectures, and can generate text, audio, video, and images, with voice-cloning capabilities.

  • Project mention: LocalAI: Self-hosted OpenAI alternative reaches 2.14.0 | news.ycombinator.com | 2024-05-03
  • PowerInfer

    High-speed Large Language Model Serving on PCs with Consumer-grade GPUs

  • Project mention: FLaNK 25 December 2023 | dev.to | 2023-12-26
  • cortex

    Drop-in, local AI alternative to the OpenAI stack. Multi-engine (llama.cpp, TensorRT-LLM). Powers 👋 Jan (by janhq)

  • Project mention: Introducing Jan | dev.to | 2024-05-05

    Jan incorporates a lightweight, built-in inference server called Nitro. Nitro supports both the llama.cpp and NVIDIA TensorRT-LLM engines, which means many open LLMs in the GGUF format are supported. Jan's Model Hub is designed for easy installation of pre-configured models, but it also allows you to install virtually any model from Hugging Face, or even your own.

  • LlamaGPTJ-chat

    Simple chat program for LLaMa, GPT-J, and MPT models.

  • Project mention: New to this community, most models I download fail and end up in a core dump | /r/LocalLLaMA | 2023-05-23

    If you want to use that model specifically, check out: https://github.com/kuvaus/LlamaGPTJ-chat

  • llama_cpp.rb

    llama_cpp provides Ruby bindings for llama.cpp

  • Project mention: Llama.cpp: Full CUDA GPU Acceleration | news.ycombinator.com | 2023-06-12

    Python sits in the C-glue segment of programming languages (where Perl, PHP, Ruby, and Node are also notable members). Being a glue language means having APIs to a lot of external toolchains written not only in C/C++ but in many other compiled languages, plus APIs and system resources. Conda, virtualenv, etc. are godsend tools for making it all work, or even better, for freezing things once they all work, without resorting to Docker, VMs, or shell scripts. It's meant for application and DevOps people who need to slap together, e.g., ML, NumPy, Elasticsearch, AWS APIs, and REST endpoints and Get $hit Done.

    It's annoying to see these "glueys" compared unfavorably to the binary compiled segment where the heavy lifting is done. Python and others exist to latch on and assimilate. Resistance is futile:

    https://pypi.org/project/pyllamacpp/

    https://www.npmjs.com/package/llama-node

    https://packagist.org/packages/kambo/llama-cpp-php

    https://github.com/yoshoku/llama_cpp.rb

  • collider

    Large Model Collider - a platform for serving LLMs

  • Project mention: Show HN: Collider – the platform for local LLM debug and inference at warp speed | news.ycombinator.com | 2023-11-30
  • pyllamacpp

    Python bindings for llama.cpp

  • llama-server-chat-terminal-client

    Lightweight terminal chat interface for the llama.cpp server, compilable for Windows and Linux.

  • Project mention: Terminal client chat for llama.cpp server. | /r/LocalLLaMA | 2023-12-05
  • llama-chat

    Simple chat program for LLaMa models (by kuvaus)

  • Project mention: Local vicuna AI for low end pc? | /r/LocalLLaMA | 2023-06-21
NOTE: The open-source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months or since we started tracking (Dec 2020).
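
For context on the SiLU and SoftMax mention above: these are standard neural-network operations whose exponent-heavy inner loops are what llama.cpp optimizes. The sketch below gives only the plain reference definitions in Python, not the fast C/C++ approximations described in the linked post:

```python
import math

def silu(x):
    # SiLU (a.k.a. swish): x * sigmoid(x)
    return x / (1.0 + math.exp(-x))

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])  # probabilities that sum to 1
```

Both functions call `exp` once per element, which is why faster exponent approximations translate directly into faster SiLU and SoftMax kernels.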

C++ llama related posts

  • New exponent functions that make SiLU and SoftMax 2x faster, at full acc

    2 projects | news.ycombinator.com | 15 May 2024
  • Gemini Flash

    2 projects | news.ycombinator.com | 14 May 2024
  • Ggml: Add Flash Attention

    1 project | news.ycombinator.com | 13 May 2024
  • Structured: Extract Data from Unstructured Input with LLM

    3 projects | dev.to | 10 May 2024
  • IBM Granite: A Family of Open Foundation Models for Code Intelligence

    3 projects | news.ycombinator.com | 7 May 2024
  • Ask HN: Affordable hardware for running local large language models?

    1 project | news.ycombinator.com | 5 May 2024
  • LocalAI: Self-hosted OpenAI alternative reaches 2.14.0

    1 project | news.ycombinator.com | 3 May 2024

Index

What are some of the best open-source llama projects in C++? This list will help you:

Project Stars
1 llama.cpp 57,984
2 LocalAI 20,346
3 PowerInfer 6,996
4 cortex 1,635
5 LlamaGPTJ-chat 211
6 llama_cpp.rb 144
7 collider 117
8 pyllamacpp 59
9 llama-server-chat-terminal-client 10
10 llama-chat 7
