LLamaSharp vs LLamaStack

| | LLamaSharp | LLamaStack |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 2,126 | 32 |
| Growth | 12.0% | - |
| Activity | 9.8 | 10.0 |
| Latest commit | 3 days ago | 7 months ago |
| Language | C# | C# |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LLamaSharp
-
This is getting really complicated.
For example, I had my own task and needed another tool, so I searched and found what I needed: https://github.com/SciSharp/LLamaSharp , which lets me take the next step with https://github.com/Xsanf/LLaMa_Unity . I can already run an LLM in Unity, and that opens up the possibility of using it natively in games.
-
Cannot for the life of me compile libllama.dll
I searched through GitHub and nothing new comes up. I wanted to run the model through the C# wrapper linked on LLaMASharp, which requires compiling llama.cpp and copying the libllama DLL into the C# project files. When I build llama.cpp with OpenBLAS, everything shows up fine on the command line, and as the link suggests I make sure to set -DBUILD_SHARED_LIBS=ON in CMake. However, the output in the Visual Studio Developer Command Prompt ignores the libllama.dll setup in CMakeLists.txt entirely: the only DLL that compiles is llama.dll. I know this is a fairly technical question, but does anyone know how to fix this?
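One likely explanation, as a sketch: newer llama.cpp trees name the shared-library target `llama` (producing `llama.dll` on Windows) rather than `libllama`, while LLamaSharp's native loader looks for `libllama.dll`, so renaming the built DLL is often enough. The build flags below are assumptions that vary by llama.cpp version (older trees use `LLAMA_BLAS`, newer ones `GGML_BLAS`); adjust to match your checkout.

```shell
# Configure llama.cpp as a shared library with OpenBLAS
# (flag names differ across llama.cpp versions; these are from older trees)
cmake -B build -DBUILD_SHARED_LIBS=ON -DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS
cmake --build build --config Release

# llama.dll IS the shared library; copy it under the name LLamaSharp expects
copy build\bin\Release\llama.dll path\to\your\csharp\project\libllama.dll
```

If the rename works, nothing in CMakeLists.txt needs patching; the mismatch is purely in the output file name.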
-
Could I get a suggestion for a simple HTTP API with no GUI for llama.cpp?
C#/.NET: SciSharp/LLamaSharp
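To show what the C#/.NET route looks like in practice, here is a minimal sketch based on LLamaSharp's quick-start: load a GGUF model and stream tokens from a prompt. Exact type and property names (`ModelParams`, `LLamaWeights`, `InteractiveExecutor`, `InferenceParams`) may differ between LLamaSharp versions, and the model path is a placeholder; wrapping this in an ASP.NET Core minimal API endpoint would give the GUI-less HTTP server the question asks about.

```csharp
using LLama;
using LLama.Common;

// Placeholder path: point this at a local GGUF model file
var parameters = new ModelParams("model.gguf")
{
    ContextSize = 2048,
    GpuLayerCount = 0 // CPU-only; raise if a CUDA backend is installed
};

// Load the weights and create an inference context
using var weights = LLamaWeights.LoadFromFile(parameters);
using var context = weights.CreateContext(parameters);
var executor = new InteractiveExecutor(context);

// Stream generated tokens to the console
var inferenceParams = new InferenceParams { MaxTokens = 128 };
await foreach (var token in executor.InferAsync("Hello, world!", inferenceParams))
{
    Console.Write(token);
}
```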
LLamaStack
What are some alternatives?
SillyTavern - LLM Frontend for Power Users.
llama.go - llama.go is like llama.cpp in pure Golang!
llama.cpp-dotnet - Minimal C# bindings for llama.cpp + .NET core library with API host/client.
gpu_poor - Calculate token/s & GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization
llama.net - .NET wrapper for LLaMA.cpp for LLaMA language model inference on CPU. 🦙
langchain-alpaca - Run Alpaca LLM in LangChain
SciSharp-Stack-Examples - Practical examples written in SciSharp's machine learning libraries
LocalAI - 🤖 The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers, and many more model architectures. Generates text, audio, video, and images, and includes voice cloning capabilities.
PerroPastor - Run Llama based LLMs in Unity entirely in compute shaders with no dependencies
llama-node - Believe in AI democratization. llama for Node.js, backed by llama-rs, llama.cpp, and rwkv.cpp; works locally on your laptop CPU. Supports llama/alpaca/gpt4all/vicuna/rwkv models.
go-llama.cpp - LLama.cpp golang bindings