go-llama.cpp
llama.cpp Golang bindings (by go-skynet)
llama-node
Believe in AI democratization. Llama for Node.js, backed by llama-rs, llama.cpp, and rwkv.cpp; works locally on your laptop CPU. Supports llama/alpaca/gpt4all/vicuna/rwkv models. (by Atome-FE)
| | go-llama.cpp | llama-node |
|---|---|---|
| Mentions | 4 | 2 |
| Stars | 590 | 849 |
| Growth | 10.3% | 0.9% |
| Activity | 7.9 | 8.6 |
| Last commit | 8 days ago | 10 months ago |
| Language | C++ | Rust |
| License | MIT License | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
go-llama.cpp
Posts with mentions or reviews of go-llama.cpp.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-06-19.
- Local LLMs: are there any yet for <= 4 GB of VRAM?
- LocalAI v1.19.0 - CUDA GPU support!
  Full CUDA GPU offload support (PR by mudler; thanks to chnyda for handing over GPU access, and to lu-zero for help with debugging).
- Could I get a suggestion for a simple HTTP API with no GUI for llama.cpp?
  Go: go-skynet/go-llama.cpp
- Redirecting Model Outputs from llama.cpp to a TXT File for Easier Tracking of Results?
  I've had great success using go-llama.cpp to wrap llama.cpp in a much friendlier language. The install process is a bit clunky: Go does not like compiling submodules, so you need to use a `replace` directive in your go.mod file to point to a local copy of go-llama.cpp that you've already compiled manually.
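A minimal sketch of that go.mod workaround. The module name and the local path are assumptions for illustration; point the `replace` at wherever you cloned and built go-llama.cpp:

```
module example.com/myapp

go 1.20

require github.com/go-skynet/go-llama.cpp v0.0.0-00010101000000-000000000000

// Redirect the dependency to a local checkout that has already been
// built manually (so its C++ submodule and static library exist).
replace github.com/go-skynet/go-llama.cpp => ../go-llama.cpp
```

With the `replace` in place, `go build` resolves the import against the local directory instead of fetching the module, sidestepping the submodule compilation problem.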
llama-node
Posts with mentions or reviews of llama-node.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-07-15.
- Tell HN: Rust Is the Superglue
  You can practice your Rust skills by writing performant and/or gluey extensions for higher-level languages such as Node.js (check out napi-rs) and Python, or by complementing JS in the browser if you target WebAssembly.
  For instance, check out llama-node (https://github.com/Atome-FE/llama-node) for an involved Rust-based Node.js extension. Python has PyO3, a Rust-Python extension toolset: https://github.com/PyO3/pyo3.
  They can help you leverage your Rust for writing cool new stuff.
- Could I get a suggestion for a simple HTTP API with no GUI for llama.cpp?
  Node.js: hlhr202/llama-node
What are some alternatives?
When comparing go-llama.cpp and llama-node you can also consider the following projects:
llama-cpp-python - Python bindings for llama.cpp
ChainFury - 🦋 Production-grade chaining engine behind TuneChat. Self-host today!