Run LLMs on my own Mac, fast and efficient, in only 2 MB

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • llama.cpp

    LLM inference in C/C++

  • I hate this kind of clickbait marketing that suggests the project delivers 1/100th the size or 100x-35,000x the speed of other solutions because it uses a different language for a wrapper around the core library, while completely neglecting the tooling and community expertise built around those other solutions.

    First of all, the project is based on llama.cpp [1], which does the heavy work of loading and running multi-GB model files on the GPU/CPU, and the inference speed is not limited by the choice of wrapper (there are other wrappers in Go, Python, Node, Rust, etc., or one can use llama.cpp directly). The size of the binary is also not that important when common quantized model files are often in the 5-40 GB range and require a beefy GPU or a machine with 16-64 GB of RAM.

    [1] https://github.com/ggerganov/llama.cpp
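
To put that size argument in rough numbers, here is a back-of-the-envelope sketch. The ~4.5 bits per weight used below is an assumed average for a 4-bit quantization, not a figure taken from any of these projects:

```rust
// Back-of-the-envelope comparison: even a mid-sized quantized model dwarfs a
// 2 MB wrapper/runtime binary. The bits-per-weight value is an assumed average
// for a Q4-style quantization, not something measured from the project.
fn main() {
    let params = 13.0e9_f64; // a 13B-parameter model
    let bits_per_weight = 4.5_f64; // rough average for 4-bit quantization
    let model_gb = params * bits_per_weight / 8.0 / 1e9;
    let wrapper_mb = 2.0_f64; // the "2 MB" binary from the headline

    println!(
        "model file ≈ {:.1} GB vs. wrapper binary ≈ {} MB",
        model_gb, wrapper_mb
    );
    // Prints roughly: model file ≈ 7.3 GB vs. wrapper binary ≈ 2 MB
}
```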

  • whisper-turbo

    Cross-Platform, GPU Accelerated Whisper 🏎️

  • wasi-nn

    Neural Network proposal for WASI

  • Mmm…

    The wasi-nn that this relies on (https://github.com/WebAssembly/wasi-nn) is a proposal that relies on arbitrary plugin backends sending arbitrary chunks to some vendor implementation. The API is literally just: set input, compute, get output.

    …and that is totally non-portable.

    The reason this works is that it relies on the abstraction already implemented in llama.cpp, which lets it take a GGUF model and map it to multiple hardware targets, and which you can see has been lifted here: https://github.com/WasmEdge/WasmEdge/tree/master/plugins/was...

    So..

    > Developers can refer to this project to write their machine learning application in a high-level language using the bindings, compile it to WebAssembly, and run it with a WebAssembly runtime that supports the wasi-nn proposal, such as WasmEdge.

    Is total rubbish; no, you can’t.

    This isn’t portable.

    It’s not sandboxed.

    If you have a wasm binary you might be able to run it if the version of the runtime you’re using happens to implement the specific ggml backend you need, which it probably doesn’t… because there’s literally no requirement for it to do so.

    There’s a lot of “so portable” talk in this article which really seems misplaced.
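
For readers who have not seen wasi-nn, the "set input, compute, get output" shape described above looks roughly like this on the guest side. This is a sketch modeled on the Rust bindings used in WasmEdge's ggml examples; the exact crate, type, and constant names (in particular the ggml encoding, which is a WasmEdge extension rather than part of the upstream proposal) vary between binding versions, so treat the identifiers as approximate.

```rust
// Rough sketch of a wasi-nn guest program (compiled to a wasm32-wasi target).
// Identifiers are approximate: they follow the Rust bindings used in
// WasmEdge's ggml examples, and the Ggml encoding is a WasmEdge extension,
// not something every wasi-nn-capable runtime is required to provide.
use wasi_nn::{ExecutionTarget, GraphBuilder, GraphEncoding, TensorType};

fn main() {
    let prompt = "Why is the sky blue?";

    // Ask the host to load a ggml/GGUF model. Whether this succeeds depends
    // entirely on the runtime shipping a ggml backend plugin; the proposal
    // itself only standardizes the load / set input / compute / get output calls.
    let graph = GraphBuilder::new(GraphEncoding::Ggml, ExecutionTarget::AUTO)
        .build_from_cache("default") // model is registered by name on the host side
        .expect("host runtime has no ggml backend for wasi-nn");

    let mut ctx = graph.init_execution_context().expect("init context");

    // "Set input": the prompt is handed over as an opaque byte tensor.
    ctx.set_input(0, TensorType::U8, &[1], prompt.as_bytes())
        .expect("set input");

    // "Compute": the actual inference runs in the host plugin (llama.cpp).
    ctx.compute().expect("compute");

    // "Get output": read back whatever bytes the backend produced.
    let mut out = vec![0u8; 8192];
    let n = ctx.get_output(0, &mut out).expect("get output");
    println!("{}", String::from_utf8_lossy(&out[..n]));
}
```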

  • SSVM (now WasmEdge)

    WasmEdge is a lightweight, high-performance, and extensible WebAssembly runtime for cloud native, edge, and decentralized applications. It powers serverless apps, embedded functions, microservices, smart contracts, and IoT devices.

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.

Related posts

  • A WASM runtime for running LLMs locally

    1 project | news.ycombinator.com | 29 Dec 2023
  • Orca-2-13B Runs Directly on Rust+WASM – No Python/C++ Hassles

    1 project | news.ycombinator.com | 26 Nov 2023
  • Security Slam 2023: Contribute to WasmEdge and Elevate Open Source Security

    1 project | news.ycombinator.com | 28 Oct 2023
  • WasmEdge 0.13.0: Unified CLI, ARM Support and Migrating Extensions to Plugins

    1 project | news.ycombinator.com | 2 Jul 2023
  • Release: WasmEdge 0.12 and 0.12.1

    10 projects | news.ycombinator.com | 28 May 2023