| | mlx | slint |
|---|---|---|
| Mentions | 23 | 138 |
| Stars | 14,956 | 15,461 |
| Growth | 4.4% | 2.9% |
| Activity | 9.8 | 9.9 |
| Latest commit | 5 days ago | 5 days ago |
| Language | C++ | Rust |
| License | MIT License | GNU General Public License v3.0 or Slint Royalty-Free |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mlx
-
Ollama v0.1.33 with Llama 3, Phi 3, and Qwen 110B
Yes, we are also looking at integrating MLX [1], which is optimized for Apple Silicon and built by an incredible team of individuals, a few of whom were behind the original Torch [2] project. There's also TensorRT-LLM [3] by Nvidia, optimized for their recent hardware.
All of this of course acknowledging that llama.cpp is an incredible project with competitive performance and support for almost any platform.
[1] https://github.com/ml-explore/mlx
[2] https://en.wikipedia.org/wiki/Torch_(machine_learning)
[3] https://github.com/NVIDIA/TensorRT-LLM
-
Ask HN: What is the current (Apr. 2024) gold standard of running an LLM locally?
If you're able to purchase a separate GPU, the most popular option is to get an NVIDIA RTX3090 or RTX4090.
Apple M2 or M3 Macs are becoming a viable option because of MLX https://github.com/ml-explore/mlx . If you are getting an M-series Mac for LLMs, I'd recommend getting something with 24GB or more of RAM.
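For a sense of what that looks like in practice, here is a minimal, hedged sketch using the mlx-lm package; the 4-bit community model named below is only an example, and it still needs a Mac with enough unified memory to hold it.

```python
# pip install mlx-lm
from mlx_lm import load, generate

# Example 4-bit model from the mlx-community hub (assumption: pick any name/size that fits your machine).
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

# Generate a short completion entirely on-device.
print(generate(model, tokenizer, prompt="Why does unified memory help local LLMs?", max_tokens=128))
```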
- MLX Community Projects
- FLaNK 15 Jan 2024
- Why the M2 is more advanced than it seemed
-
I made an app that runs Mistral 7B 0.2 LLM locally on iPhone Pros
1) No Neural Engine API
2) CoreML has challenges modeling LLMs efficiently right now
3) Not Enough Benefit (For the Cost... Yet!)
This is my best understanding based on my own work and research for a local LLM iOS app. Read on for more in-depth justifications of each point!
---
1) No Neural Engine API
- There is no developer API to use the Neural Engine programmatically, so CoreML is the only way to be able to use it.
2) CoreML has challenges modeling LLMs efficiently right now.
- Its most-optimized use cases seem tailored for image models, as it works best with fixed input lengths[1][2], which are fairly limiting for general language modeling (are all prompts, sentences and paragraphs the same number of tokens? do you want to pad all your inputs?). A conversion sketch illustrating this follows the list.
- CoreML features limited support for the leading approaches for compressing LLMs (quantization, whether weights-only or activation-aware). Falcon-7b-instruct (fp32) in CoreML is 27.7GB [3], Llama-2-chat (fp16) is 13.5GB [4] — neither will fit in memory on any currently shipping iPhone. They'd only barely fit on the newest, highest-end iPad Pros.
- HuggingFace's swift-transformers[5] is a CoreML-focused library under active development to eventually help developers with many of these problems, in addition to an `exporters` cli tool[6] that wraps Apple's `coremltools` for converting PyTorch or other models to CoreML.
3) Not Enough Benefit (For the Cost... Yet!)
- ANE & GPU (Metal) have access to the same unified memory. They are both subject to the same restrictions on background execution (you simply can't use them in the background, or your app is killed[7]).
- So the main benefit from unlocking the ANE would be multitasking: running an ML task in parallel with non-ML tasks that might also require the GPU: e.g. SwiftUI Metal Shaders, background audio processing (shoutout Overcast!), screen recording/sharing, etc. Absolutely worthwhile to achieve, but for the significant work required and the lack of ecosystem currently around CoreML for LLMs specifically, the benefits become less clear.
- Apple's hot new ML library, MLX, only uses Metal for GPU[8], just like Llama.cpp. More nuanced differences arise on closer inspection related to MLX's focus on unified memory optimizations. So perhaps we can squeeze out some performance from unified memory in Llama.cpp, but CoreML will be the only way to unlock ANE, which is lower priority according to lead maintainer Georgi Gerganov as of late this past summer[9], likely for many of the reasons enumerated above.
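To make the fixed-versus-flexible input length point above concrete, here is a minimal, hedged coremltools sketch; the tiny model, names, and shape bounds are invented for illustration and are not taken from the comment. A fixed sequence length forces padding or truncating every prompt, while flexible shapes via ct.RangeDim are closer to what language modeling needs and are where CoreML support is less optimized today.

```python
import numpy as np
import torch
import coremltools as ct

# Toy stand-in for a language model: an embedding layer plus mean pooling.
class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = torch.nn.Embedding(1000, 64)

    def forward(self, input_ids):
        return self.embed(input_ids).mean(dim=1)

traced = torch.jit.trace(TinyModel(), torch.zeros(1, 128, dtype=torch.long))

# Fixed length: every prompt must be padded or truncated to exactly 128 tokens.
fixed = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input_ids", shape=(1, 128), dtype=np.int32)],
)

# Flexible length via RangeDim: what variable-length prompts actually need.
flexible_shape = ct.Shape(shape=(1, ct.RangeDim(lower_bound=1, upper_bound=512, default=128)))
flexible = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input_ids", shape=flexible_shape, dtype=np.int32)],
)
```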
I've learned most of this while working on my own private LLM inference app, cnvrs[10] — would love to hear your feedback or thoughts!
Britt
---
[1] https://github.com/huggingface/exporters/pull/37
[2] https://apple.github.io/coremltools/docs-guides/source/flexi...
[3] https://huggingface.co/tiiuae/falcon-7b-instruct/tree/main/c...
[4] https://huggingface.co/coreml-projects/Llama-2-7b-chat-corem...
[5] https://github.com/huggingface/swift-transformers
[6] https://github.com/huggingface/exporters
[7] https://developer.apple.com/documentation/metal/gpu_devices_...
[8] https://github.com/ml-explore/mlx/issues/18
[9] https://github.com/ggerganov/llama.cpp/issues/1714#issuecomm...
[10] https://testflight.apple.com/join/ERFxInZg
-
Ferret: An End-to-End MLLM by Apple
Maybe MLX is meant to fill this gap?
https://github.com/ml-explore/mlx
-
PowerInfer: Fast Large Language Model Serving with a Consumer-Grade GPU [pdf]
This is basically a fork of llama.cpp. I created a PR to see the diff and added my comments on it: https://github.com/ggerganov/llama.cpp/pull/4543
One thing that caught my interest is this line from their readme:
> PowerInfer exploits such an insight to design a GPU-CPU hybrid inference engine: hot-activated neurons are preloaded onto the GPU for fast access, while cold-activated neurons are computed on the CPU, thus significantly reducing GPU memory demands and CPU-GPU data transfers.
Apple's Metal/M3 is perfect for this because CPU and GPU share memory. No need to do any data transfers. Check out mlx from Apple: https://github.com/ml-explore/mlx
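As a hedged illustration of that shared-memory point (the array sizes and operations are invented, not from the comment): in MLX you choose a device per operation via the stream argument, and both devices see the same buffers, so no copies are issued.

```python
import mlx.core as mx

a = mx.random.normal((2048, 2048))
b = mx.random.normal((2048, 2048))

# Same unified-memory arrays, different devices per op: no explicit transfers.
hot = mx.matmul(a, b, stream=mx.gpu)     # heavy ("hot") work on the GPU
cold = mx.sum(a, axis=0, stream=mx.cpu)  # lighter ("cold") work on the CPU
mx.eval(hot, cold)                       # MLX is lazy; this forces evaluation
```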
-
Whisper: Nvidia RTX 4090 vs. M1 Pro with MLX
How does this compare to insanely-fast-whisper though? https://github.com/Vaibhavs10/insanely-fast-whisper
I think that not using optimizations allows this to be a 1:1 comparison, but if the optimizations are not ported to MLX, then it would still be better to use a 4090.
Having looked at MLX recently, I think it's definitely going to get traction on Macs - and iOS when Swift bindings are released https://github.com/ml-explore/mlx/issues/15 (although there might be some C++20 compilation issue blocking right now).
-
[D] M3 MAX 64GB VS RTX 3080
The software is already there; check the new ML framework from Apple: https://github.com/ml-explore/mlx
slint
-
Ask HN: Why would you ever use C++ for a new project over Rust?
Did you get a chance to check https://slint.dev?
Disclaimer: I work for Slint
-
Deno in 2023
Currently, we do it by using native binaries through napi-rs so we can bring up a window using the platform-native API, and then we do some hacks to merge the event loops.
But if Deno supports bringing up a window directly, this means we can just ship wasm instead of a native binary for all platforms. I also hope event loop integration will be simplified.
Although we'd also need more APIs than just showing a window (mouse and keyboard input, accessibility, popup windows, system tray, ...)
[1] https://slint.dev
-
Slint GUI Toolkit
Rich Text content is not yet implemented. This is tracked in https://github.com/slint-ui/slint/issues/2723
Thanks for reporting the broken link. Fixed in https://github.com/slint-ui/slint/commit/9200480b532f49007d2...
-
slint VS rinf - a user suggested alternative
2 projects | 24 Jan 2024
-
A 2024 Plea for Lean Software
With Slint (https://slint.dev) we're trying to make a lightweight toolkit that doesn't use HTML/CSS, and that you can program either from low-level languages such as C++ or Rust, or from higher-level languages such as JavaScript; we want to extend to Python too.
-
Immediate Mode GUI Programming
I haven't. I was just searching for a GUI library that was Bevy-compatible and slint isn't at the moment: https://github.com/slint-ui/slint/discussions/940
Sorry!
-
Why the M2 is more advanced than it seemed
Trying to do that with Slint: https://slint.dev
- 9 years of Apple text editor solo dev
-
The Linux graphics stack in a nutshell, part 1
You can do that with Slint (https://slint.dev) and its linuxkms backend. No need for an Xorg server or Wayland compositor; just run the application made with Slint from the init script.
- Qt 6.6 and 6.7 Make QML Faster Than Ever: A New Benchmark and Analysis
What are some alternatives?
cog-whisper-diarization - Cog implementation of transcribing + diarization pipeline with Whisper & Pyannote
tauri - Build smaller, faster, and more secure desktop applications with a web frontend.
Cgml - GPU-targeted vendor-agnostic AI library for Windows, and Mistral model implementation.
iced - A cross-platform GUI library for Rust, inspired by Elm
llama.cpp - LLM inference in C/C++
egui - an easy-to-use immediate mode GUI in Rust that runs on both web and native
enchanted - Enchanted is iOS and macOS app for chatting with private self hosted language models such as Llama2, Mistral or Vicuna using Ollama.
lvgl - Embedded graphics library to create beautiful UIs for any MCU, MPU and display type.
swift-transformers - Swift Package to implement a transformers-like API in Swift
dioxus - Fullstack GUI library for web, desktop, mobile, and more.
mlx-examples - Examples in the MLX framework
cxx-qt - Safe interop between Rust and Qt