mlx VS text-generation-webui

Compare mlx and text-generation-webui and see what their differences are.

mlx

MLX: An array framework for Apple silicon (by ml-explore)

text-generation-webui

A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models. (by oobabooga)
                mlx            text-generation-webui
Mentions        23             877
Stars           14,956         37,401
Growth          9.8%           -
Activity        9.8            9.9
Last commit     3 days ago     1 day ago
Language        C++            Python
License         MIT License    GNU Affero General Public License v3.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

mlx

Posts with mentions or reviews of mlx. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-28.
  • Ollama v0.1.33 with Llama 3, Phi 3, and Qwen 110B
    11 projects | news.ycombinator.com | 28 Apr 2024
    Yes, we are also looking at integrating MLX [1], which is optimized for Apple Silicon and built by an incredible team of individuals, a few of whom were behind the original Torch [2] project. There's also TensorRT-LLM [3] by Nvidia, optimized for their recent hardware.

    All of this of course acknowledging that llama.cpp is an incredible project with competitive performance and support for almost any platform.

    [1] https://github.com/ml-explore/mlx

    [2] https://en.wikipedia.org/wiki/Torch_(machine_learning)

    [3] https://github.com/NVIDIA/TensorRT-LLM

  • Ask HN: What is the current (Apr. 2024) gold standard of running an LLM locally?
    11 projects | news.ycombinator.com | 1 Apr 2024
    If you're able to purchase a separate GPU, the most popular option is to get an NVIDIA RTX 3090 or RTX 4090.

    Apple Macs with M2 or M3 chips are becoming a viable option because of MLX (https://github.com/ml-explore/mlx). If you are getting an M-series Mac for LLMs, I'd recommend getting something with 24GB or more of RAM.
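    For a feel of what MLX code looks like, here is a minimal sketch of its NumPy-like, lazily evaluated API (assumes `pip install mlx` on an Apple-silicon Mac):

    ```python
    # Minimal MLX sketch: NumPy-style arrays computed on the Apple GPU.
    import mlx.core as mx

    a = mx.random.normal((4096, 4096))
    b = mx.random.normal((4096, 4096))

    c = a @ b      # recorded lazily; nothing has been computed yet
    mx.eval(c)     # force evaluation (runs on the Metal GPU by default)
    print(c.shape, c.dtype)
    ```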

  • MLX Community Projects
    1 project | news.ycombinator.com | 8 Feb 2024
  • FLaNK 15 Jan 2024
    21 projects | dev.to | 15 Jan 2024
  • Why the M2 is more advanced than it seemed
    5 projects | news.ycombinator.com | 15 Jan 2024
  • I made an app that runs Mistral 7B 0.2 LLM locally on iPhone Pros
    12 projects | news.ycombinator.com | 7 Jan 2024
    1) No Neural Engine API

    2) CoreML has challenges modeling LLMs efficiently right now

    3) Not Enough Benefit (For the Cost... Yet!)

    This is my best understanding based on my own work and research for a local LLM iOS app. Read on for more in-depth justifications of each point!

    ---

    1) No Neural Engine API

    - There is no developer API to use the Neural Engine programmatically, so CoreML is the only way to use it.
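    As an illustrative aside (not part of the original comment): with coremltools, the closest a developer gets to the ANE is a hint when loading a model; there is no way to program it directly.

    ```python
    # Core ML is the only sanctioned route to the Neural Engine: you can
    # ask for it via compute_units, but you cannot target it directly.
    import coremltools as ct

    model = ct.models.MLModel(
        "MyModel.mlpackage",                      # hypothetical converted model
        compute_units=ct.ComputeUnit.CPU_AND_NE,  # prefer CPU + Neural Engine
    )
    ```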

    2) CoreML has challenges modeling LLMs efficiently right now.

    - Its most-optimized use cases seem tailored to image models, as it works best with fixed input lengths[1][2], which are fairly limiting for general language modeling (are all prompts, sentences, and paragraphs the same number of tokens? Do you want to pad all your inputs?).

    - CoreML features limited support for the leading approaches for compressing LLMs (quantization, whether weights-only or activation-aware). Falcon-7b-instruct (fp32) in CoreML is 27.7GB [3], Llama-2-chat (fp16) is 13.5GB [4] — neither will fit in memory on any currently shipping iPhone. They'd only barely fit on the newest, highest-end iPad Pros.

    - HuggingFace's swift-transformers[5] is a CoreML-focused library under active development to eventually help developers with many of these problems, in addition to an `exporters` CLI tool[6] that wraps Apple's `coremltools` for converting PyTorch or other models to CoreML.
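    As a rough sanity check on those sizes (an illustrative aside, not part of the original comment), weight memory is approximately parameter count times bytes per parameter:

    ```python
    # Back-of-the-envelope weight-memory estimates for dense 7B models.
    def weight_gb(params_billion: float, bytes_per_param: float) -> float:
        """Approximate weight size in GB; real exports add some overhead."""
        return params_billion * bytes_per_param

    print(weight_gb(7, 4))    # fp32  -> ~28 GB (matches Falcon-7B's ~27.7 GB)
    print(weight_gb(7, 2))    # fp16  -> ~14 GB (matches Llama-2's ~13.5 GB)
    print(weight_gb(7, 0.5))  # 4-bit -> ~3.5 GB, plausible on recent iPhones
    ```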

    3) Not Enough Benefit (For the Cost... Yet!)

    - ANE & GPU (Metal) have access to the same unified memory. They are both subject to the same restrictions on background execution (you simply can't use them in the background, or your app is killed[7]).

    - So the main benefit from unlocking the ANE would be multitasking: running an ML task in parallel with non-ML tasks that might also require the GPU: e.g. SwiftUI Metal Shaders, background audio processing (shoutout Overcast!), screen recording/sharing, etc. Absolutely worthwhile to achieve, but for the significant work required and the lack of ecosystem currently around CoreML for LLMs specifically, the benefits become less clear.

    - Apple's hot new ML library, MLX, only uses Metal for GPU[8], just like Llama.cpp. More nuanced differences arise on closer inspection related to MLX's focus on unified memory optimizations. So perhaps we can squeeze out some performance from unified memory in Llama.cpp, but CoreML will be the only way to unlock ANE, which is lower priority according to lead maintainer Georgi Gerganov as of late this past summer[9], likely for many of the reasons enumerated above.

    I've learned most of this while working on my own private LLM inference app, cnvrs[10] — would love to hear your feedback or thoughts!

    Britt

    ---

    [1] https://github.com/huggingface/exporters/pull/37

    [2] https://apple.github.io/coremltools/docs-guides/source/flexi...

    [3] https://huggingface.co/tiiuae/falcon-7b-instruct/tree/main/c...

    [4] https://huggingface.co/coreml-projects/Llama-2-7b-chat-corem...

    [5] https://github.com/huggingface/swift-transformers

    [6] https://github.com/huggingface/exporters

    [7] https://developer.apple.com/documentation/metal/gpu_devices_...

    [8] https://github.com/ml-explore/mlx/issues/18

    [9] https://github.com/ggerganov/llama.cpp/issues/1714#issuecomm...

    [10] https://testflight.apple.com/join/ERFxInZg

  • Ferret: An End-to-End MLLM by Apple
    5 projects | news.ycombinator.com | 23 Dec 2023
    Maybe MLX is meant to fill this gap?

    https://github.com/ml-explore/mlx

  • PowerInfer: Fast Large Language Model Serving with a Consumer-Grade GPU [pdf]
    3 projects | news.ycombinator.com | 19 Dec 2023
    This is basically a fork of llama.cpp. I created a PR to see the diff and added my comments on it: https://github.com/ggerganov/llama.cpp/pull/4543

    One thing that caught my interest is this line from their readme:

    > PowerInfer exploits such an insight to design a GPU-CPU hybrid inference engine: hot-activated neurons are preloaded onto the GPU for fast access, while cold-activated neurons are computed on the CPU, thus significantly reducing GPU memory demands and CPU-GPU data transfers.

    Apple's Metal/M3 is perfect for this because the CPU and GPU share memory. No need to do any data transfers. Check out mlx from Apple: https://github.com/ml-explore/mlx
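    To make that concrete in MLX terms (an illustrative sketch only; PowerInfer itself builds on llama.cpp, not MLX), per-operation device placement over unified memory lets hot and cold work be split with no tensor copies:

    ```python
    # Hybrid CPU/GPU placement over unified memory in MLX: route some
    # matmuls to the GPU and others to the CPU without copying weights.
    import mlx.core as mx

    x = mx.random.normal((8, 4096))
    w_hot = mx.random.normal((4096, 4096))   # "hot" weights, served by the GPU
    w_cold = mx.random.normal((4096, 4096))  # "cold" weights, served by the CPU

    hot = mx.matmul(x, w_hot, stream=mx.gpu)    # fast path on Metal
    cold = mx.matmul(x, w_cold, stream=mx.cpu)  # cold path on CPU cores
    mx.eval(hot + cold)                         # both read the same memory
    ```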

  • Whisper: Nvidia RTX 4090 vs. M1 Pro with MLX
    10 projects | news.ycombinator.com | 13 Dec 2023
    How does this compare to insanely-fast-whisper though? https://github.com/Vaibhavs10/insanely-fast-whisper

    I think that not using optimizations allows this to be a 1:1 comparison, but if the optimizations are not ported to MLX, then it would still be better to use a 4090.

    Having looked at MLX recently, I think it's definitely going to get traction on Macs - and iOS when Swift bindings are released https://github.com/ml-explore/mlx/issues/15 (although there might be some C++20 compilation issue blocking right now).

  • [D] M3 MAX 64GB VS RTX 3080
    1 project | /r/MachineLearning | 8 Dec 2023
    The software is already there; check the new ML framework from Apple: https://github.com/ml-explore/mlx

text-generation-webui

Posts with mentions or reviews of text-generation-webui. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-01.
  • Ask HN: What is the current (Apr. 2024) gold standard of running an LLM locally?
    11 projects | news.ycombinator.com | 1 Apr 2024
    Some of the tools offer a path to doing tool use (fetching URLs and doing things with them) or RAG (searching your documents). I think Oobabooga https://github.com/oobabooga/text-generation-webui offers the latter through plugins.

    Our tool, https://github.com/transformerlab/transformerlab-app also supports the latter (document search) using local llms.

  • Ask HN: How to get started with local language models?
    6 projects | news.ycombinator.com | 17 Mar 2024
    You can use webui https://github.com/oobabooga/text-generation-webui

    Once you get a version up and running, make a copy before you update it; updates have broken my working version several times and caused headaches.

    A decent explanation of parameters, outside of reading arXiv papers: https://github.com/oobabooga/text-generation-webui/wiki/03-%...

    An AI news website:

  • text-generation-webui VS LibreChat - a user suggested alternative
    2 projects | 29 Feb 2024
  • Show HN: I made an app to use local AI as daily driver
    31 projects | news.ycombinator.com | 27 Feb 2024
  • Ask HN: People who switched from GPT to their own models. How was it?
    3 projects | news.ycombinator.com | 26 Feb 2024
    The other answers are recommending paths which give you (1) less control and (2) projects with smaller ecosystems.

    If you want a truly general purpose front-end for LLMs, the only good solution right now is oobabooga: https://github.com/oobabooga/text-generation-webui

    All other alternatives have only small fractions of the features that oobabooga supports. All other alternatives only support a fraction of the LLM backends that oobabooga supports, etc.

  • AI Girlfriend Is a Data-Harvesting Horror Show
    1 project | news.ycombinator.com | 14 Feb 2024
    The example waifu in text-generation-webui is good enough for me.

    https://github.com/oobabooga/text-generation-webui/blob/main...

  • Nvidia's Chat with RTX is a promising AI chatbot that runs locally on your PC
    7 projects | news.ycombinator.com | 13 Feb 2024
    > Downloading text-generation-webui takes a minute, lets you use any model and get going.

    What you're missing here is you're already in this area deep enough to know what ooogoababagababa text-generation-webui is. Let's back out to the "average Windows desktop user" level. Assuming they even know how to find it:

    1) Go to https://github.com/oobabooga/text-generation-webui?tab=readm...

    2) See a bunch of instructions for opening a terminal window and running random batch/PowerShell scripts. PowerShell, etc. will likely prompt you with a scary warning. Then you start wondering who ooobabagagagaba is...

    3) Assuming you get this far (many users won't even get to step 1), you're greeted with a web interface[0] FILLED to the brim with technical jargon and extremely overwhelming options just to get a model loaded, which is another mind warp because you have to select between a bunch of random models with no clear meaning and nonsensical/joke-sounding names from someone called "TheBloke". Ok...

    Let's say you somehow braved this gauntlet and got this far; now you get to chat with it. Ok, what about my local documents? text-generation-webui itself has nothing for that. Repeat this process over the 10 random open-source projects from a bunch of names you've never heard of in an attempt to accomplish that.

    This is "I saw this thing from Nvidia explode all over media, twitter, youtube, etc. I downloaded it from Nvidia, double-clicked, pointed it at a folder with documents, and it works".

    That's the difference and it's very significant.

    [0] - https://raw.githubusercontent.com/oobabooga/screenshots/main...

  • Ask HN: What are your top 3 coolest software engineering tools?
    1 project | news.ycombinator.com | 6 Feb 2024
    Maybe a copout answer, but setting up a local LLM on my development machine has been invaluable. I use DeepSeek Coder 6.7B [0] and Oobabooga's UI [1]. It helps me solve simple problems and find bugs, while still leaving the larger architecture decisions to me.

    [0] https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instr...

    [1] https://github.com/oobabooga/text-generation-webui

  • Meta AI releases Code Llama 70B
    6 projects | news.ycombinator.com | 29 Jan 2024
    You can download it and run it with [this](https://github.com/oobabooga/text-generation-webui). There's an API mode that you could leverage from your VS Code extension.
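    For illustration, a minimal client sketch (assumes the server was launched with its `--api` flag, which exposes an OpenAI-compatible endpoint, by default on local port 5000; adjust to your configuration):

    ```python
    # Query text-generation-webui's OpenAI-compatible API from a script.
    import requests

    resp = requests.post(
        "http://127.0.0.1:5000/v1/chat/completions",  # default --api port
        json={
            "messages": [{"role": "user", "content": "Explain Code Llama briefly."}],
            "max_tokens": 128,
        },
        timeout=120,
    )
    print(resp.json()["choices"][0]["message"]["content"])
    ```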
  • Ollama Python and JavaScript Libraries
    17 projects | news.ycombinator.com | 24 Jan 2024
    Same question here. Ollama is fantastic as it makes it very easy to run models locally, but if you already have a lot of code that processes OpenAI API responses (with retry, streaming, async, caching, etc.), it would be nice to be able to simply switch the API client to Ollama without having to maintain a whole other branch of code that handles Ollama API responses. One way to do an easy switch is using the litellm library as a go-between, but it's not ideal (and I also recently found issues with their chat formatting for Mistral models).

    For an OpenAI-compatible API, my current favorite method is to spin up models using oobabooga TGW. Your OpenAI API code then works seamlessly by simply switching out the api_base to the ooba endpoint. Regarding chat formatting, even ooba's Mistral formatting has issues[1], so I am doing my own in Langroid using HuggingFace's tokenizer.apply_chat_template [2]

    [1] https://github.com/oobabooga/text-generation-webui/issues/53...

    [2] https://github.com/langroid/langroid/blob/main/langroid/lang...

    Related question - I assume ollama auto detects and applies the right chat formatting template for a model?
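    To make the api_base swap and the `apply_chat_template` approach above concrete, a hedged sketch (endpoint URL and model names are placeholders):

    ```python
    # 1) Point an OpenAI-style client at a local ooba endpoint via base_url.
    # 2) Build the prompt yourself with Hugging Face's chat template.
    from openai import OpenAI
    from transformers import AutoTokenizer

    client = OpenAI(base_url="http://127.0.0.1:5000/v1", api_key="not-needed")

    tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
    messages = [{"role": "user", "content": "One line on unified memory?"}]
    prompt = tok.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

    resp = client.completions.create(model="local", prompt=prompt, max_tokens=64)
    print(resp.choices[0].text)
    ```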

What are some alternatives?

When comparing mlx and text-generation-webui you can also consider the following projects:

cog-whisper-diarization - Cog implementation of transcribing + diarization pipeline with Whisper & Pyannote

KoboldAI - KoboldAI is generative AI software optimized for fictional use, but capable of much more!

Cgml - GPU-targeted vendor-agnostic AI library for Windows, and Mistral model implementation.

llama.cpp - LLM inference in C/C++

gpt4all - gpt4all: run open-source LLMs anywhere

enchanted - Enchanted is an iOS and macOS app for chatting with private self-hosted language models such as Llama 2, Mistral, or Vicuna using Ollama.

TavernAI - Atmospheric adventure chat for AI language models (KoboldAI, NovelAI, Pygmalion, OpenAI chatgpt, gpt-4)

swift-transformers - Swift Package to implement a transformers-like API in Swift

KoboldAI-Client

mlx-examples - Examples in the MLX framework

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.