mlx VS faster-whisper

Compare mlx vs faster-whisper and see what their differences are.

             mlx          faster-whisper
Mentions     23           24
Stars        14,956       9,424
Growth       4.4%         5.6%
Activity     9.8          8.1
Last commit  5 days ago   7 days ago
Language     C++          Python
License      MIT License  MIT License
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

mlx

Posts with mentions or reviews of mlx. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-28.
  • Ollama v0.1.33 with Llama 3, Phi 3, and Qwen 110B
    11 projects | news.ycombinator.com | 28 Apr 2024
    Yes, we are also looking at integrating MLX [1], which is optimized for Apple Silicon and built by an incredible team of individuals, a few of whom were behind the original Torch [2] project. There's also TensorRT-LLM [3] by Nvidia, optimized for their recent hardware.

    All of this of course acknowledging that llama.cpp is an incredible project with competitive performance and support for almost any platform.

    [1] https://github.com/ml-explore/mlx

    [2] https://en.wikipedia.org/wiki/Torch_(machine_learning)

    [3] https://github.com/NVIDIA/TensorRT-LLM

  • Ask HN: What is the current (Apr. 2024) gold standard of running an LLM locally?
    11 projects | news.ycombinator.com | 1 Apr 2024
    If you're able to purchase a separate GPU, the most popular option is to get an NVIDIA RTX3090 or RTX4090.

    Apple's M2 and M3 Macs are becoming a viable option because of MLX: https://github.com/ml-explore/mlx . If you are getting an M-series Mac for LLMs, I'd recommend getting something with 24GB or more of RAM.
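
    For a sense of what that looks like in practice, here is a minimal sketch of MLX's NumPy-like Python API (assuming `pip install mlx` on an Apple Silicon Mac; the shapes are arbitrary):

    ```python
    import mlx.core as mx

    # Arrays live in unified memory; computation is lazy until evaluated.
    a = mx.random.normal((1024, 1024))
    b = mx.random.normal((1024, 1024))
    c = a @ b       # builds a lazy compute graph
    mx.eval(c)      # materializes the result (on the GPU by default)
    print(c.shape, mx.mean(c).item())
    ```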

  • MLX Community Projects
    1 project | news.ycombinator.com | 8 Feb 2024
  • FLaNK 15 Jan 2024
    21 projects | dev.to | 15 Jan 2024
  • Why the M2 is more advanced than it seemed
    5 projects | news.ycombinator.com | 15 Jan 2024
  • I made an app that runs Mistral 7B 0.2 LLM locally on iPhone Pros
    12 projects | news.ycombinator.com | 7 Jan 2024
    1) No Neural Engine API

    2) CoreML has challenges modeling LLMs efficiently right now

    3) Not Enough Benefit (For the Cost... Yet!)

    This is my best understanding based on my own work and research for a local LLM iOS app. Read on for more in-depth justifications of each point!

    ---

    1) No Neural Engine API

    - There is no developer API to use the Neural Engine programmatically, so CoreML is the only way to use it.

    2) CoreML has challenges modeling LLMs efficiently right now.

    - Its most-optimized use cases seem tailored for image models, as it works best with fixed input lengths[1][2], which are fairly limiting for general language modeling (are all prompts, sentences and paragraphs, the same number of tokens? do you want to pad all your inputs?).

    - CoreML features limited support for the leading approaches for compressing LLMs (quantization, whether weights-only or activation-aware). Falcon-7b-instruct (fp32) in CoreML is 27.7GB [3], Llama-2-chat (fp16) is 13.5GB [4] — neither will fit in memory on any currently shipping iPhone. They'd only barely fit on the newest, highest-end iPad Pros. (A rough size check appears after this comment.)

    - HuggingFace's swift-transformers[5] is a CoreML-focused library under active development to eventually help developers with many of these problems, in addition to an `exporters` cli tool[6] that wraps Apple's `coremltools` for converting PyTorch or other models to CoreML.

    3) Not Enough Benefit (For the Cost... Yet!)

    - ANE & GPU (Metal) have access to the same unified memory. They are both subject to the same restrictions on background execution (you simply can't use them in the background, or your app is killed[7]).

    - So the main benefit from unlocking the ANE would be multitasking: running an ML task in parallel with non-ML tasks that might also require the GPU: e.g. SwiftUI Metal Shaders, background audio processing (shoutout Overcast!), screen recording/sharing, etc. Absolutely worthwhile to achieve, but for the significant work required and the lack of ecosystem currently around CoreML for LLMs specifically, the benefits become less clear.

    - Apple's hot new ML library, MLX, only uses Metal for GPU[8], just like Llama.cpp. More nuanced differences arise on closer inspection related to MLX's focus on unified memory optimizations. So perhaps we can squeeze out some performance from unified memory in Llama.cpp, but CoreML will be the only way to unlock ANE, which is lower priority according to lead maintainer Georgi Gerganov as of late this past summer[9], likely for many of the reasons enumerated above.

    I've learned most of this while working on my own private LLM inference app, cnvrs[10] — would love to hear your feedback or thoughts!

    Britt

    ---

    [1] https://github.com/huggingface/exporters/pull/37

    [2] https://apple.github.io/coremltools/docs-guides/source/flexi...

    [3] https://huggingface.co/tiiuae/falcon-7b-instruct/tree/main/c...

    [4] https://huggingface.co/coreml-projects/Llama-2-7b-chat-corem...

    [5] https://github.com/huggingface/swift-transformers

    [6] https://github.com/huggingface/exporters

    [7] https://developer.apple.com/documentation/metal/gpu_devices_...

    [8] https://github.com/ml-explore/mlx/issues/18

    [9] https://github.com/ggerganov/llama.cpp/issues/1714#issuecomm...

    [10] https://testflight.apple.com/join/ERFxInZg
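
    The sizes quoted in point 2 follow from simple arithmetic (parameter count times bytes per parameter). A rough sanity check, with the 7B parameter count as an assumption:

    ```python
    def model_size_gb(params_billions: float, bits_per_param: int) -> float:
        """Approximate in-memory size: parameters x bytes per parameter."""
        return params_billions * 1e9 * bits_per_param / 8 / 1e9

    for precision, bits in [("fp32", 32), ("fp16", 16), ("4-bit", 4)]:
        print(f"7B model at {precision}: ~{model_size_gb(7.0, bits):.1f} GB")
    # ~28 GB (cf. Falcon's 27.7GB), ~14 GB (cf. Llama-2's 13.5GB), ~3.5 GB --
    # only the 4-bit variant fits in a phone-class memory budget.
    ```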

  • Ferret: An End-to-End MLLM by Apple
    5 projects | news.ycombinator.com | 23 Dec 2023
    Maybe MLX is meant to fill this gap?

    https://github.com/ml-explore/mlx

  • PowerInfer: Fast Large Language Model Serving with a Consumer-Grade GPU [pdf]
    3 projects | news.ycombinator.com | 19 Dec 2023
    This is basically a fork of llama.cpp. I created a PR to see the diff and added my comments on it: https://github.com/ggerganov/llama.cpp/pull/4543

    One thing that caught my interest is this line from their readme:

    > PowerInfer exploits such an insight to design a GPU-CPU hybrid inference engine: hot-activated neurons are preloaded onto the GPU for fast access, while cold-activated neurons are computed on the CPU, thus significantly reducing GPU memory demands and CPU-GPU data transfers.

    Apple's Metal/M3 is perfect for this because the CPU and GPU share memory. No need to do any data transfers. Check out MLX from Apple: https://github.com/ml-explore/mlx
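
    MLX makes that shared-memory point concrete: the same array can be consumed by CPU and GPU ops with no explicit transfer. A small sketch, assuming the `mlx` package:

    ```python
    import mlx.core as mx

    x = mx.random.normal((4096, 4096))   # one buffer in unified memory

    # Dispatch ops against the same array to different devices; there is
    # no cudaMemcpy-style host/device copy in between.
    gpu_sum = mx.sum(x, stream=mx.gpu)
    cpu_sum = mx.sum(x, stream=mx.cpu)
    mx.eval(gpu_sum, cpu_sum)
    print(gpu_sum.item(), cpu_sum.item())
    ```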

  • Whisper: Nvidia RTX 4090 vs. M1 Pro with MLX
    10 projects | news.ycombinator.com | 13 Dec 2023
    How does this compare to insanely-fast-whisper though? https://github.com/Vaibhavs10/insanely-fast-whisper

    I think that not using optimizations allows this to be a 1:1 comparison, but if the optimizations are not ported to MLX, then it would still be better to use a 4090.

    Having looked at MLX recently, I think it's definitely going to get traction on Macs - and iOS when Swift bindings are released https://github.com/ml-explore/mlx/issues/15 (although there might be some C++20 compilation issue blocking right now).

  • [D] M3 MAX 64GB VS RTX 3080
    1 project | /r/MachineLearning | 8 Dec 2023
    The software is already there; check out the new ML framework from Apple: https://github.com/ml-explore/mlx

faster-whisper

Posts with mentions or reviews of faster-whisper. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-29.
  • Creando Subtítulos Automáticos para Vídeos con Python, Faster-Whisper, FFmpeg, Streamlit, Pillow
    7 projects | dev.to | 29 Apr 2024
    Faster-whisper (https://github.com/SYSTRAN/faster-whisper)
  • Using Groq to Build a Real-Time Language Translation App
    3 projects | dev.to | 5 Apr 2024
    For our real-time STT needs, we'll employ a fantastic library called faster-whisper.
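
    Its core API is small. A minimal transcription sketch, assuming `pip install faster-whisper` and a local `audio.wav` (the model size and beam width are just reasonable defaults):

    ```python
    from faster_whisper import WhisperModel

    # int8 on CPU keeps latency low without a GPU; CUDA options exist too.
    model = WhisperModel("small", device="cpu", compute_type="int8")

    segments, info = model.transcribe("audio.wav", beam_size=5)
    print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
    for segment in segments:
        print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
    ```
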
  • Apple Explores Home Robotics as Potential 'Next Big Thing'
    3 projects | news.ycombinator.com | 4 Apr 2024
    Thermostats: https://www.sinopetech.com/en/products/thermostat/

    I haven't tried running a local speech-to-text engine backed by an LLM to control Home Assistant. Maybe someone is working on this already?

    STT: https://github.com/SYSTRAN/faster-whisper

    LLM: https://github.com/Mozilla-Ocho/llamafile/releases

    LLM: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-D...

    It would take some tweaking to get the voice commands working correctly.
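
    A rough sketch of how those pieces might be wired together: faster-whisper for the speech-to-text step, with the transcript handed to a llamafile server over its OpenAI-compatible endpoint. The port, model name, and system prompt here are assumptions, not a tested configuration:

    ```python
    import requests
    from faster_whisper import WhisperModel

    stt = WhisperModel("base", device="cpu", compute_type="int8")

    def voice_command(wav_path: str) -> str:
        # 1) Transcribe the spoken command.
        segments, _ = stt.transcribe(wav_path)
        command = " ".join(s.text.strip() for s in segments)
        # 2) Send the text to a local llamafile server (assumed to be
        #    running on its default port with an OpenAI-compatible API).
        resp = requests.post(
            "http://localhost:8080/v1/chat/completions",
            json={
                "model": "local",
                "messages": [
                    {"role": "system",
                     "content": "Turn the user's request into a Home Assistant action."},
                    {"role": "user", "content": command},
                ],
            },
            timeout=60,
        )
        return resp.json()["choices"][0]["message"]["content"]
    ```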

  • Whisper: Nvidia RTX 4090 vs. M1 Pro with MLX
    10 projects | news.ycombinator.com | 13 Dec 2023
    Could someone elaborate on how this is accomplished, and whether there is any quality disparity compared to the original Whisper?

    A repo like https://github.com/SYSTRAN/faster-whisper makes immediate sense about why it's faster than the original, but this one, not so much, especially considering it's even much faster.

  • Now I Can Just Print That Video
    5 projects | news.ycombinator.com | 4 Dec 2023
    Cool! I had the same project idea recently. You may be interested in this for the speech-to-text step: https://github.com/SYSTRAN/faster-whisper
  • Distil-Whisper: distilled version of Whisper that is 6 times faster, 49% smaller
    14 projects | news.ycombinator.com | 31 Oct 2023
    That's the implication. If the distil models are in the same format as the original OpenAI models, then they can be converted for faster-whisper use as per the conversion instructions at https://github.com/guillaumekln/faster-whisper/

    So then we'll see whether we get the 6x model speedup on top of the stated 4x faster-whisper code speedup.
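
    For reference, that conversion goes through CTranslate2's Transformers converter. A sketch with `openai/whisper-small` standing in for whichever checkpoint you want (the distil models would only work if they keep the original architecture, per the caveat above):

    ```python
    import ctranslate2

    # Equivalent to the documented CLI:
    #   ct2-transformers-converter --model openai/whisper-small \
    #       --output_dir whisper-small-ct2 --quantization float16
    converter = ctranslate2.converters.TransformersConverter("openai/whisper-small")
    converter.convert("whisper-small-ct2", quantization="float16")
    ```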

  • AMD May Get Across the CUDA Moat
    8 projects | news.ycombinator.com | 6 Oct 2023
    > While I agree that it's much more effort to get things working on AMD cards than it is with Nvidia, I was a bit surprised to see this comment mention Whisper being an example of "5-10x as performant".

    It easily is. See the benchmarks[0] from faster-whisper, which uses CTranslate2. That's 5x faster than the OpenAI reference code on a Tesla V100. Needless to say, something like a 4080 easily multiplies that.

    > https://www.tomshardware.com/news/whisper-audio-transcriptio... is a good example of Nvidia having no excuses being double the price when it comes to Whisper inference, with the 7900XTX being directly comparable to the 4080, albeit with higher power draw. To be fair it's not using ROCm but Direct3D 11, but for performance/price argument's sake that detail is not relevant.

    With all due respect to the author of the article, this is "my first entry into ML" territory. They talk about a 5-10 second delay; my project can do sub-1-second times[1] even with ancient GPUs thanks to CTranslate2. I don't have an RTX 4080, but if you look at the performance stats for the closest thing (RTX 4090), the numbers are positively bonkers - completely untouchable for anything ROCm-based. Same goes for the other projects I linked: lmdeploy does over 100 tokens/s in a single session with Llama 2 13B on my RTX 4090 and almost 600 tokens/s across eight simultaneous sessions.

    > EDIT: Also using CTranslate2 as an example is not great as it's actually a good showcase why ROCm is so far behind CUDA: It's all about adapting the tech and getting the popular libraries to support it. Things usually get implemented in CUDA first and then would need additional effort to add ROCm support that projects with low amount of (possibly hobbyist) maintainers might not have available. There's even an issue in CTranslate2 where they clearly state no-one is working to get ROCm supported in the library. ( https://github.com/OpenNMT/CTranslate2/issues/1072#issuecomm... )

    I don't understand what you're saying here. It, along with the other projects I linked, is a fantastic example of just how far behind the ROCm ecosystem is. ROCm isn't even on the radar for most of them, as your linked issue highlights.

    Things always get implemented in CUDA first (ten years in this space and I've never seen ROCm first) and ROCm users either wait months (minimum) for sub-par performance or never get it at all.

    [0] - https://github.com/guillaumekln/faster-whisper#benchmark

    [1] - https://heywillow.io/components/willow-inference-server/#ben...

  • Open Source Libraries
    25 projects | /r/AudioAI | 2 Oct 2023
    guillaumekln/faster-whisper
  • Whisper Turbo: transcribe 20x faster than realtime using Rust and WebGPU
    3 projects | news.ycombinator.com | 12 Sep 2023
    Neat to see a new implementation, although I'll note that for those looking for a drop-in replacement for the whisper library, I believe that both faster-whisper https://github.com/guillaumekln/faster-whisper and https://github.com/m-bain/whisperX are easier (PyTorch-based, no web browser required) and a lot faster (WhisperX is up to 70x realtime).
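
    For comparison, whisperX's batched pipeline plus its alignment pass looks roughly like this (a sketch based on its README; the model name, device, and batch size are illustrative):

    ```python
    import whisperx

    device = "cuda"  # assumes an NVIDIA GPU
    model = whisperx.load_model("large-v2", device, compute_type="float16")
    audio = whisperx.load_audio("audio.wav")
    result = model.transcribe(audio, batch_size=16)  # batching drives the big speedup

    # Optional second pass for word-level timestamps.
    align_model, metadata = whisperx.load_align_model(
        language_code=result["language"], device=device)
    result = whisperx.align(result["segments"], align_model, metadata, audio, device)
    print(result["segments"][0])
    ```
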
  • Whisper.api: An open source, self-hosted speech-to-text with fast transcription
    5 projects | news.ycombinator.com | 22 Aug 2023
    One caveat here is that whisper.cpp does not offer any CUDA support at all; acceleration is only available for Apple Silicon.

    If you have Nvidia hardware the ctranslate2 based faster-whisper is very very fast: https://github.com/guillaumekln/faster-whisper
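
    On NVIDIA hardware the switch is just the constructor arguments. A sketch (large-v2 and float16 are common choices, not requirements):

    ```python
    from faster_whisper import WhisperModel

    # fp16 on CUDA is the usual sweet spot; int8_float16 trades a little
    # accuracy for less VRAM.
    model = WhisperModel("large-v2", device="cuda", compute_type="float16")
    segments, _ = model.transcribe("audio.wav")
    print("".join(s.text for s in segments))
    ```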

What are some alternatives?

When comparing mlx and faster-whisper you can also consider the following projects:

cog-whisper-diarization - Cog implementation of transcribing + diarization pipeline with Whisper & Pyannote

whisper.cpp - Port of OpenAI's Whisper model in C/C++

Cgml - GPU-targeted vendor-agnostic AI library for Windows, and Mistral model implementation.

whisperX - WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)

llama.cpp - LLM inference in C/C++

stable-ts - Transcription, forced alignment, and audio indexing with OpenAI's Whisper

enchanted - Enchanted is an iOS and macOS app for chatting with private self-hosted language models such as Llama 2, Mistral, or Vicuna using Ollama.

whisper-diarization - Automatic Speech Recognition with Speaker Diarization based on OpenAI Whisper

swift-transformers - Swift Package to implement a transformers-like API in Swift

ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]

mlx-examples - Examples in the MLX framework

whisper-realtime - Whisper runs in realtime on a laptop GPU (8GB)