CTranslate2 vs whisper

 | CTranslate2 | whisper
---|---|---
Mentions | 14 | 345
Stars | 2,916 | 62,242
Growth | 4.0% | 3.1%
Activity | 8.7 | 6.0
Latest commit | 4 days ago | 5 days ago
Language | C++ | Python
License | MIT License | MIT License
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
CTranslate2
- Creating Automatic Subtitles for Videos with Python, Faster-Whisper, FFmpeg, Streamlit, Pillow
-
Distil-Whisper: distilled version of Whisper that is 6 times faster, 49% smaller
Just a point of clarification - faster-whisper references it, but ctranslate2[0] is what's really doing the magic here.
CTranslate2 is a sleeper powerhouse project that enables a lot. They should be front and center and get the credit they deserve.
[0] - https://github.com/OpenNMT/CTranslate2
-
A Raspberry Pi 5 is better than two Pi 4s
We'd love to move beyond Nvidia.
The issue (among others) is we achieve the speech recognition performance we do largely thanks to ctranslate2[0]. They've gone on the record saying that they essentially have no interest in ROCm[1].
Of course, with open source anything is possible, but we see this as one of several fundamental issues in supporting AMD GPGPU hardware.
[0] - https://github.com/OpenNMT/CTranslate2
[1] - https://github.com/OpenNMT/CTranslate2/issues/1072
-
AMD May Get Across the CUDA Moat
> While I agree that it's much more effort to get things working on AMD cards than it is with Nvidia, I was a bit surprised to see this comment mention Whisper being an example of "5-10x as performant".
It easily is. See the benchmarks[0] from faster-whisper, which uses CTranslate2: that's 5x faster than the OpenAI reference code on a Tesla V100. Needless to say, something like a 4080 easily multiplies that.
> https://www.tomshardware.com/news/whisper-audio-transcriptio... is a good example of Nvidia having no excuse for being double the price when it comes to Whisper inference, with the 7900 XTX being directly comparable to the 4080, albeit with higher power draw. To be fair it's not using ROCm but Direct3D 11, but for the performance/price argument's sake that detail is not relevant.
With all due respect to the author of the article, this is "my first entry into ML" territory. They talk about a 5-10 second delay; my project can do sub-1-second times[1] even with ancient GPUs thanks to CTranslate2. I don't have an RTX 4080, but if you look at the performance stats for the closest thing (RTX 4090), the numbers are positively bonkers - completely untouchable for anything ROCm-based. The same goes for the other projects I linked: lmdeploy does over 100 tokens/s in a single session with Llama 2 13B on my RTX 4090 and almost 600 tokens/s across eight simultaneous sessions.
> EDIT: Also, using CTranslate2 as an example is not great, as it's actually a good showcase of why ROCm is so far behind CUDA: it's all about adapting the tech and getting the popular libraries to support it. Things usually get implemented in CUDA first and then need additional effort to add ROCm support, which projects with a small number of (possibly hobbyist) maintainers might not have available. There's even an issue in CTranslate2 where they clearly state no one is working to get ROCm supported in the library. ( https://github.com/OpenNMT/CTranslate2/issues/1072#issuecomm... )
I don't understand what you're saying here. It, along with the other projects I linked, is a fantastic example of just how far behind the ROCm ecosystem is. ROCm isn't even on the radar for most of them, as your linked issue highlights.
Things always get implemented in CUDA first (ten years in this space and I've never seen ROCm first) and ROCm users either wait months (minimum) for sub-par performance or never get it at all.
[0] - https://github.com/guillaumekln/faster-whisper#benchmark
[1] - https://heywillow.io/components/willow-inference-server/#ben...
-
StreamingLLM: Efficient streaming technique enables infinite sequence lengths
Now, what this allows you to do is reuse the attention computed from the previous turns (since the prefix is the same).
In practice, people often have a system prompt before the conversation history, which (as far as I can tell) makes this technique not applicable (the input prefix will change as soon as the conversation history is long enough that we need to start dropping the oldest turns).
In that case, what you could do is cache at least the system prompt. This is also possible with https://github.com/OpenNMT/CTranslate2/blob/2203ad5c8baf878a...
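For illustration, here's a minimal sketch of that system-prompt caching, assuming the static_prompt option on ctranslate2.Generator (the mechanism the link above points at); the model path, tokenizer, and prompts are placeholders:

    import ctranslate2
    import sentencepiece as spm

    # Placeholder paths: a converted decoder-only model plus its tokenizer.
    generator = ctranslate2.Generator("llama-2-7b-ct2", device="cuda")
    sp = spm.SentencePieceProcessor(model_file="tokenizer.model")

    # The system prompt is identical across turns, so its attention state
    # can be computed once and reused on every subsequent call.
    system_prompt = sp.encode("You are a helpful assistant.", out_type=str)
    user_turn = sp.encode("User: What is CTranslate2?\nAssistant:", out_type=str)

    results = generator.generate_batch(
        [user_turn],
        static_prompt=system_prompt,  # cached after the first call
        max_length=256,
    )
    print(sp.decode(results[0].sequences_ids[0]))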
-
Faster Whisper Transcription with CTranslate2
The original Whisper implementation from OpenAI uses the PyTorch deep learning framework. faster-whisper, on the other hand, is implemented using CTranslate2 [1], a custom inference engine for Transformer models. So it runs the same model, just on a different backend that is specifically optimized for inference workloads.
[1] https://github.com/OpenNMT/CTranslate2
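For reference, a minimal sketch of the faster-whisper API on top of that backend (model size, device, and compute type are illustrative):

    from faster_whisper import WhisperModel

    # Same Whisper weights, converted to the CTranslate2 format.
    model = WhisperModel("large-v2", device="cuda", compute_type="float16")

    segments, info = model.transcribe("audio.mp3", beam_size=5)
    print(f"Detected language: {info.language} (p={info.language_probability:.2f})")

    # segments is a generator; transcription happens as you iterate.
    for segment in segments:
        print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")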
-
Explore large language models on any computer with 512MB of RAM
FLAN-T5 models generally perform well for their size, but they are encoder-decoder models and aren't as widely supported for efficient inference. I wanted students to be able to run everything locally on CPU, so I was ideally hoping for something that supported quantization for CPU inference. I explored llama.cpp and GGML, but ultimately landed on ctranslate2 for inference.
- CTranslate2: An efficient inference engine for Transformer models
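As a sketch of the CPU quantization workflow described above (model name and paths are examples): the weights are converted offline once, then loaded for 8-bit inference:

    import ctranslate2

    # One-time conversion, run in a shell:
    #   ct2-transformers-converter --model google/flan-t5-base \
    #       --output_dir flan-t5-base-ct2 --quantization int8
    translator = ctranslate2.Translator(
        "flan-t5-base-ct2",
        device="cpu",
        compute_type="int8",  # quantized weights keep CPU memory use low
    )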
-
[D] Faster Flan-T5 inference
You can also check out the CTranslate2 library which supports efficient inference of T5 models, including 8-bit quantization on CPU and GPU. There is a usage example in the documentation.
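That usage looks roughly like the following sketch, modeled on the documented T5 example (model name is illustrative):

    import ctranslate2
    import transformers

    # Assumes the checkpoint was converted with ct2-transformers-converter.
    translator = ctranslate2.Translator("flan-t5-base-ct2", device="cpu")
    tokenizer = transformers.AutoTokenizer.from_pretrained("google/flan-t5-base")

    prompt = "Answer the question: What is the capital of France?"
    input_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

    results = translator.translate_batch([input_tokens])
    output_ids = tokenizer.convert_tokens_to_ids(results[0].hypotheses[0])
    print(tokenizer.decode(output_ids, skip_special_tokens=True))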
- Running large language models like ChatGPT on a single GPU
whisper
- Creating Automatic Subtitles for Videos with Python, Faster-Whisper, FFmpeg, Streamlit, Pillow
-
Why I Care Deeply About Web Accessibility And You Should Too
Let’s not talk about local models, as the hardware requirements are way beyond most of these people’s reach. I have a MacBook Air with an M2 chip and 8GB of RAM and can hardly run Whisper locally, so I use this Hugging Face Space.
-
How I built NotesGPT – a full-stack AI voice note app
Last week, I launched notesGPT, a free and open source voice note app that has had 35,000 visitors, 7,000 users, and over 1,000 GitHub stars so far. It allows you to record a voice note, transcribes it using Whisper, and uses Mixtral via Together to extract action items and display them in an action items view. It’s also fully open source and comes equipped with authentication, storage, vector search, and action items, and is fully responsive on mobile for ease of use.
-
Ask HN: Can AI break a speech audio into individual words?
I found a pretty good discussion on the topic here:
https://github.com/openai/whisper/discussions/1243
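The short answer from that thread is yes. A minimal sketch using openai-whisper's word_timestamps option (file name and model size are placeholders):

    import whisper

    model = whisper.load_model("base")
    result = model.transcribe("speech.mp3", word_timestamps=True)

    # Each segment carries a list of words with start/end times in seconds.
    for segment in result["segments"]:
        for word in segment.get("words", []):
            print(f"{word['start']:6.2f} - {word['end']:6.2f}  {word['word']}")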
-
WhisperSpeech – An Open Source text-to-speech system built by inverting Whisper
There is a plot of per-language performance in their repo: https://github.com/openai/whisper
I am not aware of a multi-lingual leaderboard for speech recognition models.
- Ask HN: AI that allows you to make phone calls in a language you don't speak?
-
Ask HN: Favorite Podcast Episodes of 2023?
I don't know how OP does it, but here's how I'd do it (rough sketch after the list):
* Generate a transcript by running Whisper against the podcast audio file: https://github.com/openai/whisper
* Upload transcript to ChatGPT and ask it to summarize.
* Automate all the above.
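A rough sketch of those three steps in Python, assuming the openai client for the summarization call (the model names are placeholders, and a long transcript would need chunking to fit the context window):

    import whisper
    from openai import OpenAI

    # Step 1: transcribe the episode locally with Whisper.
    model = whisper.load_model("medium")
    transcript = model.transcribe("episode.mp3")["text"]

    # Step 2: ask a chat model to summarize the transcript.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Summarize this podcast transcript:\n\n" + transcript,
        }],
    )
    print(response.choices[0].message.content)

    # Step 3: automate by looping the above over a podcast feed.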
-
Need advice
Ahh, that makes sense. I've been building something like that, but only from other languages into English, using Whisper.
-
Subtitle is now open-source
Whisper already generates subtitles[0], supporting VTT and SRT, so this is just a thin wrapper around that.
[0]: https://github.com/openai/whisper/blob/e58f28804528831904c3b...
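For reference, a minimal sketch of that built-in support; the CLI does it in one line, and the writer helper in whisper/utils.py (the file linked above) does the same from Python, though its exact signature has varied between releases:

    # CLI equivalent: whisper audio.mp3 --model small --output_format srt
    import whisper
    from whisper.utils import get_writer

    model = whisper.load_model("small")
    result = model.transcribe("audio.mp3")

    # get_writer returns a callable that renders result["segments"]
    # into the chosen format ("srt", "vtt", "txt", ...).
    writer = get_writer("srt", ".")
    writer(result, "audio.mp3")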
-
StyleTTS2 – open-source Eleven Labs quality Text To Speech
> although it does require you to wear headphones so the bot doesn't hear itself and get interrupted.
Maybe you can rely on some sort of speaker identification to sort this out?
https://github.com/openai/whisper/discussions/264
What are some alternatives?
vllm - A high-throughput and memory-efficient inference and serving engine for LLMs
vosk-api - Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
sentencepiece - Unsupervised text tokenizer for Neural Network-based text generation.
silero-vad - Silero VAD: pre-trained enterprise-grade Voice Activity Detector
FlexGen - Running large language models like OPT-175B/GPT-3 on a single GPU. Focusing on high-throughput generation. [Moved to: https://github.com/FMInference/FlexGen]
buzz - Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.
OpenNMT-Tutorial - Neural Machine Translation (NMT) tutorial. Data preprocessing, model training, evaluation, and deployment.
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
oneDNN - oneAPI Deep Neural Network Library (oneDNN)
whisper.cpp - Port of OpenAI's Whisper model in C/C++
faster-whisper - Faster Whisper transcription with CTranslate2
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.