larynx vs tacotron2

| | larynx | tacotron2 |
|---|---|---|
| Mentions | 18 | 29 |
| Stars | 788 | 4,937 |
| Growth | - | 0.7% |
| Activity | 0.0 | 0.0 |
| Latest Commit | 12 months ago | 5 months ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
larynx
-
Home Assistant’s Year of the Voice – Chapter 2
The most exciting thing about Home Assistant's "Year of the Voice", for me, is that it is apparently enabling/supporting @synesthesiam's continued phenomenal contributions to the FLOSS off-line voice synthesis space.
The quality, variety & diversity of voices that synesthesiam's "Larynx" TTS project (https://github.com/rhasspy/larynx/) made available completely transformed the Free/Open Source Text To Speech landscape.
In addition, "OpenTTS" (https://github.com/synesthesiam/opentts) provided a common API for interacting with multiple FLOSS TTS projects, which showed great promise for actually enabling "standing on the shoulders of" rather than re-inventing the same basic functionality every time.
The new "Piper" TTS project mentioned in the article is the apparent successor to Larynx and, along with the accompanying LibriTTS/LibriVox-based voice models, brings to FLOSS TTS something it's never had before:
* Too many voices! :)
Seriously, the current LibriTTS voice model version has 900+ voices (of varying quality levels), how do you even navigate that many?![0]
And that's not even considering the even higher quality single speaker models based on other audio recording sources.
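One pragmatic answer to "how do you navigate 900+ voices" is to filter a voice manifest by language and quality tier before auditioning anything. A minimal sketch — the manifest field names and sample voice ids below are illustrative assumptions, not Piper's actual schema (Piper does ship a JSON voice index, but check its real format):

```python
def pick_voices(manifest: dict, language: str, min_quality: str = "medium") -> list[str]:
    """Return voice ids for one language at or above a quality tier."""
    # Quality tiers in the spirit of Piper's naming (x_low .. high).
    tiers = {"x_low": 0, "low": 1, "medium": 2, "high": 3}
    floor = tiers[min_quality]
    return sorted(
        vid for vid, meta in manifest.items()
        if meta["language"] == language and tiers.get(meta["quality"], 0) >= floor
    )

# Hypothetical sample entries, modeled loosely on Piper-style voice ids:
catalogue = {
    "en_US-lessac-high": {"language": "en_US", "quality": "high"},
    "en_US-libritts-high": {"language": "en_US", "quality": "high"},
    "en_GB-alan-low": {"language": "en_GB", "quality": "low"},
    "de_DE-thorsten-medium": {"language": "de_DE", "quality": "medium"},
}
```

With a filter like this you can cut hundreds of voices down to a short list per language before listening to any samples.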
Offline TTS, while immensely valuable for individuals, doesn't seem to be an attractive domain for most commercial entities due to the lack of lock-in/telemetry opportunities, so I was concerned that we might end up missing out on further valuable contributions from synesthesiam's specialised skills & experience due to financial realities & the human need for food. :)
I'm glad we instead get to see what happens next.
[0] See my follow-up comment about this.
-
Text to speech
Larynx!
-
Ask HN: Are there any good open source Text-to-Speech tools?
I've had good results with https://github.com/rhasspy/larynx
-
Recommend a Text to Speech tool ?
Larynx is a really good text-to-speech engine
-
Klipper on android
I was able to install 3.7 following this guide. https://github.com/rhasspy/larynx/issues/9
- I built an audio-only Gemini client.
-
NaturalSpeech: End-to-End Text to Speech Synthesis with Human-Level Quality
If you've not already encountered them I'd definitely encourage you to check out these Free/Open Source projects too:
* Larynx: https://github.com/rhasspy/larynx/
* OpenTTS: https://github.com/synesthesiam/opentts
* Likely Mimic3 in the near future: https://mycroft.ai/blog/mimic-3-preview/
Larynx in particular has a focus on "faster than real-time", while OpenTTS is an attempt to package & provide a common REST API to all Free/Open Source Text To Speech systems, so the FLOSS ecosystem can build on previous work, even work supported by short-lived business interests, rather than start from scratch every time.
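That common API is what makes the "build on previous work" part practical: any client that can issue an HTTP GET can synthesize speech regardless of which engine sits behind OpenTTS. A rough sketch, assuming the `GET /api/tts?voice=…&text=…` endpoint shape from the OpenTTS README; the port (5500) and the example voice id are assumptions — check your own deployment and voice list:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

def tts_url(base: str, voice: str, text: str) -> str:
    """Build an OpenTTS-style synthesis URL (GET /api/tts)."""
    return f"{base}/api/tts?" + urlencode({"voice": voice, "text": text})

def speak(base: str, voice: str, text: str) -> bytes:
    """Fetch synthesized WAV audio from a running OpenTTS server."""
    with urlopen(tts_url(base, voice, text)) as resp:
        return resp.read()

if __name__ == "__main__":
    # Assumes an OpenTTS server on localhost:5500 and a valid voice id.
    wav = speak("http://localhost:5500", "larynx:harvard", "Hello from OpenTTS")
    with open("hello.wav", "wb") as f:
        f.write(wav)
```

Because the client only depends on the URL shape, swapping the backend engine (Larynx, eSpeak, etc.) is just a matter of changing the `voice` parameter.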
AIUI the developer of the first two projects now works for Mycroft AI & is involved in the development of Mimic3 which seems very promising given how much of an impact on quality his solo work has had in just the past couple of years or so.
-
Need a recommendation: Self hosted speech to text service
I haven't used it on its own, but Larynx has worked well for me with Rhasspy
- NATSpeech: High Quality Text-to-Speech Implementation with HuggingFace Demo
- Question: Does anybody know of a working Text to Speech for python on pi?
tacotron2
-
ESpeak-ng: speech synthesizer with more than one hundred languages and accents
The quality also depends on the type of model. I'm not really sure what ESpeak-ng actually uses. The classical TTS approaches often use some statistical model (e.g. HMM) + some vocoder. You can get to intelligible speech pretty easily, but the quality is bad (w.r.t. how natural it sounds).
There are better open source TTS models. E.g. check https://github.com/neonbjb/tortoise-tts or https://github.com/NVIDIA/tacotron2. Or here for more: https://www.reddit.com/r/MachineLearning/comments/12kjof5/d_...
- [D] What is the best open source text to speech model?
-
[D] The model used in the AI generated Jay-z vocals
Which might use https://github.com/NVIDIA/tacotron2 in their backend
-
Can anyone recommend any free voice cloning software/websites, even if it provides limited word options
One thing is uberduck.ai, but I think it's freemium (it's free but some features are premium). There's also Tacotron 2 and its PyTorch page. There's a lot of other software on the sub, but tacotron gave this and this and this.
-
Sauron be spitting bars
Maybe we can use AI to hear this rapped by a famous rapper?
-
Kerfuś
Sadly, GothicBot, the TTS I knew, doesn't exist anymore, but here is an alternative. It works in Polish from what I heard.
-
How far are we from being able to clone a singers voice?
From what I’ve seen, NVIDIA’s Tacotron2 can already be used to create some pretty convincing singing.
-
Is it possible to make compelling synthesized speech with fairly low-quality recordings?
You might want to try something like Tacotron 2 by Nvidia to experiment with your current data.
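For experimenting along those lines, NVIDIA publishes pretrained Tacotron 2 and WaveGlow checkpoints on PyTorch Hub. A rough sketch, assuming the `nvidia_tacotron2`/`nvidia_waveglow`/`nvidia_tts_utils` hub entry points as listed on the PyTorch Hub page (verify against your torch version; the hub models generally expect a CUDA device). One practical detail worth encoding: Tacotron 2's attention tends to degrade on long inputs, so split text into sentences first:

```python
import re

def split_sentences(text: str, max_chars: int = 140) -> list[str]:
    """Split text on sentence boundaries; Tacotron 2's attention tends to
    drift on very long inputs, so feed it one sentence at a time."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    out = []
    for part in parts:
        # Fall back to a hard split for pathological run-on sentences.
        while len(part) > max_chars:
            out.append(part[:max_chars])
            part = part[max_chars:]
        if part:
            out.append(part)
    return out

def synthesize(text: str, out_path: str = "out.wav") -> None:
    # Heavy imports kept local so the helper above stays dependency-free.
    import wave
    import numpy as np
    import torch

    hub = "NVIDIA/DeepLearningExamples:torchhub"
    tacotron2 = torch.hub.load(hub, "nvidia_tacotron2").eval()
    waveglow = torch.hub.load(hub, "nvidia_waveglow").eval()
    utils = torch.hub.load(hub, "nvidia_tts_utils")

    chunks = []
    for sentence in split_sentences(text):
        # prepare_input_sequence converts text to token tensors; note the
        # hub examples run this (and the models) on a CUDA device.
        sequences, lengths = utils.prepare_input_sequence([sentence])
        with torch.no_grad():
            mel, _, _ = tacotron2.infer(sequences, lengths)
            audio = waveglow.infer(mel)
        chunks.append(audio[0].cpu().numpy())

    # Write mono 16-bit WAV at Tacotron 2's 22,050 Hz sample rate.
    pcm = (np.concatenate(chunks) * 32767).astype(np.int16)
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(22050)
        w.writeframes(pcm.tobytes())
```

This is a sketch, not a turnkey script — fine-tuning on your own (low-quality) recordings is a separate training workflow covered in the NVIDIA repo.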
-
What voice-changing apps are available right now?
We have the TorToiSe repo, the SV2TTS repo, and from there you have the other models like Tacotron 2, FastSpeech 2, and such. There is a lot that goes into training a baseline for these models on the LJSpeech and LibriTTS datasets. Fine-tuning is left up to the user.
- The OG (OC)
What are some alternatives?
tortoise-tts - A multi-voice TTS system trained with an emphasis on quality
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
Voice-Cloning-App - A Python/Pytorch app for easily synthesising human voices
RHVoice - a free and open source speech synthesizer for Russian and other languages
Real-Time-Voice-Cloning - Clone a voice in 5 seconds to generate arbitrary speech in real-time
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
TTS - :robot: :speech_balloon: Deep learning for Text to Speech (Discussion forum: https://discourse.mozilla.org/c/tts)
rhasspy - Offline private voice assistant for many human languages
waveglow - A Flow-based Generative Network for Speech Synthesis