magi_llm_gui
A Qt GUI for large language models (by shinomakoi)
SillyTavern-Extras
Extensions API for SillyTavern. (by SillyTavern)
| | magi_llm_gui | SillyTavern-Extras |
|---|---|---|
| Mentions | 4 | 14 |
| Stars | 40 | 511 |
| Growth | - | 5.9% |
| Activity | 8.7 | 9.4 |
| Latest commit | 7 months ago | 27 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU Affero General Public License v3.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
magi_llm_gui
Posts with mentions or reviews of magi_llm_gui.
We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-04.
- What is the best text web ui currently?
  Other than Ooba, this is my fav (and works with a TON of model architectures) -> https://github.com/shinomakoi/magi_llm_gui
- How is ExLlama so good? Can it be used with a more feature rich UI?
- What's an alternative to oobabooga?
  Magi LLM GUI - https://github.com/shinomakoi/magi_llm_gui
- Maji LLM: A Qt Desktop GUI for local language models. Works with Oobabooga's WebUI API and llama.cpp
SillyTavern-Extras
Posts with mentions or reviews of SillyTavern-Extras.
We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-07.
- Is it possible to run a local voice chat agent? If yes, what GPU do I need with a 500€ budget?
  As for SillyTavern, you need the main SillyTavern frontend and SillyTavern-Extras (for TTS, STT, etc.). They're pretty easy to install. SillyTavern connects to oobabooga and SillyTavern-Extras via API.
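As a rough sketch of what "connects via API" means here: the frontend sends HTTP requests to the Extras server. The port, endpoint path, and payload shape below are assumptions for illustration, not taken from the repository.

```python
import json
from urllib import request

# Assumed default address for a locally running SillyTavern-Extras server.
EXTRAS_URL = "http://localhost:5100"

def build_classify_request(text: str) -> request.Request:
    """Build a POST request for a hypothetical /api/classify endpoint."""
    payload = json.dumps({"text": text}).encode("utf-8")
    return request.Request(
        EXTRAS_URL + "/api/classify",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # Requires a running Extras server; prints the raw JSON response.
    req = build_classify_request("I'm so happy to see you!")
    with request.urlopen(req) as resp:
        print(resp.read().decode("utf-8"))
```

The frontend itself does the same thing from JavaScript; the point is simply that the two processes talk over plain HTTP on localhost, so they can be started and stopped independently.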
- Image upload in ST
- Poe Problem
  Sure, your best bet is to follow the instructions on the SillyTavern-Extras GitHub. It gives a good step-by-step guide to setting up and running all the add-ons. Here is a link in case you need it: https://github.com/SillyTavern/SillyTavern-extras
- What is the best text web ui currently?
  oobazz + SillyTavern
- Oobabooga and llama.cpp: in longer conversations, answers take forever...
  If you want the best roleplaying experience, I can only recommend SillyTavern with SillyTavern-Extras. The extras include summarization and ChromaDB, both of which help produce longer and more coherent chats.
- I finally got SillyTavern set up... now what? How do I set the scene? How do I build the world?
  If you haven't already, I suggest you install SillyTavern Extras, which adds objectives with tasks, character expressions (personally I generate expressions with Stable Diffusion), and text-to-speech for different characters. You can also set up a group scenario if you create a group.
- Looking for the long-term memory extension.
  Instead, I use the summarize extension for SillyTavern, which is serviceable: https://github.com/SillyTavern/SillyTavern-extras
- FINISHED IT!! my final tier list..
  It doesn't have to be the end. Do what I'm doing and get SillyTavern, use their extension (for expressive sprites), open an OAI account, create some Umineko bots, import your own sprites, and start chatting with them. There have been a lot of advancements in AI visual novel tech. Here's me and the UI's creator in a 5-way group chat with the Quintessential Quintuplets.
- Local LLMs: After Novelty Wanes
- New to SillyTavern, I have a few questions, sorry if they are silly. Pun intended.
  The SillyTavern-Extras: https://github.com/SillyTavern/SillyTavern-extras The Classify extension handles expressions, provided you have the pictures. The clever model mentioned in the readme can handle 28 different expressions. It evaluates *while generating*, so it can change the pic of the current speaker as speakers change. I've not spent much time with group chat, so I never figured out how to use it as intended. BTW, of the extras, ChromaDB is cool; look into it.
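The expression swap described above ultimately comes down to mapping a classifier's emotion label onto one of the character's sprite images. A minimal sketch of that lookup; the label set, file naming, and "neutral" fallback are assumptions for illustration:

```python
from pathlib import Path

def sprite_for_emotion(char_dir: Path, label: str) -> Path:
    """Pick the sprite matching a classifier label, falling back to neutral.

    Assumes one image per emotion label, named <label>.png, stored in a
    per-character folder that always contains a neutral.png fallback.
    """
    candidate = char_dir / f"{label}.png"
    return candidate if candidate.exists() else char_dir / "neutral.png"
```

With a layout like `sprites/alice/joy.png`, a "joy" classification resolves to that file, while any label without a matching image falls back to `sprites/alice/neutral.png`, which is why the extension only works "provided you have the pictures."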
What are some alternatives?
When comparing magi_llm_gui and SillyTavern-Extras you can also consider the following projects:
lollms-webui - Lord of Large Language Models Web User Interface
long_term_memory - A gradio web UI for running Large Language Models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
SillyTavern - LLM Frontend for Power Users.
KoboldAI
stable-diffusion-webui - Stable Diffusion web UI
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
simple-proxy-for-tavern
docker - Docker - the open-source application container engine