| | txtinstruct | AlpacaDataCleaned |
|---|---|---|
| Mentions | 13 | 14 |
| Stars | 221 | 1,394 |
| Growth | 2.7% | - |
| Activity | 5.0 | 7.6 |
| Latest Commit | 10 months ago | about 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
- Stars: the number of stars that a project has on GitHub.
- Growth: month-over-month growth in stars.
- Activity: a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
txtinstruct
- Questions about memory, tree-of-thought, planning
I tried ChromaDB but had terrible performance and could not pin down the cause (likely a problem on my end). Weaviate was easy to set up and had excellent performance; it is probably what I will use in the future. Next on my list is txtinstruct: fine-tuning a model on data that does not change, and using a vector DB for everything else, seems promising.
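The split the commenter describes (fine-tune on stable knowledge, vector DB for everything that changes) is easy to prototype. Below is a minimal retrieval sketch using txtai, the library txtinstruct is built on; the embedding model name is only an illustrative default, not a requirement.

```python
# Minimal vector-search sketch with txtai (pip install txtai).
# The embedding model below is an illustrative choice, not a requirement.
from txtai.embeddings import Embeddings

embeddings = Embeddings({"path": "sentence-transformers/all-MiniLM-L6-v2"})

# Index (id, text, tags) tuples; this is the data that changes often,
# so it lives in the vector index rather than in a fine-tuned model
docs = [
    "Weaviate is an open source vector database.",
    "txtinstruct trains instruction-following models.",
]
embeddings.index([(uid, text, None) for uid, text in enumerate(docs)])

# search returns (id, score) tuples for the best matches
print(embeddings.search("how do I train an instruction model?", 1))
```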
- [R] Let Language Models be Language Models
The closest thing I've seen to this is txtinstruct.
- Create a ChatGPT-like program using an open source model and custom data.
txtinstruct is a framework for training instruction-tuned models.
- Stability AI Launches the First of Its StableLM Suite of Language Models
Great to see the continued release of open models. The only disappointing thing is that models keep building on CC-BY-NC licensed datasets, which severely limits their use.
Hopefully, people consider txtinstruct (https://github.com/neuml/txtinstruct) and other approaches to generate instruction-tuning datasets without the baggage.
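For anyone curious what generating instruction data "without the baggage" looks like, here is a rough sketch of the general pattern: prompt an open model to write a question about a permissively licensed passage, then answer it from that same passage. This is not txtinstruct's actual API, just the idea it automates; the model choice and prompt wording are assumptions.

```python
# Sketch of generating a license-clean instruction example from your own text.
# This mimics the idea behind txtinstruct; it is NOT its actual API.
from txtai.pipeline import Sequences

# Any open instruction-following seq2seq model works; flan-t5-small keeps it light
sequences = Sequences("google/flan-t5-small")

passage = (
    "txtai is an all-in-one embeddings database for semantic search "
    "and language model workflows."
)

# 1) Ask the model to write a question answerable from the passage
question = sequences(f"Generate a question using the context below.\n\nContext: {passage}")

# 2) Answer the question using only the passage as context
answer = sequences(
    f"Answer the following question using only the context below.\n\n"
    f"Question: {question}\n\nContext: {passage}"
)

# 3) Store as an Alpaca-style record; no proprietary model output involved
print({"instruction": question, "input": passage, "output": answer})
```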
- Build open instruction-tuned datasets and models (r/MachineLearning)
- Build open instruction-tuned datasets and models
- [P] Build open instruction-tuned datasets and models
- Create open instruction-tuned datasets and LLM models
- Show HN: Build open instruction-tuned datasets and models
AlpacaDataCleaned
- While training LoRA I get 'Failed to read file... JSON parse error'
I tried using the default alpaca_data_cleaned.json training dataset as mentioned here: https://github.com/gururise/AlpacaDataCleaned/blob/main/alpaca_data_cleaned.json. Does anyone know why I could be getting this error? The file must be in the correct format, since it is the default file shown in their example.
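One quick way to narrow this down is to check whether the file parses at all with plain Python; if it does, the problem is likely in how the training script reads it (encoding, a truncated download, or a path issue). A minimal check, assuming the file sits in the working directory:

```python
# Quick check that alpaca_data_cleaned.json is valid JSON, and inspect its shape.
import json

try:
    with open("alpaca_data_cleaned.json", encoding="utf-8") as f:
        data = json.load(f)
    print(f"Parsed OK: {len(data)} records, first record keys: {sorted(data[0])}")
except json.JSONDecodeError as err:
    # Line/column point at the broken spot, e.g. where a download was cut off
    print(f"Parse failed at line {err.lineno}, column {err.colno}: {err.msg}")
```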
- Why run LLMs locally?
This cleaned Alpaca dataset gives a good idea of how data is laid out in the standard Alpaca JSON format (sketched below). Personally, I'd make your own datasets by using GPT-4 to format the data into a dataset. You can do it by hand or use a LLaMA model, but I've found ChatGPT to be the most efficient way to get the highest-quality output. I'm going for quality over quantity.
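For reference, a standard Alpaca record has three fields (instruction, input, output), and trainers typically render each record into the Stanford Alpaca prompt template. A minimal sketch with a made-up example record:

```python
# A standard Alpaca-format record and the usual prompt template it is rendered into.
record = {
    "instruction": "Summarize the following paragraph in one sentence.",
    "input": "LLaMA is a family of large language models released by Meta AI.",
    "output": "LLaMA is Meta AI's family of large language models.",
}

TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)

# During fine-tuning, the model learns to continue the prompt with the output
prompt = TEMPLATE.format(**record)
print(prompt + record["output"])
```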
- New LLaMA LoRA trained on WizardLM dataset
I created a dataset merge based on the following very high-quality datasets:
- [P] Finetuning a commercially viable open source LLM (Flan-UL2) using Alpaca, Dolly15K and LoRA
- Stability AI Launches the First of Its StableLM Suite of Language Models
That dataset is licensed under CC BY-NC 4.0, which is not open. It also has a bunch of garbage in it; see https://github.com/gururise/AlpacaDataCleaned
- Alpacino-13B
- GPT4-X-Alpaca 30B 4-bit, by MetaIX based on LoRA by chansung
The Alpaca cleaned dataset has integrated the Microsoft GPT-4 dataset and cleaned up many of the issues.
- Alpaca, LLaMa, Vicuna [D]
13B Alpaca Cleaned (trained on the cleaned dataset) is very impressive and works well as an instruct model without any censorship.
- Is there a good place to post datasets for the community?
There's already a community-maintained Alpaca with cleaned data, and a huge amount of work has already been done: https://github.com/gururise/AlpacaDataCleaned
- Dirty data sets and LLaMA/ALPACA...
This might be what you're looking for: https://github.com/gururise/AlpacaDataCleaned
What are some alternatives?
StableLM - StableLM: Stability AI Language Models
safetensors - Simple, safe way to store and distribute tensors
geov - The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER). We have shared a pre-trained 9B parameter model.
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
cataclysm - Cataclysm - Code generation library for the end game
simpleAI - An easy way to host your own AI API and expose alternative models, while being compatible with "open" AI clients.
instruct-eval - This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
GPT-4-LLM - Instruction Tuning with GPT-4
lm-evaluation-harness - A framework for few-shot evaluation of autoregressive language models.
LLaMA-LoRA-Tuner - UI tool for fine-tuning and testing your own LoRA models base on LLaMA, GPT-J and more. One-click run on Google Colab. + A Gradio ChatGPT-like Chat UI to demonstrate your language models.