| | geov | txtinstruct |
|---|---|---|
| Mentions | 2 | 13 |
| Stars | 122 | 221 |
| Growth | 0.0% | 2.7% |
| Activity | 5.0 | 5.0 |
| Last Commit | about 1 year ago | 9 months ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
geov
-
Stability AI Launches the First of Its StableLM Suite of Language Models
Looks like my edit window closed, but my results ended up being very low, so there must be something wrong (I've reached out to Stability AI just in case). It does, however, roughly match another user's 3B testing: https://twitter.com/abacaj/status/1648881680835387392
The current scores I have place it between gpt2_774M_q8 and pythia_deduped_410M (yikes!). Based on its training and specs, you'd expect it to outperform at least Pythia 6.9B... this is running on a HEAD checkout of https://github.com/EleutherAI/lm-evaluation-harness (the tagged releases don't support hf-causal), for those looking to replicate/debug.
Note: another LLM currently being trained, GeoV 9B, already far outperforms this model after just 80B training tokens: https://github.com/geov-ai/geov/blob/master/results.080B.md
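The replication setup described above can be sketched as a single harness invocation. This is a minimal sketch, not the commenter's exact command: the model checkpoint, task list, and batch size below are illustrative placeholders.

```shell
# Evaluate a causal Hugging Face model with EleutherAI's lm-evaluation-harness.
# A HEAD checkout is needed because, per the comment above, the tagged
# releases at the time did not support the hf-causal backend.
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .

# Model and tasks are example values, not the commenter's setup.
python main.py \
    --model hf-causal \
    --model_args pretrained=stabilityai/stablelm-base-alpha-7b \
    --tasks lambada_openai,hellaswag \
    --batch_size 8
```

Note that this downloads the full model weights, so it needs a machine with enough disk and (ideally) a GPU.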
- Ask HN: Open source LLM for commercial use?
txtinstruct
-
Questions about memory, tree-of-thought, planning
I tried ChromaDB but had terrible performance and could not pin down the cause (likely a problem on my end). Weaviate was easy to set up and had excellent performance; it's probably what I will use in the future. Next on my list is txtinstruct: fine-tuning a model on data that does not change, and using a vector DB for everything else, seems promising.
-
[R] Let Language Models be Language Models
The closest thing I've seen to this is txtinstruct
-
Create a ChatGPT-like program using an open source model and custom data.
txtinstruct is a framework for training instruction-tuned models
-
Stability AI Launches the First of Its StableLM Suite of Language Models
Great to see the continued release of open models. The only disappointing thing is that models keep building on CC-BY-NC-licensed datasets, which severely limits their use.
Hopefully, people consider txtinstruct (https://github.com/neuml/txtinstruct) and other approaches to generate instruction-tuning datasets without the baggage.
- Build open instruction-tuned datasets and models (r/MachineLearning)
- Build open instruction-tuned datasets and models
- [P] Build open instruction-tuned datasets and models
- Create open instruction-tuned datasets and LLM models
- Show HN: Build open instruction-tuned datasets and models
What are some alternatives?
instruct-eval - This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
StableLM - StableLM: Stability AI Language Models
AlpacaDataCleaned - Alpaca dataset from Stanford, cleaned and curated
safetensors - Simple, safe way to store and distribute tensors
pythia - The hub for EleutherAI's work on interpretability and learning dynamics
sparsegpt - Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot".
cataclysm - Cataclysm - Code generation library for the end game
llama.cpp - LLM inference in C/C++
lm-evaluation-harness - A framework for few-shot evaluation of autoregressive language models.