| | pythia | geov |
|---|---|---|
| Mentions | 7 | 2 |
| Stars | 2,120 | 122 |
| Growth | 1.7% | 0.0% |
| Activity | 7.8 | 5.0 |
| Latest Commit | 13 days ago | about 1 year ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pythia
- If you can't reproduce the model then it's not open-source
You can grep for bad words. What you can't do (unless hoops are jumped through) is verify that the weights came from the same dataset. You can set the same random seed and still get different results; the calculations are not that deterministic (https://pytorch.org/docs/stable/notes/randomness.html#reprod...).
> I am overall skeptical that this is true in the case of LLMs

This skepticism seems reasonable. EleutherAI has documentation for reproducing training (https://github.com/EleutherAI/pythia#reproducing-training), but so far I haven't seen it lead to anything.
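As a minimal sketch of the reproducibility switches that the linked PyTorch note describes (even with all of them set, identical results are only expected on matching hardware, driver, and library versions):

```python
import torch

torch.manual_seed(0)                      # seed the CPU and CUDA RNGs
torch.use_deterministic_algorithms(True)  # raise an error on nondeterministic ops
torch.backends.cudnn.benchmark = False    # keep cuDNN from autotuning to varying kernels

# Even with these set, PyTorch only promises reproducibility across runs
# on the same hardware and library versions, which is why verifying that
# published weights came from a given dataset is so hard.
```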
- Local Alternatives of ChatGPT and Midjourney
LLaMA, Pythia, RWKV, Flan-T5 (self-hosted), FlexGen
- Ask HN: Open source LLM for commercial use?
- A New AI Research Proposes Pythia: A Suite of Decoder-Only Autoregressive Language Models Ranging from 70M to 12B Parameters
GitHub: https://github.com/EleutherAI/pythia
- Pythia: Interpreting Autoregressive Transformers Across Time and Scale
- AI computing startup Cerebras releases open source ChatGPT-like models
- Is there a way to easily train ChatGPT or GPT on custom knowledge?
Pythia is another, smaller option that seems to have pretty good performance, as does FLAN. Both are okay for commercial use AFAIK (though double-check for yourself).
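For anyone wanting to try it, a minimal sketch of loading one of the Pythia checkpoints with Hugging Face transformers (the checkpoint name follows the suite's published naming; this is an illustration, not a tuned setup):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Other sizes in the suite: 70m, 160m, 1b, 1.4b, 2.8b, 6.9b, 12b.
name = "EleutherAI/pythia-410m"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("The Pythia suite is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```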
geov
- Stability AI Launches the First of Its StableLM Suite of Language Models
Looks like my edit window closed, but my results ended up being very low, so there must be something wrong (I've reached out to StabilityAI just in case). They do, however, roughly match another user's 3B testing: https://twitter.com/abacaj/status/1648881680835387392
The current scores I have place it between gpt2_774M_q8 and pythia_deduped_410M (yikes!). Based on its training and specs you'd expect it to outperform Pythia 6.9B at least. For those looking to replicate/debug: this is running on a HEAD checkout of https://github.com/EleutherAI/lm-evaluation-harness (releases don't support hf-causal).
Note: another LLM currently in training, GeoV 9B, already far outperforms this model at just 80B tokens trained: https://github.com/geov-ai/geov/blob/master/results.080B.md
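For those replicating, a sketch of driving the harness from Python instead of the CLI; treat the `hf-causal` model type, the `simple_evaluate` entry point, and the task names as assumptions about the harness version in use at the time:

```python
from lm_eval import evaluator

# Assumes a HEAD checkout of EleutherAI/lm-evaluation-harness recent
# enough to support the hf-causal model type (per the comment above,
# the tagged releases do not).
results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=stabilityai/stablelm-base-alpha-7b",
    tasks=["lambada_openai", "hellaswag"],
)
print(results["results"])
```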
- Ask HN: Open source LLM for commercial use?
What are some alternatives?
lollms-webui - Lord of Large Language Models Web User Interface
txtinstruct - 📚 Datasets and models for instruction-tuning
GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
AlpacaDataCleaned - Alpaca dataset from Stanford, cleaned and curated
ChatGLM-6B - ChatGLM-6B: An Open Bilingual Dialogue Language Model
instruct-eval - This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
sparsegpt - Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot".
stanford_alpaca - Code and documentation to train Stanford's Alpaca models, and generate the data.
StableLM - StableLM: Stability AI Language Models
stable-diffusion-webui - Stable Diffusion web UI
llama.cpp - LLM inference in C/C++