The TinyLlama project is an open endeavor to pretrain a 1.1B-parameter Llama model on 3 trillion tokens.
Small models: fewer than ~1B parameters. TinyLlama and tinydolphin are examples of small models.
Medium models: roughly 1B to 10B parameters. This is where Mistral 7B, Phi-3, Gemma from Google DeepMind, and wizardlm2 sit. Fun fact: GPT-2 was a medium-sized model, much smaller than its successors.
Large models: everything above 10B parameters. This is where Llama 3, Llama 2, Mixtral 8x22B, GPT-3, and most likely GPT-4 sit.
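Parameter count translates almost directly into memory needed just to hold the weights, which is why these size tiers matter in practice. As a rough rule of thumb, each parameter takes 2 bytes at fp16 (or 4 bytes at fp32, and less when quantized). The sketch below is a minimal illustration of that back-of-the-envelope math; the function name and the 2-bytes-per-parameter assumption are ours, not from any particular library:

```python
def estimate_weights_memory_gb(num_params_billion: float, bytes_per_param: int = 2) -> float:
    """Rough memory footprint of model weights alone.

    Assumes fp16 (2 bytes per parameter) by default; ignores activations,
    KV cache, and runtime overhead, so real usage is higher.
    """
    return num_params_billion * 1e9 * bytes_per_param / (1024 ** 3)


# One example per size tier discussed above
for name, size_b in [("TinyLlama", 1.1), ("Mistral 7B", 7.0), ("Llama 2 70B", 70.0)]:
    print(f"{name}: ~{estimate_weights_memory_gb(size_b):.1f} GB at fp16")
```

By this estimate a small model like TinyLlama fits comfortably on a laptop, a medium 7B model needs a decent GPU or aggressive quantization, and a 70B model generally requires multiple GPUs or heavy quantization to run at all.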