LLMs-from-scratch vs llm-classifier

| | LLMs-from-scratch | llm-classifier |
|---|---|---|
| Mentions | 9 | 4 |
| Stars | 16,129 | 183 |
| Growth | - | 11.5% |
| Activity | 9.6 | 7.8 |
| Last commit | 1 day ago | about 2 months ago |
| Language | Jupyter Notebook | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects being tracked.
Mentions of LLMs-from-scratch:
- Finetune a GPT Model for Spam Detection on Your Laptop in Just 5 Minutes
- Insights from Finetuning LLMs for Classification Tasks
- Ask HN: Textbook Regarding LLMs (https://www.manning.com/books/build-a-large-language-model-f...)
- Comparing 5 ways to implement Multihead Attention in PyTorch
- FLaNK Stack 29 Jan 2024
- Implementing a ChatGPT-like LLM from scratch, step by step

  The attention mechanism we implement in this book* is specific to LLMs in terms of the text inputs, but it's fundamentally the same attention mechanism used in vision transformers. The only difference is that an LLM turns text into tokens and converts those tokens into the vector embeddings that go into the model, whereas a vision transformer treats each image patch as a token and turns those patches into vector embeddings (a bit hard to explain without visuals here). In both the text and the vision context it's the same attention mechanism, and in both cases it receives vector embeddings.

  (*Chapter 3, already submitted last week and should be online in the MEAP soon; in the meantime, the code along with the notes is also available here: https://github.com/rasbt/LLMs-from-scratch/blob/main/ch03/01...)
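To make that concrete, here is a minimal sketch of scaled dot-product self-attention in PyTorch. This is illustrative code, not the book's exact implementation (see the linked ch03 notebook for that); the class name and dimensions are chosen for brevity. The point to notice is that the module only ever sees a batch of vector embeddings, which is why the same mechanism serves both text tokens and image patches.

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Minimal scaled dot-product self-attention (illustrative, not the book's code)."""

    def __init__(self, d_in, d_out):
        super().__init__()
        self.W_query = nn.Linear(d_in, d_out, bias=False)
        self.W_key = nn.Linear(d_in, d_out, bias=False)
        self.W_value = nn.Linear(d_in, d_out, bias=False)

    def forward(self, x):
        # x: (batch, seq_len, d_in) -- embeddings of text tokens OR image patches
        queries, keys, values = self.W_query(x), self.W_key(x), self.W_value(x)
        scores = queries @ keys.transpose(-2, -1) / keys.shape[-1] ** 0.5
        weights = torch.softmax(scores, dim=-1)
        return weights @ values

# The module never sees text or pixels, only embeddings:
embeddings = torch.randn(1, 8, 64)   # e.g. 8 token embeddings or 8 patch embeddings
out = SelfAttention(64, 64)(embeddings)
print(out.shape)                     # torch.Size([1, 8, 64])
```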
Mentions of llm-classifier:
- Lessons after a Half-billion GPT Tokens

  We do this for the null hypothesis - it uses an LLM to bootstrap a binary classifier - which handles the null case easily: https://github.com/lamini-ai/llm-classifier
- FLaNK Stack 29 Jan 2024
- Good old-fashioned AI remains viable in spite of the rise of LLMs

  LLMs introduced zero-shot learning, or “prompt engineering,” which is drastically easier to use and more effective than labeling data. You can also retrofit “prompt engineering” onto good old-fashioned ML like text classifiers. I wrote a library to do just that here: https://github.com/lamini-ai/llm-classifier

  IMO, it’s only a matter of time before this takes over all of what used to be called “deep learning.”
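For context, this is roughly what that prompt-based retrofit looks like. The sketch below follows the lamini-ai/llm-classifier README as I understand it; treat the exact names (`LaminiClassifier`, `prompt_train`, `predict`) as assumptions to verify against the repo. The explicit catch-all class also illustrates the point from the earlier comment about handling the “null” case.

```python
# Sketch based on the lamini-ai/llm-classifier README; the API names here
# (LaminiClassifier, prompt_train, predict) are assumptions -- verify them
# against the repo, and note that a Lamini API key is required to run this.
from lamini import LaminiClassifier

llm = LaminiClassifier()

# Classes are defined by descriptive prompts instead of labeled training data.
prompts = {
    "cat": "Cats are independent and aloof; they meow and purr.",
    "dog": "Dogs are loyal, social animals; they bark and fetch.",
    # A catch-all class gives inputs matching neither description somewhere to go.
    "other": "Anything that is not about cats or dogs.",
}
llm.prompt_train(prompts)

print(llm.predict(["woof woof", "meow", "stock prices fell today"]))
# Expected (roughly): ['dog', 'cat', 'other']
```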
- How to use an LLM to classify text
What are some alternatives?
s4 - Structured state space sequence models
ml-ferret
reor - Private & local AI personal knowledge management app.
llm-routing-agent - Agent that routes to different tools - LLM classifier SDK
langroid - Harness LLMs with Multi-Agent Programming
heynote - A dedicated scratchpad for developers
java-snapshot-testing - Facebook style snapshot testing for JAVA Tests
Deep_Object_Pose - Deep Object Pose Estimation (DOPE) – ROS inference (CoRL 2018)
harlequin - The SQL IDE for Your Terminal.
pong-wars
async-profiler - Sampling CPU and HEAP profiler for Java featuring AsyncGetCallTrace + perf_events
Weaviate - Weaviate is an open-source vector database that stores both objects and vectors, allowing vector search to be combined with structured filtering, with the fault tolerance and scalability of a cloud-native database.