| | LLMs-from-scratch | Deep_Object_Pose |
|---|---|---|
| Mentions | 9 | 3 |
| Stars | 16,129 | 969 |
| Star growth | - | 1.7% |
| Activity | 9.6 | 7.4 |
| Last commit | 1 day ago | 5 days ago |
| Language | Jupyter Notebook | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LLMs-from-scratch
- Finetune a GPT Model for Spam Detection on Your Laptop in Just 5 Minutes
- Insights from Finetuning LLMs for Classification Tasks
- Ask HN: Textbook Regarding LLMs (https://www.manning.com/books/build-a-large-language-model-f...)
- Comparing 5 ways to implement Multihead Attention in PyTorch
- FLaNK Stack 29 Jan 2024
- Implementing a ChatGPT-like LLM from scratch, step by step
The attention mechanism we implement in this book* is specific to LLMs in terms of the text inputs, but it's fundamentally the same attention mechanism used in vision transformers. The only difference is that in LLMs you turn text into tokens and convert those tokens into vector embeddings that go into the LLM, whereas in vision transformers you treat each image patch as a token and turn those patches into vector embeddings (a bit hard to explain without visuals here). In both the text and vision contexts it's the same attention mechanism, and in both cases it receives vector embeddings.
(*Chapter 3, already submitted last week and should be online in the MEAP soon, in the meantime the code along with the notes is also available here: https://github.com/rasbt/LLMs-from-scratch/blob/main/ch03/01...)
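The modality-agnostic point above can be sketched in a few lines of PyTorch: a minimal single-head scaled dot-product attention that only ever sees a matrix of embeddings, so the same function handles "text-token" and "image-patch" inputs unchanged. (This is a simplified illustration, not the book's exact implementation; names and dimensions are chosen for the example.)

```python
import torch

def scaled_dot_product_attention(x, W_q, W_k, W_v):
    """x: (seq_len, d_in) embeddings; W_*: (d_in, d_out) projection matrices."""
    q = x @ W_q                                  # queries
    k = x @ W_k                                  # keys
    v = x @ W_v                                  # values
    scores = q @ k.T / k.shape[-1] ** 0.5        # scaled pairwise similarity
    weights = torch.softmax(scores, dim=-1)      # rows sum to 1
    return weights @ v                           # weighted sum of values

torch.manual_seed(0)
d_in, d_out = 8, 4
tokens  = torch.randn(6, d_in)   # e.g. 6 text-token embeddings
patches = torch.randn(9, d_in)   # e.g. 9 image-patch embeddings
W_q, W_k, W_v = (torch.randn(d_in, d_out) for _ in range(3))

out_text  = scaled_dot_product_attention(tokens,  W_q, W_k, W_v)   # shape (6, 4)
out_image = scaled_dot_product_attention(patches, W_q, W_k, W_v)   # shape (9, 4)
```

The function never branches on the input's origin; only the embedding step upstream (tokenizer vs. patch projection) differs between the two modalities.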
Deep_Object_Pose
- FLaNK Stack 29 Jan 2024
- 6D object pose estimation by known 3D model
I've been doing some research in this area, and there are a few deep learning solutions to this problem. For example, NVIDIA's Deep Object Pose Estimation (DOPE) will estimate the 6-DoF pose of a known object, but you'll have to train the network if you want to detect a new object. PoseCNN, which someone else mentioned, does a similar thing. CenterPose is more interesting, as it can estimate the pose of an object from a known category, e.g. sneakers or laptops, rather than one specific object (as DOPE and PoseCNN do).
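For readers unfamiliar with the output of these networks: a 6-DoF pose is just a rotation plus a translation, commonly packed into a 4x4 homogeneous transform mapping object (model) coordinates into camera coordinates. A minimal sketch, with illustrative function and variable names (not the DOPE or PoseCNN API):

```python
import numpy as np

def pose_to_matrix(quat_xyzw, translation):
    """Build a 4x4 camera-from-object transform from a unit quaternion
    (x, y, z, w) and a translation vector (metres)."""
    x, y, z, w = quat_xyzw
    # Standard quaternion-to-rotation-matrix conversion
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R          # rotation block
    T[:3, 3] = translation # translation column
    return T

# Identity rotation, object 0.5 m in front of the camera:
T = pose_to_matrix((0.0, 0.0, 0.0, 1.0), (0.0, 0.0, 0.5))
corner_object = np.array([0.1, 0.1, 0.1, 1.0])  # a point in model space
corner_camera = T @ corner_object               # -> [0.1, 0.1, 0.6, 1.0]
```

Pose networks differ in how they get to this transform (DOPE regresses keypoints and runs PnP; PoseCNN regresses rotation and translation more directly), but the final estimate has this shape.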
- Machine Learning Workshop tonight 8-9pm hosted by Underwater Robotics!
For our last event of ArchE Week, the Ohio State Underwater Robotics Team (Website, Instagram) is hosting a workshop tonight on machine learning! The workshop is an interactive walkthrough of using machine learning solutions to make predictions. Some example problems we could try to solve are predicting a grade, predicting the weather, and the classic digit-recognition problem. Our team uses machine learning to do real-time object detection with YOLO and NVIDIA DOPE, so we may touch on that as well!
What are some alternatives?
s4 - Structured state space sequence models
PoseCNN-PyTorch - PyTorch implementation of the PoseCNN framework
reor - Private & local AI personal knowledge management app.
Hierarchical-Localization - Visual localization made easy with hloc
CenterPose - Single-Stage Keypoint-based Category-level Object Pose Estimation from an RGB Image (ICRA 2022)
iNeRF-public
2021_ML_Workshop - 2021 ML Workshop
java-snapshot-testing - Facebook-style snapshot testing for Java tests
pong-wars
llm-classifier - Classify data instantly using an LLM
qdrant - High-performance, massive-scale vector database for the next generation of AI. Also available in the cloud: https://cloud.qdrant.io/
kafkaflow - A .NET framework for Apache Kafka that makes applications simple to use and extend.