Top 23 Jupyter Notebook Transformer Projects
-
nn
🧑🏫 60 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans (cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
-
pytorch-seq2seq
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
-
tsai
State-of-the-art Deep Learning library for Time Series and Sequences in PyTorch / fastai
-
kernl
Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.
-
deepsvg
[NeurIPS 2020] Official code for the paper "DeepSVG: A Hierarchical Generative Network for Vector Graphics Animation". Includes a PyTorch library for deep learning with SVG data.
-
Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Including examples for DETR, VQA.
-
maxvit
[ECCV 2022] Official repository for "MaxViT: Multi-Axis Vision Transformer". SOTA foundation models for classification, detection, segmentation, image quality, and generative modeling...
-
relora
Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates
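The idea named in the paper title can be sketched in a few lines: train a small low-rank factor pair, periodically merge its product into the full weight matrix, then restart with a fresh pair, so the accumulated update can reach a higher rank than any single pair. A minimal, framework-free sketch of the merge step (the matrices and "updates" below are illustrative constants, not the paper's actual training loop):

```python
# ReLoRA-style merge sketch: repeatedly fold low-rank updates into a
# full weight matrix. Illustrative only -- real training learns A and B
# by gradient descent; here the rank-1 "updates" are fixed numbers.

def matmul(A, B):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def merge(W, A, B):
    """Fold the low-rank product A @ B into the full weights W."""
    delta = matmul(A, B)
    return [[w + d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]                 # 2x2 base weights
# Two successive rank-1 updates (shapes d x r and r x k with r = 1).
updates = [([[1.0], [0.0]], [[0.0, 1.0]]),   # touches row 0
           ([[0.0], [1.0]], [[1.0, 0.0]])]   # touches row 1

for A, B in updates:
    W = merge(W, A, B)                       # merge, then "reset" A, B

print(W)  # [[1.0, 1.0], [1.0, 1.0]]
```

Note that the two merged rank-1 deltas sum to a rank-2 total update, which is the "high-rank training through low-rank updates" effect in miniature.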
-
TokenCut
(CVPR 2022) Pytorch implementation of "Self-supervised transformers for unsupervised object discovery using normalized cut"
-
vid2cleantxt
Python API & command-line tool to easily transcribe speech-based video files into clean text
-
tf-transformers
State-of-the-art faster Transformer with TensorFlow 2.0 (NLP, Computer Vision, Audio).
-
gpt-j-fine-tuning-example
Fine-tuning 6-Billion GPT-J (& other models) with LoRA and 8-bit compression
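LoRA, as used in this repo, keeps the pretrained weight frozen and trains only a low-rank adapter pair, so the forward pass becomes h = x·W + (α/r)·x·A·B. A tiny framework-free sketch of that arithmetic (shapes and the α/r scaling follow the LoRA formulation; all names and values here are illustrative, not this repo's code):

```python
# LoRA forward-pass sketch: frozen base weight plus a scaled low-rank
# adapter. Pure Python, no deep-learning framework.

def matmul(A, B):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_forward(x, W, A, B, alpha, r):
    """h = x @ W + (alpha / r) * x @ A @ B, with W kept frozen."""
    base = matmul(x, W)                  # frozen pretrained path
    adapter = matmul(matmul(x, A), B)    # trainable low-rank path
    scale = alpha / r
    return [[b + scale * a for b, a in zip(br, ar)]
            for br, ar in zip(base, adapter)]

x = [[1.0, 2.0]]                 # one input row, d = 2
W = [[1.0, 0.0], [0.0, 1.0]]     # frozen 2x2 pretrained weight
A = [[1.0], [0.0]]               # d x r with r = 1 (trainable)
B = [[0.5, 0.5]]                 # r x k (trainable)

print(lora_forward(x, W, A, B, alpha=2, r=1))  # [[2.0, 3.0]]
```

Only A and B need gradients, which is why this combines well with storing the frozen W in 8-bit, as the repo does.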
-
Project mention: Aeon: A unified framework for machine learning with time series | news.ycombinator.com | 2023-06-22
Also https://github.com/timeseriesAI/tsai
I have seen models that do something similar, but the questions they ask are not in a yes/no style like those from this T5-based question generator. Essentially, I was wondering how I would go about developing such a model.
I'd like to start experimenting with image classification in SVG format. I've found the deepsvg library, which seems to have a solid basis for handling SVG files. Unfortunately, there doesn't seem to be a large body of research in the area, and I am struggling to find an MNIST-like dataset in SVG format.
This paper [1] does attempt that and reports similar performance compared to conventional pre-training. However, they do start off by doing a normal full-rank training and claim that it is needed to 'warm start' the training process.
[1] https://arxiv.org/abs/2307.05695
Jupyter Notebook Transformer-related posts
-
Looking for Paper about LLM Fine Tuning for specific topic / Alignment Paper
-
Hungarian YouTube subtitler v1.1 (free Colab)
-
Hungarian and English subtitle demo on a Telex video (whisper-ctranslate2)
-
Text-to-Audio Generation Using Instruction Tuned LLM and Latent Diffusion Model
-
Best solution to make static chatbot for a minecraft server?
-
We’ve been forever waiting for the rest of the season, my friend doesn’t hear very well, any idea where I can get part 2 subtitles? (I hope it’s ok that I’m posting this here)
-
[P] Get 2x Faster Transcriptions with OpenAI Whisper Large on Kernl
Index
What are some of the best open-source Transformer projects in Jupyter Notebook? This list will help you:
| # | Project | Stars |
|---|---------|-------|
| 1 | nn | 48,933 |
| 2 | nlp-tutorial | 13,756 |
| 3 | pytorch-seq2seq | 5,178 |
| 4 | tsai | 4,760 |
| 5 | ru-dalle | 1,639 |
| 6 | kernl | 1,468 |
| 7 | OneFormer | 1,345 |
| 8 | poolformer | 1,226 |
| 9 | question_generation | 1,073 |
| 10 | deepsvg | 890 |
| 11 | Transformer-MM-Explainability | 715 |
| 12 | maxvit | 421 |
| 13 | relora | 401 |
| 14 | whisper-youtube | 323 |
| 15 | SpecVQGAN | 319 |
| 16 | TokenCut | 285 |
| 17 | BMT | 220 |
| 18 | mgpt | 195 |
| 19 | nested-transformer | 190 |
| 20 | vid2cleantxt | 156 |
| 21 | tf-transformers | 84 |
| 22 | gpt-j-fine-tuning-example | 63 |
| 23 | Transformer-Models-from-Scratch | 62 |