Finetune-gpt2xl Alternatives
Similar projects and alternatives to finetune-gpt2xl based on common topics and language
-
DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
-
detoxify
Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built using ⚡ Pytorch Lightning and 🤗 Transformers. For access to our API, please email us at [email protected].
-
Extracting-Training-Data-from-Large-Langauge-Models
A re-implementation of the "Extracting Training Data from Large Language Models" paper by Carlini et al., 2020
-
quickai
QuickAI is a Python library that makes it extremely easy to experiment with state-of-the-art Machine Learning models.
finetune-gpt2xl reviews and mentions
-
Fine-tuning?
git clone the finetuning repo (https://github.com/Xirider/finetune-gpt2xl), cd into it, and install the remaining requirements with pip install -r requirements.txt
- Training text-generating models locally
-
Dataset For GPT Fine-Tuning
I would like to understand a little better how to organize texts for fine-tuning, especially for GPT-Neo. I plan to use the procedure from this repo, which includes the following notice,
-
How to share the finetuned model
In the code suggested in the video (and in the repo), the --fp16 flag is used. But the "DeepSpeed Integration" article says that,
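For background on why the fp16 flag needs care (and why DeepSpeed pairs it with loss scaling): half precision has a far narrower representable range than float32, so large values overflow and small gradients underflow to zero. A minimal stdlib-only sketch, not taken from the repo's code, using Python's half-precision "e" struct format:

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a value through IEEE 754 half precision."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

print(to_fp16(65504.0))  # 65504.0 -- the largest finite fp16 value survives
print(to_fp16(1e-8))     # 0.0 -- tiny gradients underflow to zero
try:
    struct.pack("<e", 70000.0)  # beyond the fp16 range
except OverflowError:
    print("overflow")
```

Loss scaling multiplies the loss (and hence gradients) by a large factor before the backward pass so small gradients stay representable, then divides it back out before the optimizer step.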
- [D] I made a script that does all the work to deploy GPT-NEO on Windows 10. (Please Test)
-
[Project] Estimating fine-tuning cost
Fine-tuning GPT-Neo 2.7B on WikiText (180 MB) took me about 45 minutes on one preemptible V100 instance on Google Cloud. The instance cost $1.30 per hour, so the run cost roughly $1. Here are the steps: https://github.com/Xirider/finetune-gpt2xl
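The cost estimate above is simple arithmetic; a quick sketch (the numbers come from the comment, the rounding is mine):

```python
hours = 45 / 60        # 45 minutes of training, in hours
price_per_hour = 1.30  # preemptible V100 rate quoted above (USD)
cost = hours * price_per_hour
print(f"${cost:.2f}")  # $0.98 -- i.e. about $1 for the whole run
```

Preemptible instances can be reclaimed mid-run, so a safer budget would add headroom for restarts.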
-
[P] Guide: Finetune GPT2-XL (1.5 Billion Parameters, the biggest model) on a single 16 GB VRAM V100 Google Cloud instance with Huggingface Transformers using DeepSpeed
Here I explain the setup and commands to get it running: https://github.com/Xirider/finetune-gpt2xl
- Guide: Finetune GPT2-XL (1.5 Billion Parameters, the biggest model) on a single 16 GB VRAM V100 Google Cloud instance with Huggingface Transformers using DeepSpeed
Stats
Xirider/finetune-gpt2xl is an open-source project licensed under the MIT License, an OSI-approved license.
The primary programming language of finetune-gpt2xl is Python.