askai vs nanoGPT

| | askai | nanoGPT |
| --- | --- | --- |
| Mentions | 1,756 | 69 |
| Stars | 86 | 32,197 |
| Growth | - | - |
| Activity | 10.0 | 4.4 |
| Latest commit | over 1 year ago | 11 days ago |
| Language | TypeScript | Python |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
askai
-
Website Optimization Using Strapi, Astro.js and OpenAI
We'll use several interesting technologies to achieve this: Strapi CMS to handle content management and the backend, Astro, a great new framework for quickly building blazing-fast frontend apps, and ChatGPT to provide the article summaries.
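The post excerpt doesn't include its code, but a minimal sketch of the ChatGPT summarization step might look like this, assuming the official openai Python package (v1+); the model choice and prompt are illustrative, not the article's:

# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(article_text: str) -> str:
    """Ask ChatGPT for a short summary of an article body."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Summarize the article in 2-3 sentences."},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content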
-
OpenAI Bought Chatgpt.com
I was confused why https://chat.openai.com suddenly redirects to https://chatgpt.com, which results in connection refused. Turns out chatgpt.com is on many blocklists (e.g. Pi-hole) because it was flagged as a potentially unsafe domain before OpenAI acquired it. So heads up if you use Pi-hole / AdGuard etc.!
- The ChatGPT URL has changed
- Chat.openai.com now redirects (me) to chatgpt.com
-
Unofficial ChatGPT API
This API allows you to interact with ChatGPT programmatically, and I've built some cool agents on top of it. Check out the code and let me know what you think!
ChatGPT unofficial API:
This project is a Node.js application that interacts with the ChatGPT conversational AI model using Puppeteer, a Node.js library for automating web browsers.
Files:
chatgptv1.js: Contains the main logic for the ChatGPT bot, including methods for initializing the browser, sending messages, receiving replies, and handling errors.
bart.js: Contains a function that uses the Cloudflare API to summarize the conversation history when an error occurs, in order to resume the conversation.
twochatbotsconv.js: A simple usage example of the API: it creates two instances of the ChatGPT class, initiates a conversation between them, and saves the conversation history to a file.
.env: Contains the API token for the Cloudflare API, which is used by bart.js.
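For context, here is a rough sketch (in Python, just to show the request shape) of the kind of Cloudflare Workers AI summarization call bart.js presumably makes; the model name follows Cloudflare's hosted BART summarizer, and ACCOUNT_ID is a placeholder, so treat the details as assumptions rather than this project's actual code:

# pip install requests
import os
import requests

def summarize(text: str) -> str:
    """Summarize text via Cloudflare Workers AI (BART summarization model)."""
    url = (
        "https://api.cloudflare.com/client/v4/accounts/"
        f"{os.environ['ACCOUNT_ID']}/ai/run/@cf/facebook/bart-large-cnn"
    )
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {os.environ['API_TOKEN']}"},
        json={"input_text": text},
    )
    resp.raise_for_status()
    return resp.json()["result"]["summary"]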
Dependencies:
puppeteer: A Node.js library for automating web browsers.
fs: The built-in file system module in Node.js.
winston: A logging library for Node.js.
crypto: The built-in cryptography module in Node.js.
axios: A popular HTTP client library for Node.js.
dotenv: A zero-dependency module that loads environment variables from a .env file.
Usage:
Install the dependencies by running npm install in your project directory. Create a .env file in the project directory and add your Cloudflare API token:
API_TOKEN=YourfreeCloudFlareAPIToken

In your code, create a new instance of the ChatGPT class and use the sendMessage and getReply methods to interact with the ChatGPT model:
const ChatGPT = require('./chatgptv1');

(async () => {
  const chatgpt = new ChatGPT();
  await chatgpt.initializeBrowser();
  await chatgpt.sendMessage('Hello, ChatGPT!');
  const reply = await chatgpt.getReply();
  console.log(reply);
  await chatgpt.closeBrowser();
})();

If an error occurs during the conversation, the handleError method will attempt to save the conversation history and resume the conversation using the summarized context.
Before Running:
Run Google Chrome in debug mode on port 9222:
google-chrome-stable --remote-debugging-port=9222
Customization:
You can customize the behavior of the ChatGPT bot by passing options to the ChatGPT constructor:
chatbotUrl: The URL of the ChatGPT interface (default: 'https://chat.openai.com/').
headless: Whether to run the browser in headless mode (default: false).
saveConversationCallback: A callback function called with the conversation summary and the conversation file name when an error occurs.
License:
This project is licensed under the MIT License.
- It's a shame – chat.openai.com redirect to chatgpt.com is broken
-
Building a Basic Forex Rate Assistant Using Agents for Amazon Bedrock
After wrestling with it for a bit and eventually giving up, I instead turned to ChatGPT to see if it is smart enough for the task. With my free plan, I asked ChatGPT 3.5 the following:
- Learn to ask for help
-
How to build a custom GPT: Step-by-step tutorial
Go to chat.openai.com and log in
- Chat.openai.com no longer requires login
nanoGPT
-
Show HN: Predictive Text Using Only 13KB of JavaScript. No LLM
Nice work! I built something similar years ago: I compiled the probabilities from a corpus of text (public domain books) in an attempt to produce writing in the style of various authors. The results were actually quite similar to the output of nanoGPT[0]. It was very unoptimized and everything was kept in memory. I also knew nothing about embeddings at the time and only a little about NLP techniques that would certainly have helped. A graph database would probably have been better than the data structure I came up with at the time. You should look into stuff like Datalog, Tries[1], and N-Triples[2] for more inspiration.
Your idea of splitting the probabilities based on whether you're starting the sentence or finishing it is interesting, but you might also benefit from an approach that creates a "window" of text to use for lookup; an LCS[3] algorithm could do that (see the sketch after the links below). There's probably a lot of optimization you could do based on the probabilities of different sequences; I think this was the fundamental thing I was exploring in my project.
Seeing this has inspired me further to consider working on that project again at some point.
[0] https://github.com/karpathy/nanoGPT
[1] https://en.wikipedia.org/wiki/Trie
[2] https://en.wikipedia.org/wiki/N-Triples
[3] https://en.wikipedia.org/wiki/Longest_common_subsequence
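A minimal sketch of the corpus-based predictor idea described above, using a nested dict of counters as a stand-in for a trie; the corpus, window size, and function names are illustrative, not from either project:

# Count n-gram continuations, then predict the most frequent next word
# for a given context window.
from collections import defaultdict, Counter

def build_model(tokens, n=3):
    """Map each (n-1)-token context to a Counter of observed next tokens."""
    model = defaultdict(Counter)
    for i in range(len(tokens) - n + 1):
        context = tuple(tokens[i : i + n - 1])
        model[context][tokens[i + n - 1]] += 1
    return model

def predict(model, context):
    """Return the most frequent continuation of the context, if any."""
    counts = model.get(tuple(context))
    if not counts:
        return None
    return counts.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept".split()
model = build_model(corpus, n=3)
print(predict(model, ["the", "cat"]))  # most frequent continuation of "the cat"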
-
LLMs Learn to Be "Generative"
The autoregressive factorization is p(x) = p(x1) * p(x2|x1) * p(x3|x1,x2) * ... * p(xN|x1,...,x(N-1)), where x1 denotes the 1st token, x2 denotes the 2nd token, and so on.
I understand the conditional terms p(x_n|...), whose losses we compute with cross-entropy. However, I'm unsure about the probability of the very first token, p(x1). How is it calculated? Is it handled somewhere in the training process, in the model architecture, or in the loss function?
IMHO, if the model doesn't learn p(x1) properly, the chain-rule factorization cannot be completed, and we can't call LLMs "truly generative". Am I missing something here?
I asked the same question on the nanoGPT repo: https://github.com/karpathy/nanoGPT/issues/432, but I haven't found the answer I'm looking for yet. Could someone please enlighten me?
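One standard resolution, not from the thread itself: if training sequences begin with a BOS token, p(x1) is learned as the ordinary conditional p(x1 | BOS), and nanoGPT in particular trains on random crops of one long token stream, so every position, including the first, is just a next-token conditional on whatever context precedes it. A toy PyTorch illustration under the BOS assumption (stand-in model, not nanoGPT's code):

import torch
import torch.nn.functional as F

vocab_size, bos_id = 11, 10          # tokens 0-9 plus a BOS token
seq = torch.tensor([3, 1, 4, 1, 5])  # a "document"

inp = torch.cat([torch.tensor([bos_id]), seq])[:-1]  # BOS, x1, ..., x4
tgt = seq                                            # x1, ..., x5

emb = torch.nn.Embedding(vocab_size, 16)
head = torch.nn.Linear(16, vocab_size)
logits = head(emb(inp))  # (5, vocab); a real model would attend over the prefix

# The loss term at position 0 is exactly -log p(x1 | BOS); positions 1..4
# are the usual next-token conditionals. No special case for p(x1).
loss = F.cross_entropy(logits, tgt)
print(loss.item())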
-
A simulation of me: fine-tuning an LLM on 240k text messages
This repo, albeit "old" relative to how much progress there's been in LLMs, has great, simple tutorials right there, e.g. fine-tuning GPT-2 on Shakespeare: https://github.com/karpathy/nanoGPT
-
Ask HN: Is it feasible to train my own LLM?
For training from scratch, maybe a small model like https://github.com/karpathy/nanoGPT or tinyllama. Perhaps with quantization.
-
Writing a C compiler in 500 lines of Python
It does remind me of a project [1] Andrej Karpathy did, writing a neural network and training code in ~600 lines (although networks have easier logic to code than a compiler).
[1] https://github.com/karpathy/nanoGPT
-
[D] Can GPT "understand"?
But I'm still not convinced that it can't in theory. Maybe the training set or transformer size I'm using is too small. I'm using the nanoGPT implementation (https://github.com/karpathy/nanoGPT) with 24 layers, 12 heads, and 32 embedding dimensions per head. I'm using a character-based vocab: every digit is a separate token, plus +, = and EOL.
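For reference, a sketch of how that setup maps onto nanoGPT's hyperparameter names and a character-level vocab; n_layer/n_head/n_embd follow nanoGPT's conventions, but the vocab-building code is an illustration, not the poster's actual files:

# Rough mapping of the described configuration.
n_layer = 24
n_head = 12
n_embd = n_head * 32  # 32 dims per head -> 384

# Character-level vocab: digits plus the arithmetic symbols and newline.
chars = list("0123456789+=\n")
stoi = {ch: i for i, ch in enumerate(chars)}

def encode(s: str) -> list[int]:
    return [stoi[c] for c in s]

print(encode("12+34=\n"))  # every digit and symbol is its own token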
-
Transformer Attention is off by one
https://github.com/karpathy/nanoGPT/blob/f08abb45bd2285627d1...
At training time, probabilities for the next token are computed at each position, so if we feed in a sequence of n tokens we effectively get n training examples, one for each position. At inference time, we only compute the next token, since we've already output the preceding ones.
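A toy PyTorch sketch of that shift (nanoGPT's get_batch builds its x/y batches with the same one-token offset; the embedding-plus-linear "model" here is a stand-in for the transformer):

import torch
import torch.nn.functional as F

data = torch.randint(0, 50, (100,))   # pretend token stream, vocab of 50
block_size = 8

i = 0
x = data[i : i + block_size]          # inputs:  t_0 ... t_7
y = data[i + 1 : i + 1 + block_size]  # targets: t_1 ... t_8, one per position

model = torch.nn.Sequential(torch.nn.Embedding(50, 32), torch.nn.Linear(32, 50))
logits = model(x)                     # (8, 50): a prediction at every position

# One cross-entropy term per position -> 8 training examples from 8 tokens.
loss = F.cross_entropy(logits, y)
print(loss.item())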
-
Sarah Silverman Sues ChatGPT Creator for Copyright Infringement
And there are a bunch of other efforts at making training more efficient. Here's a cool model by Karpathy (OpenAI; formerly head of Tesla's AI efforts): https://github.com/karpathy/nanoGPT
-
Douglas Hofstadter changes his mind on Deep Learning and AI risk
Just being a part of any auto-regressive system does not contradict his statement.
Go look at the GPT training code, here is the exact line: https://github.com/karpathy/nanoGPT/blob/master/train.py#L12...
The model is trained purely on next-token prediction. There is no loopiness whatsoever here, strange or ordinary.
Just because you take that feedforward neural network and wrap it in a loop to feed it its own output does not change the architecture of the neural net itself. The neural network was trained in one direction and runs in one direction. Hofstadter is surprised that such an architecture yields something that looks like intelligence.
He specifically used the correct term "feedforward" to contrast with recurrent neural networks, which GPT is not: https://en.wikipedia.org/wiki/Feedforward_neural_network
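A toy sketch of the point being made: generation wraps a fixed feedforward pass in an external loop that feeds the output back in. The stand-in model below has no attention, but the loop structure matches what nanoGPT's generate() does:

import torch

vocab_size = 50
model = torch.nn.Sequential(
    torch.nn.Embedding(vocab_size, 32), torch.nn.Linear(32, vocab_size)
)

idx = torch.tensor([0])  # start token
for _ in range(10):
    logits = model(idx)                        # plain feedforward pass
    probs = torch.softmax(logits[-1], dim=-1)  # distribution over the next token
    nxt = torch.multinomial(probs, num_samples=1)
    idx = torch.cat([idx, nxt])                # feed the output back in
print(idx.tolist())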
-
NTK-Aware Scaled RoPE allows LLaMA models to have extended (8k+) context size without any fine-tuning and minimal perplexity degradation.
Does anyone have, or know of, an example implementation in plain PyTorch, not Hugging Face transformers? Like something you could plug into https://github.com/karpathy/nanoGPT ?
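Here's a rough plain-PyTorch sketch of the NTK-aware trick as described in the post: keep standard RoPE but scale the rotary base by alpha^(d/(d-2)). Note that nanoGPT itself uses learned positional embeddings rather than RoPE, so "plugging it in" would mean swapping those out first; treat this as an illustration, not a drop-in:

import torch

def ntk_scaled_rope(x, alpha=8.0, base=10000.0):
    """Rotary embeddings with an NTK-scaled base.
    x: (seq_len, n_head, head_dim), head_dim even; alpha is the
    context-extension factor."""
    seq_len, _, head_dim = x.shape
    # The NTK-aware change is this one line: rescale the base so high
    # frequencies barely move while low frequencies stretch.
    base = base * alpha ** (head_dim / (head_dim - 2))
    inv_freq = 1.0 / base ** (torch.arange(0, head_dim, 2).float() / head_dim)
    t = torch.arange(seq_len).float()
    freqs = torch.outer(t, inv_freq)                    # (seq_len, head_dim/2)
    cos, sin = freqs.cos()[:, None, :], freqs.sin()[:, None, :]
    x1, x2 = x[..., 0::2], x[..., 1::2]                 # pair up dimensions
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

q = torch.randn(16, 12, 64)         # (seq, heads, head_dim)
print(ntk_scaled_rope(q).shape)     # torch.Size([16, 12, 64])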
What are some alternatives?
ChatGPT - 🔮 ChatGPT Desktop Application (Mac, Windows and Linux)
minGPT - A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
gpt-4chan-model
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
openai-cookbook - Examples and guides for using the OpenAI API
PaLM-rlhf-pytorch - Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
ai-cli - Get answers for CLI commands from ChatGPT right from your terminal
KoboldAI-Client
nn-zero-to-hero - Neural Networks: Zero to Hero
civitai - A repository of models, textual inversions, and more
gpt_index - LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data. [Moved to: https://github.com/jerryjliu/llama_index]