| | next-token-prediction | dstack |
|---|---|---|
| Mentions | 6 | 17 |
| Stars | 119 | 1,123 |
| Growth | - | 6.2% |
| Activity | 7.9 | 9.8 |
| Last commit | about 1 month ago | 4 days ago |
| Language | JavaScript | Python |
| License | - | Mozilla Public License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
next-token-prediction
-
Ask HN: Who wants to be hired? (May 2024)
Neat project: https://github.com/bennyschmidt/next-token-prediction
-
Ask HN: How does deploying a fine-tuned model work
GPU vs CPU:
It's faster to use a GPU. Playing a game on a laptop with onboard graphics might technically work, but a good dedicated graphics card gives you more processing power and VRAM, making for a much faster experience.
When is GPU needed:
You need it both for initial training (which it sounds like you've done) and for inference, i.e. when someone prompts the LLM and it parses their query. So to answer your question: the web server that handles incoming LLM queries also needs a great GPU, because with any amount of user activity it will effectively be running 24/7 as users continually prompt it, the way they would use any other site you have online.
When is GPU not needed:
Computationally, inference is just "next token prediction", but depending on how the user enters their prompt, sometimes those predictions (called completions) can be served from pre-computed embeddings - in other words, by performing a simple lookup - and the GPU is never invoked. For example, in this autocompletion/token-prediction library I wrote that uses an n-gram language model (https://github.com/bennyschmidt/next-token-prediction), a GPU is only needed for the initial training on text data; there's no neural inference step, so completions are fast and don't invoke the GPU - they are effectively lookups. An LM like this could be trained offline and deployed cheaply, no cloud GPU needed. And you'll notice that LLMs sometimes work this way too, especially with follow-up prompting once the model already has the needed embeddings from the initial prompt - for some responses, an LLM is fast like this.
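To make "completions are lookups" concrete, here is a minimal sketch (illustrative only - not the actual next-token-prediction API) of a bigram model where training builds a table of counts and completion is a plain key lookup:

```javascript
// Toy bigram model: training counts word pairs; completion is a lookup.
const bigramCounts = {};

function train(text) {
  const tokens = text.toLowerCase().split(' ');
  for (let i = 0; i < tokens.length - 1; i++) {
    const next = (bigramCounts[tokens[i]] ??= {});
    next[tokens[i + 1]] = (next[tokens[i + 1]] ?? 0) + 1;
  }
}

function complete(token) {
  const next = bigramCounts[token.toLowerCase()];
  if (!next) return null;
  // Most frequent continuation - a dictionary lookup, no GPU involved.
  return Object.entries(next).sort((a, b) => b[1] - a[1])[0][0];
}

train('the cat sat on the mat');
console.log(complete('the')); // "cat" (ties broken by insertion order)
```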
On-prem:
Beyond the GPU requirement, it's not fundamentally different than any other web server. You can buy/build a gaming PC with a decent GPU, forward ports, get a domain, install a cert, run your model locally, and now you have an LLM server online. If you like Raspberry Pi, you might look into the NVIDIA Jetson Nano (https://www.nvidia.com/en-us/autonomous-machines/embedded-sy...) as it's basically a tiny computer like the Pi but with a GPU and designed for AI. So you can cheaply and easily get an AI/LLM server running out of your apartment.
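As a rough sketch of that setup (the `generate` function below is a hypothetical placeholder for whatever local inference call you use - llama.cpp bindings, Ollama's local HTTP API, etc.), serving a local model is ordinary web-server code:

```javascript
// Sketch: wrap a locally-running model in a plain HTTP endpoint.
const http = require('http');

// Hypothetical stand-in for a real local inference call.
async function generate(prompt) {
  return `completion for: ${prompt}`;
}

http
  .createServer((req, res) => {
    let body = '';
    req.on('data', (chunk) => (body += chunk));
    req.on('end', async () => {
      const { prompt } = JSON.parse(body || '{}');
      res.setHeader('Content-Type', 'application/json');
      res.end(JSON.stringify({ completion: await generate(prompt) }));
    });
  })
  .listen(8080); // forward this port, add a domain + cert, and it's online
```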
Cloud & serverless:
Hosting is not very different from conventional web servers, except that the hardware has more VRAM and the software is designed for LLM access rather than a typical web backend (different database technologies, different frameworks/libraries). AWS already has options for deploying your own models, and there are a number of tutorials showing how to deploy Ollama on EC2. There are also serverless providers - Replicate, Lightning.AI - these are your Vercels and Herokus: you might pay a little more, but the convenience gets you up and running quickly.
TLDR: It's like any other web server, except you need more GPU/VRAM to do training and inference. Whether you run it yourself on-prem, host it in the cloud, use a PaaS, etc. - those choices are mostly the same as for any other project.
-
Show HN: LLaMA 3 tokenizer runs in the browser
Thanks for clarifying, this is exactly where I was confused.
I just read about how both sentencepiece and tiktoken tokenize.
Thanks for making this (in JavaScript no less!) and putting it online! I'm going to use it in my auto-completion library (here: https://github.com/bennyschmidt/next-token-prediction/blob/m...) instead of just `.split(' ')` as I'm pretty sure it will be more nuanced :)
Awesome work!
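A toy illustration of why a real tokenizer is more nuanced than splitting on spaces (the subword pieces below are invented for the example; an actual tokenizer like the LLaMA 3 one learns its vocabulary from data):

```javascript
const text = 'autocompletion rocks!';

// Naive whitespace splitting: punctuation sticks to words, and rare
// words stay single opaque units.
console.log(text.split(' ')); // ["autocompletion", "rocks!"]

// A subword tokenizer emits reusable pieces instead, e.g.:
//   ["auto", "completion", " rocks", "!"]
// so unseen words still decompose into known tokens.
```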
-
Show HN: Next-token prediction in JavaScript – build fast LLMs from scratch
People on here will be happy to hear that I do a similar thing, except my sequence length is dynamic because I also use a second data structure - to use pretentious academic speak: I use a simple bigram LM (2-gram) for single next-word likelihood and, separately, a trie that models all words and phrases (so, n-gram). Not sure how many total nodes, because sentence lengths vary in the training data, but there are about 200,000 entry points (keys), so probably 2-10 million total nodes in the default setup.
"Constructing 7-gram LM": They likely started with bigrams (what I use), which only predict the next word from a single given word, then increased accuracy by modeling longer sequences, eventually letting the user (developer) pass in any length they want to model (https://github.com/google-research/google-research/blob/5c87...). I thought of this too at first, but I actually got more accuracy (and speed) out of keeping plain bigrams and building a totally separate structure that models an n-gram of all phrases (which could be a 24-token sequence, 100+ tokens, etc. - I model it all); if the phrase is found, I just take the bigram prediction for the last token of the phrase. This works better when the training data is diverse (for a very generic model), but theirs would probably outperform mine on accuracy when the training data has a lot of nearly identical sentences that only diverge wildly toward the end - I don't find this pattern in typical data, though maybe certain coding and other tasks have it. But because their n isn't dynamic and you have to provide that number, even a low one (any phrase longer than 2 words), theirs always has to do more lookup work than simple bigrams, and its accuracy is also limited by that fixed number. I wonder how scalable that is - if I need to train on occasional ~100-word sentences but also (and mostly) ~3-word sentences, I guess I set it to 100 and end up with a mostly "undefined" trie.
I also thought of the name "LMJS", theirs is "jslm" :) but I went with simply "next-token-prediction" because that's what it ultimately does as a library. I don't know what theirs is really designed for other than proving a concept. Most of their code files are actually comments and hypothetical scenarios.
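A simplified sketch of the two-structure approach described above (illustrative only; the real library's internals differ): a trie models full training phrases, a separate table models bigrams, and prediction walks the trie as far as the input matches before falling back to the bigram prediction for the last matched token.

```javascript
const trie = {};
const bigrams = {};

function trainPhrase(tokens) {
  // The trie models the whole phrase (a dynamic-length n-gram).
  let node = trie;
  for (const t of tokens) node = node[t] ??= {};
  // The bigram table models single next-word frequency.
  for (let i = 0; i < tokens.length - 1; i++) {
    const next = (bigrams[tokens[i]] ??= {});
    next[tokens[i + 1]] = (next[tokens[i + 1]] ?? 0) + 1;
  }
}

function predict(tokens) {
  // Walk the trie as far as the input phrase matches...
  let node = trie;
  let lastMatched = tokens[0];
  for (const t of tokens) {
    if (!node[t]) break;
    node = node[t];
    lastMatched = t;
  }
  // ...then take the bigram prediction for the last matched token.
  const next = bigrams[lastMatched];
  if (!next) return null;
  return Object.entries(next).sort((a, b) => b[1] - a[1])[0][0];
}

trainPhrase(['the', 'quick', 'brown', 'fox']);
console.log(predict(['the', 'quick'])); // "brown"
```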
I recently added a browser example showing simple autocomplete using my library: https://github.com/bennyschmidt/next-token-prediction/tree/m... (video)
And next I'm implementing 8-dimensional embeddings that are normalized to vectors of values between 0 and 1, to see if doing math on them does anything useful beyond similarity. Right now they look like this:
[nextFrequency, prevalence, specificity, length, firstLetter, lastLetter, firstVowel, lastVowel]
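As a sketch of the kind of math this enables (the two example vectors below are invented for illustration), cosine similarity over these normalized 8-dimensional vectors takes only a few lines:

```javascript
// Cosine similarity over 8-dimensional feature vectors in [0, 1].
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// [nextFrequency, prevalence, specificity, length,
//  firstLetter, lastLetter, firstVowel, lastVowel]
const cat = [0.40, 0.70, 0.20, 0.12, 0.11, 0.76, 0.03, 0.03];
const dog = [0.30, 0.60, 0.25, 0.12, 0.15, 0.26, 0.57, 0.57];

console.log(cosineSimilarity(cat, dog).toFixed(3)); // "0.666"
```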
dstack
-
Pyinfra: Automate Infrastructure Using Python
We're building a similar tool, except we focus on AI workloads. We also support on-prem clusters now, in addition to GPU clouds. https://github.com/dstackai/dstack
-
Show HN: Open-source alternative to HashiCorp/IBM Vault
Not exactly this, but something related: at https://github.com/dstackai/dstack, we're building an alternative to K8S for AI infra.
-
Ask HN: How does deploying a fine-tuned model work
You can use https://github.com/dstackai/dstack to deploy your model to the most affordable GPU clouds. It supports auto-scaling and other features.
Disclaimer: I’m the creator of dstack.
-
FLaNK Stack Weekly 19 Feb 2024
-
Show HN: I Built an Open Source API with Insanely Fast Whisper and Fly GPUs
Great job on the project! It looks fantastic. Thanks to your post, I discovered Fly's GPUs. We are currently developing a platform called https://github.com/dstackai/dstack that enables users to run any model on any cloud. I am curious if it would be possible to add support for Fly.io as well. If you are interested in collaborating on this, please let me know!
-
Show HN: Dstack – an open-source engine for running GPU workloads
-
[P] I built a tool to compare cloud GPUs. How should I improve it?
I also noticed that the creator of this app, dstack, is affiliated with TensorDock, the top result for most if not all queries. If that's the case, perhaps a direct link to the cheapest machine could be provided? I haven't used TensorDock, so I don't know if this is mechanically possible.
-
Running dev environments and ML tasks cost-effectively in any cloud
Here's the repository with all the important links, including documentation, examples, and more: https://github.com/dstackai/dstack
-
Dstack Hub
Hey everyone, I'm happy to release dstack Hub, an open-source tool that helps teams manage their ML workflows more effectively without vendor lock-in.
dstack Hub extends dstack [1] with workflow scheduling capabilities and user management. Here's how it works: run dstack Hub via Docker, use its UI to configure projects and cloud credentials, then pass the URL and a personal token to the dstack CLI. Now you can run workflows through the CLI, and the Hub will orchestrate them in the cloud on your behalf.
This is a beta release and we plan to continuously improve it. We'd love to hear your feedback and answer any questions!
[1] https://github.com/dstackai/dstack
-
Running Stable Diffusion Locally & in Cloud with Diffusers & dstack
To help you overcome this challenge, we've written an article that guides you through the steps of using diffusers and dstack to generate images from prompts, both locally and in the cloud, with a simple example.
What are some alternatives?
msdocs-python-django-azure-container-apps - Python web app using Django that can be deployed to Azure Container Apps.
dstack-examples - A collection of examples demonstrating how to use dstack
zenml - ZenML 🙏: Build portable, production-ready MLOps pipelines. https://zenml.io.
flyte - Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks.
lambdapi - Serverless runtime environment tailored for code produced by LLMs. Automatic API generation from your code, support for multiple programming languages, and integrated file and database storage solutions.
metaflow - :rocket: Build and manage real-life ML, AI, and data science projects with ease!
openvino-plugins-ai-audacity - A set of AI-enabled effects, generators, and analyzers for Audacity®.
pdfChatGPT - QA with the pdf using gpt-3.5
FairytaleDJ - You got a friend in me
voxscript-demos - Demos in various languages for Voxscript
prql - PRQL is a modern language for transforming data — a simple, powerful, pipelined SQL replacement
mwmbl - An open-source, non-profit search engine implemented in Python