next-token-prediction VS OpenPipe

Compare next-token-prediction vs OpenPipe and see what their differences are.

next-token-prediction

Next-token prediction in JavaScript — build fast language and diffusion models. (by bennyschmidt)

OpenPipe

Turn expensive prompts into cheap fine-tuned models (by OpenPipe)
                  next-token-prediction    OpenPipe
Mentions          6                        13
Stars             120                      2,401
Growth            -                        2.3%
Activity          7.9                      9.9
Last commit       about 1 month ago        4 days ago
Language          JavaScript               TypeScript
License           -                        Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

next-token-prediction

Posts with mentions or reviews of next-token-prediction. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-01.
  • Ask HN: Who wants to be hired? (May 2024)
    17 projects | news.ycombinator.com | 1 May 2024
    Neat project: https://github.com/bennyschmidt/next-token-prediction
  • Ask HN: How does deploying a fine-tuned model work
    4 projects | news.ycombinator.com | 23 Apr 2024
    GPU vs CPU:

    It's faster to use a GPU. It's like playing a game on a laptop with onboard graphics versus a good external graphics card: it might technically work either way, but a good GPU gives you more processing power and VRAM, which makes for a faster experience.

    When is GPU needed:

    You need it both for initial training (which it sounds like you've done) and for when someone prompts the LLM and it processes their query (called inference). So to answer your question: the web server that handles incoming LLM queries also needs a great GPU, because with any amount of user activity it will be running effectively 24/7 as users continually prompt it, just as they would use any other site you have online.

    When is GPU not needed:

    Computationally, inference is just "next token prediction", but depending on how the user enters their prompt, it can sometimes provide those predictions (called completions) from pre-computed embeddings - in other words, by performing a simple lookup - and the GPU is not invoked. For example, in this autocompletion/token-prediction library I wrote that uses an n-gram language model (https://github.com/bennyschmidt/next-token-prediction), the GPU is only needed for initial training on text data, but there's no inference component to it, so completions are fast and don't invoke the GPU - they are effectively lookups. An LM like this could be trained offline and deployed cheaply, no cloud GPU needed. And you will notice that LLMs sometimes work this way, especially with follow-up prompting once the model already has the needed embeddings from the initial prompt - for some responses, an LLM is fast like this.
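
    To make the lookup idea concrete, here's a minimal sketch (illustrative only, not the library's actual implementation): next-token counts are built once, offline, and completions are then plain object lookups with no GPU involved.

      // Minimal "next token prediction" by lookup (illustrative sketch,
      // not the library's actual implementation).
      function train(text) {
        const model = {};
        const tokens = text.toLowerCase().split(/\s+/).filter(Boolean);
        for (let i = 0; i < tokens.length - 1; i++) {
          const current = tokens[i];
          const next = tokens[i + 1];
          model[current] = model[current] || {};
          model[current][next] = (model[current][next] || 0) + 1;
        }
        return model; // can be serialized to JSON and shipped with the app
      }

      function complete(model, word) {
        const candidates = model[word.toLowerCase()] || {};
        // Picking the most frequent next token is a plain lookup - no GPU involved.
        return Object.keys(candidates)
          .sort((a, b) => candidates[b] - candidates[a])[0] || null;
      }

      const lm = train('the cat sat on the mat and the cat slept');
      console.log(complete(lm, 'the')); // -> 'cat'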

    On-prem:

    Beyond the GPU requirement, it's not fundamentally different than any other web server. You can buy/build a gaming PC with a decent GPU, forward ports, get a domain, install a cert, run your model locally, and now you have an LLM server online. If you like Raspberry Pi, you might look into the NVIDIA Jetson Nano (https://www.nvidia.com/en-us/autonomous-machines/embedded-sy...) as it's basically a tiny computer like the Pi but with a GPU and designed for AI. So you can cheaply and easily get an AI/LLM server running out of your apartment.
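
    For example, a bare-bones Node server that forwards prompts to a locally running model is all the "LLM server" part really is (this sketch assumes Ollama on its default port 11434; swap in whatever runtime and model you actually use):

      // Minimal on-prem LLM endpoint: accept a prompt, forward it to a model
      // running locally. Assumes Ollama on its default port 11434 - adjust
      // the model name, port, and runtime to whatever you actually run.
      const http = require('http');

      http.createServer((req, res) => {
        let body = '';
        req.on('data', chunk => { body += chunk; });
        req.on('end', async () => {
          const { prompt } = JSON.parse(body || '{}');
          const upstream = await fetch('http://localhost:11434/api/generate', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ model: 'llama3', prompt, stream: false })
          });
          const { response } = await upstream.json();
          res.writeHead(200, { 'Content-Type': 'application/json' });
          res.end(JSON.stringify({ completion: response }));
        });
      }).listen(8080); // forward this port + add a cert and you're online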

    Cloud & serverless:

    Hosting is not very different from hosting conventional web servers, except that the hardware has more VRAM and the software is designed for LLM access rather than a typical web backend (different db technologies, different frameworks/libraries). Of course, AWS already has options for deploying your own models, and there are a number of tutorials showing how to deploy Ollama on EC2. There are also serverless providers - Replicate, Lightning.AI - these are your Vercels and Herokus: you might pay a little more, but you get the convenience of being up and running quickly.

    TLDR: It's like any other web server, except you need more GPU/VRAM to do training and inference. Whether you want to run it yourself on-prem, host it in the cloud, use a PaaS, etc., those decisions are mostly the same as for any other project.

  • Show HN: LLaMA 3 tokenizer runs in the browser
    2 projects | news.ycombinator.com | 21 Apr 2024
    Thanks for clarifying, this is exactly where I was confused.

    I just read about how both sentencepiece and tiktoken tokenize.

    Thanks for making this (in JavaScript no less!) and putting it online! I'm going to use it in my auto-completion library (here: https://github.com/bennyschmidt/next-token-prediction/blob/m...) instead of just `.split(' ')` as I'm pretty sure it will be more nuanced :)

    Awesome work!
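
    Roughly, the swap looks like this (the package name and encode/decode API below are placeholders, not the actual exports of the browser tokenizer):

      // Before: naive whitespace tokenization
      const naiveTokens = 'The quick brown fox'.split(' ');

      // After: a subword tokenizer. The package name and encode/decode API
      // here are placeholders for whatever the browser tokenizer exports.
      const { encode, decode } = require('llama3-tokenizer-placeholder');
      const ids = encode('The quick brown fox');    // token ids
      const pieces = ids.map(id => decode([id]));   // subword pieces, not just words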

  • Show HN: Next-token prediction in JavaScript – build fast LLMs from scratch
    11 projects | news.ycombinator.com | 10 Apr 2024
    People on here will be happy to say that I do a similar thing; however, my sequence length is dynamic because I also use a 2nd data structure. To use pretentious academic speak: I use a simple bigram LM (2-gram) for single next-word likelihood, and separately a trie that models all words and phrases (so, n-gram). Not sure how many total nodes, because sentence lengths vary in the training data, but there are about 200,000 entry points (keys), so probably about 2-10 million total nodes in the default setup.

    "Constructing 7-gram LM": They likely started with bigrams (what I use), which only tell you the next word based on one given word, then tried to increase accuracy by modeling more words in a sequence, and eventually let the user (developer) pass in however many they want to model (https://github.com/google-research/google-research/blob/5c87...). I thought of this too at first, but I actually got more accuracy (and speed) out of keeping the bigrams and building a totally separate structure that models an n-gram of all phrases (it could be a 24-token sequence or 100+ tokens; I model it all); if the phrase is found, I just take the bigram assumption of its last token. This works better when the training data is more diverse (for a very generic model), but theirs would probably outperform mine on accuracy when the training data has a lot of nearly identical sentences that only change wildly toward the end. I don't find that pattern in typical data, though maybe certain coding and other tasks have it. But because theirs isn't dynamic and makes you provide that number, even a low one (any phrase longer than 2 words), it will always have to do more lookup work than simple bigrams, and it's also limited by that fixed number as far as accuracy goes. I wonder how scalable that is: if I need to train on occasional ~100-word sentences but also (and mostly) ~3-word sentences, I guess I set it to 100 and end up with a mostly "undefined" trie.

    I also thought of the name "LMJS" - theirs is "jslm" :) - but I went with simply "next-token-prediction" because that's what the library ultimately does. I don't know what theirs is really designed for other than proving a concept; most of their code files are actually comments and hypothetical scenarios.
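
    As a rough sketch of the two-structure approach described above (illustrative only, not the library's actual code): a bigram table for the next-word assumption, plus a trie over whole phrases that decides whether a phrase is known at all.

      // Illustrative sketch of the two structures described above
      // (not the actual library code): bigram table + phrase trie.
      const bigrams = {};    // word -> { nextWord: count }
      const phrases = {};    // trie: nested nodes keyed by token

      function trainSentence(sentence) {
        const tokens = sentence.toLowerCase().split(/\s+/).filter(Boolean);
        let node = phrases;
        tokens.forEach((token, i) => {
          node = node[token] = node[token] || {};  // extend the phrase trie
          if (i < tokens.length - 1) {
            const next = tokens[i + 1];
            bigrams[token] = bigrams[token] || {};
            bigrams[token][next] = (bigrams[token][next] || 0) + 1;
          }
        });
      }

      function predict(phrase) {
        const tokens = phrase.toLowerCase().split(/\s+/).filter(Boolean);
        // Walk the trie to see whether the whole phrase is known...
        let node = phrases;
        for (const token of tokens) {
          node = node[token];
          if (!node) return null; // unknown phrase
        }
        // ...then take the bigram assumption of the phrase's last token.
        const candidates = bigrams[tokens[tokens.length - 1]] || {};
        return Object.keys(candidates)
          .sort((a, b) => candidates[b] - candidates[a])[0] || null;
      }

      trainSentence('deep in the woods there was a fox');
      console.log(predict('deep in the')); // -> 'woods'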

    I recently added a browser example showing simple autocomplete using my library: https://github.com/bennyschmidt/next-token-prediction/tree/m... (video)

    And next I'm implementing 8-dimensional embeddings that are converted to normalized vectors between 0 and 1, to see if doing math on them does anything useful beyond similarity. Right now they look like this:

      [nextFrequency, prevalence, specificity, length, firstLetter, lastLetter, firstVowel, lastVowel]
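
    One way such a vector could be normalized into the 0-1 range (the feature definitions here are assumptions made for illustration; only the field names come from the comment above):

      // Illustrative only: scale each hand-picked feature into [0, 1] so all
      // 8 dimensions are comparable. The feature definitions are assumptions;
      // only the field names come from the comment above.
      const ALPHABET = 'abcdefghijklmnopqrstuvwxyz';
      const VOWELS = 'aeiou';

      function embed(word, stats) {
        const letters = word.toLowerCase();
        const letterPos = c => (ALPHABET.indexOf(c) + 1) / ALPHABET.length;
        const vowelPos = c => c ? (VOWELS.indexOf(c) + 1) / VOWELS.length : 0;
        const firstVowel = [...letters].find(c => VOWELS.includes(c)) || '';
        const lastVowel = [...letters].reverse().find(c => VOWELS.includes(c)) || '';

        return [
          stats.nextFrequency / stats.maxFrequency, // how often it follows the current token
          stats.prevalence / stats.corpusSize,      // how common the word is overall
          stats.specificity,                        // assumed to already be 0-1
          Math.min(letters.length / 20, 1),         // word length, capped at 20 chars
          letterPos(letters[0]),
          letterPos(letters[letters.length - 1]),
          vowelPos(firstVowel),
          vowelPos(lastVowel)
        ];
      }

      // Cosine similarity (or any other vector math) then works on these 8-d vectors:
      const dot = (a, b) => a.reduce((sum, x, i) => sum + x * b[i], 0);
      const cosine = (a, b) => dot(a, b) / (Math.sqrt(dot(a, a) * dot(b, b)) || 1);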

OpenPipe

Posts with mentions or reviews of OpenPipe. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-23.
  • Ask HN: How does deploying a fine-tuned model work
    4 projects | news.ycombinator.com | 23 Apr 2024
    - Fireworks: $0.20

    If you're looking for an end-to-end flow that will help you gather the training data, validate it, run the fine tune and then define evaluations, you could also check out my company, OpenPipe (https://openpipe.ai/). In addition to hosting your model, we help you organize your training data, relabel if necessary, define evaluations on the finished fine-tune, and monitor its performance in production. Our inference prices are higher than the above providers, but once you're happy with your model you can always export your weights and host them on one of the above!

  • OpenAI: Improvements to the fine-tuning API and expanding our cus
    1 project | news.ycombinator.com | 4 Apr 2024
    Btw, if you've tried fine-tuning OpenAI models before January and came away unimpressed with the quality of the finished model, it's worth trying again. They made some unannounced changes in the last few months that make the fine-tuned models much stronger.

    That said, we've found that Mixtral fine-tunes still typically outperform GPT-3.5 fine-tunes, and are far cheaper to serve. It's a bit of a plug, but I honestly think we have the simplest platform to fine-tune multiple models (both API-based like OpenAI as well as open-source alternatives) side by side and compare quality. https://openpipe.ai

  • GPT-4, without specialized training, beat a GPT-3.5 class model that cost $10B
    3 projects | news.ycombinator.com | 24 Mar 2024
    IMO it's possible to over-generalize from this datapoint (lol). While it's true that creating a general "finance" model that's stronger than GPT-4 is hard, training a task-specific model is much easier. E.g. "a model that's better than GPT-4 at answering finance-related questions": very hard. "A model that's better than GPT-4 at extracting forward-looking financial projections in a standard format": very easy.

    And in practice, most tasks people are using GPT-4 for in production are more like the latter than the former.

    (Disclaimer: building https://openpipe.ai, which makes it super easy to productize this workflow).
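
    To make the distinction concrete, the narrow-task variety usually boils down to a fixed prompt plus a rigid output schema that a small fine-tuned model can learn to reproduce (the prompt, schema, and callModel helper below are invented for illustration):

      // Hypothetical narrow extraction task: a fixed instruction and a rigid
      // output shape. A small fine-tuned model only has to learn this one mapping.
      const SYSTEM_PROMPT =
        'Extract every forward-looking financial projection from the excerpt. ' +
        'Respond with JSON only, matching: ' +
        '[{ "metric": string, "value": string, "period": string }]';

      // callModel is a placeholder for whichever client you use to hit the model.
      async function extractProjections(excerpt, callModel) {
        const raw = await callModel({ system: SYSTEM_PROMPT, user: excerpt });
        return JSON.parse(raw); // easy to validate, and easy to eval against labeled data
      }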

  • Fine Tuning LLMs to Process Massive Amounts of Data 50x Cheaper than GPT-4
    3 projects | dev.to | 8 Jan 2024
    In this article I'll share how I used OpenPipe to effortlessly fine tune Mistral 7B, reducing the cost of one of my prompts by 50x. I included tips and recommendations if you are doing this for the first time, because I definitely left some performance increases on the table. Skip to Fine Tuning Open Recommender if you are specifically interested in what the fine tuning process looks like. You can always DM me on Twitter (@experilearning) or leave a comment if you have questions!
  • OpenAI Switch Kit: Swap OpenAI with any open-source model
    5 projects | news.ycombinator.com | 6 Dec 2023
    The problem is that most non-OpenAI models haven't actually been fine-tuned with function calling in mind, and getting a model to output function-calling-like syntax without having been trained on it is quite unreliable. There are a few alternatives that have been trained for it (OpenHermes 2.5 has some function calling in its dataset and does a decent job with it, and the latest Claude does as well), but for now it just doesn't work great.

    That said, it's not that hard to fine-tune a model to understand function calling -- we do that as part of all of our OpenPipe fine tunes, and you can see the serialization method we use here: https://github.com/OpenPipe/OpenPipe/blob/main/app/src/model...

    It isn't particularly difficult, and I'd expect more general-purpose fine-tunes will start doing something similar as they get more mature!
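
    As a generic illustration of that kind of serialization (not OpenPipe's actual format, which is in the linked file), the idea is to render the available functions and the expected call into plain text that the model is fine-tuned on:

      // Generic illustration of serializing a function-calling exchange into
      // plain text for fine-tuning (not OpenPipe's actual format).
      const tools = [{
        name: 'get_weather',
        description: 'Get the current weather for a city',
        parameters: { city: 'string' }
      }];

      function serializeExample(userMessage, call) {
        return [
          `TOOLS: ${JSON.stringify(tools)}`,
          `USER: ${userMessage}`,
          `ASSISTANT: <tool_call>${JSON.stringify(call)}</tool_call>`
        ].join('\n');
      }

      // One training row - the model learns to emit the <tool_call> block itself.
      console.log(serializeExample(
        "What's the weather in Boston?",
        { name: 'get_weather', arguments: { city: 'Boston' } }
      ));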

  • OpenAI is too cheap to beat
    4 projects | news.ycombinator.com | 12 Oct 2023
    Eh, OpenAI is too cheap to beat at their own game.

    But there are a ton of use-cases where a 1 to 7B parameter fine-tuned model will be faster, cheaper and easier to deploy than a prompted or fine-tuned GPT-3.5-sized model.

    In fact, it might be a strong statement but I'd argue that most current use-cases for (non-fine-tuned) GPT-3.5 fit in that bucket.

    (Disclaimer: currently building https://openpipe.ai; making it trivial for product engineers to replace OpenAI prompts with their own fine-tuned models.)

  • Show HN: Fine-tune your own Llama 2 to replace GPT-3.5/4
    8 projects | news.ycombinator.com | 12 Sep 2023
    Yep! The linked notebook includes an example of exactly that (fine-tuning a 7b model to match the syntax of GPT-4 function call responses): https://github.com/OpenPipe/OpenPipe/blob/main/examples/clas...
  • Show HN: Automatically convert your GPT-3.5 prompt to Llama 2
    1 project | news.ycombinator.com | 9 Aug 2023
    Hey HN! I'm working on OpenPipe, an open source prompt workshop. I wanted to share a feature we recently released: prompt translations. Prompt translations allow you to quickly convert a prompt between GPT 3.5, Llama 2, and Claude 1/2 compatible formats. The common case would be if you’re using GPT 3.5 in production and are interested in evaluating a Claude or Llama 2 model for your use case. Here's a screen recording to show how it works in our UI: https://twitter.com/OpenPipeLab/status/1687875354311180288

    We've found that a lot of our users are interested in evaluating Claude or Llama 2, but aren't sure what changes they need to make to their prompts to get the best performance out of those models. Prompt translations make that easier.

    A bit more background: OpenPipe is an open-source prompt studio that lets you test your LLM prompts against scenarios from your real workloads. We currently support GPT 3.5/4, Claude 1/2, and Llama 2. The full codebase (including prompt translations) is available at https://github.com/OpenPipe/OpenPipe. If you'd prefer a managed experience, you can also sign up for our hosted version at https://openpipe.ai/.

    Happy to answer any questions!
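
    Much of a prompt translation like this is mechanical reformatting; for example, converting an OpenAI-style message array into the Llama 2 chat template looks roughly like this (simplified single-turn sketch, not OpenPipe's implementation):

      // Simplified single-turn translation from OpenAI-style chat messages to
      // the Llama 2 chat template (illustration only, not OpenPipe's code).
      function toLlama2Prompt(messages) {
        const system = (messages.find(m => m.role === 'system') || {}).content || '';
        const user = (messages.find(m => m.role === 'user') || {}).content || '';
        return `<s>[INST] <<SYS>>\n${system}\n<</SYS>>\n\n${user} [/INST]`;
      }

      console.log(toLlama2Prompt([
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: 'Summarize this support ticket.' }
      ]));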

  • Join the Prompt Engineering World Championships -- Kickoff August 14, $15,000 prize!
    1 project | /r/ChatGPT | 4 Aug 2023
    Star our Github repo at https://github.com/openpipe/openpipe
  • Patterns for Building LLM-Based Systems and Products
    6 projects | news.ycombinator.com | 1 Aug 2023
    This is fantastic! I found myself nodding along in many places. I've definitely found in practice that evals are critical to shipping LLM-based apps with confidence. I'm actually working on an open-source tool in this space: https://github.com/openpipe/openpipe. Would love any feedback on ways to make it more useful. :)

What are some alternatives?

When comparing next-token-prediction and OpenPipe you can also consider the following projects:

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.

agenta - The all-in-one LLM developer platform: prompt management, evaluation, human feedback, and deployment all in one place.

axolotl - Go ahead and axolotl questions

vllm - A high-throughput and memory-efficient inference and serving engine for LLMs

llama - Inference code for Llama models

Llama-2-Onnx

marsha - Marsha is a functional, higher-level, English-based programming language that gets compiled into tested Python software by an LLM

llama2-chatbot - LLaMA v2 Chatbot

gpt-llm-trainer

cog-llama-template - LLaMA Cog template

llama.cpp - LLM inference in C/C++

ragas - Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines