tokencost VS litellm

Compare tokencost vs litellm and see what their differences are.

tokencost

Easy token price estimates for 400+ LLMs (by AgentOps-AI)

litellm

Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs) (by BerriAI)
                 tokencost        litellm
Mentions         6                30
Stars            1,170            10,066
Growth           81.8%            7.0%
Activity         9.1              10.0
Latest commit    7 days ago       5 days ago
Language         Python           Python
License          MIT License      GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

tokencost

Posts with mentions or reviews of tokencost. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-06-23.
  • Why is Everyone into Indie Development? - FAV0 Weekly Issue 004
    5 projects | dev.to | 23 Jun 2024
    Library for Estimating Token Costs
  • Show HN: Token price calculator for 400+ LLMs
    12 projects | news.ycombinator.com | 17 Jun 2024
    I really appreciate your engagement here and think it has great value on a personal level, but the length and claims tend to hide two very obvious, straightforward things:

    1. They only support GPT3.5 and GPT4.0. Note here: [1], and that gpt-4o would get swallowed into gpt-4-0613.

    2. This will lead to massive, significant, embarrassingly large errors in calculations. Tokenizers are not mostly the same to within 10% error.

    1. Responding to, e.g., "It's not just C100K though. It is for a few models [0]":

    The link is to Tiktoken, OpenAI's tokenization library. There are literally more than GPT3.5 and GPT4.0 there, but they're just OpenAI's models, no one else's, none of the others in the long list in their documentation, and certainly not 400.

    Every single one of them is for a deprecated model, not served anymore, except c100k and o200k. As described above and shown in [1], their own code kneecaps the o200k and will use c100k.

    2. Let me know what you'd want to see if you're curious about the 30%+ error claim. I don't want to go to the trouble of guessing at a test suite, then running one, only for it to fail to convince you to revise a prior that there's only +/- 10% difference between arbitrary tokenizers. Without your input, I will almost assuredly choose one that isn't comprehensive enough.

    For context, I run about 20 unit tests, for each of the big 5 providers, with the same prompts, to capture their input and output token counts to make sure I'm billing accurately.

    Just to save you time, you won't be able to talk me down to "eh, good enough!" --- It *matters*; if it didn't, they'd be much more up front about the truth. Every single sign around the library is absolutely damning, and triangulates somewhere between lying and naivete: from the marketing claiming 400+, to the complete lack of note of these *extreme* caveats in any documentation, the only thing being what I understand is a warning log.

    [1] https://github.com/AgentOps-AI/tokencost/blob/e1d52dbaa3ada2...

  • Show HN: Easy token counting and price calculation for LLMs
    1 project | news.ycombinator.com | 26 Dec 2023
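The thread above boils down to a simple mechanic: token price estimation is a per-token price lookup multiplied by a token count, and the estimate is only as good as the tokenizer producing that count. A self-contained sketch of both halves (the prices below are hypothetical placeholders, not tokencost's actual price table):

```python
# Hypothetical per-1M-token prices in USD. Real prices vary by provider
# and change over time; this is NOT tokencost's actual data.
PRICES_PER_1M = {
    "gpt-4o":        {"input": 5.00, "output": 15.00},
    "claude-3-opus": {"input": 15.00, "output": 75.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost from token counts and a static price table."""
    p = PRICES_PER_1M[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

def count_error_pct(estimated_tokens: int, billed_tokens: int) -> float:
    """Percent error of an estimated token count vs. the billed count --
    the figure the reviewer above argues can exceed 30% when a prompt is
    counted with the wrong tokenizer."""
    return abs(estimated_tokens - billed_tokens) / billed_tokens * 100

# 1,000 input + 500 output tokens on the hypothetical gpt-4o prices:
# 1_000 * 5.00/1e6 + 500 * 15.00/1e6 = 0.005 + 0.0075 = 0.0125
cost = estimate_cost("gpt-4o", input_tokens=1_000, output_tokens=500)
```

tokencost's own helpers (`calculate_prompt_cost` and `calculate_completion_cost`, per its README at the time of writing) wrap this same lookup; the reviewer's objection is not to the arithmetic but to the token counts feeding it when a non-OpenAI model's text is counted with an OpenAI tokenizer.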

litellm

Posts with mentions or reviews of litellm. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-06-17.

What are some alternatives?

When comparing tokencost and litellm you can also consider the following projects:

openai-messages-token-helper - A utility library for dealing with token counting for messages sent to an LLM (currently OpenAI models only)

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.

llm_utils - Utilities for Llama.cpp, Openai, Anthropic, Mistral-rs.

FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

anthropic-tokenizer - Approximation of the Claude 3 tokenizer by inspecting generation stream

LocalAI - The free, open-source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. It lets you generate text, audio, video, and images, with voice-cloning capabilities.

dify - Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.

text-generation-webui - A Gradio web UI for Large Language Models.

libsql - libSQL is a fork of SQLite that is both open source and open to contributions.

FLiPStackWeekly - FLaNK AI Weekly covering Apache NiFi, Apache Flink, Apache Kafka, Apache Spark, Apache Iceberg, Apache Ozone, Apache Pulsar, and more...

promptfoo - Test your prompts, agents, and RAGs. Use LLM evals to improve your app's quality and catch problems. Compare performance of GPT, Claude, Gemini, Llama, and more. Simple declarative configs with command line and CI/CD integration.

ollama-webui - ChatGPT-Style WebUI for LLMs (Formerly Ollama WebUI) [Moved to: https://github.com/open-webui/open-webui]

