Tokencost Alternatives
Similar projects and alternatives to tokencost
- litellm: Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
- openai-messages-token-helper: A utility library for dealing with token counting for messages sent to an LLM (currently OpenAI models only)
tokencost discussion
tokencost reviews and mentions
- Why is Everyone into Indie Development? - FAV0 Weekly Issue 004 (Library for Estimating Token Costs)
- Show HN: Token price calculator for 400+ LLMs
I really appreciate your engagement here and think it has great value on a personal level, but the length and claims tend to hide two very obvious, straightforward things:
1. They only support GPT-3.5 and GPT-4. See [1]: gpt-4o gets swallowed into gpt-4-0613.
2. This leads to massive, embarrassingly large errors in calculations. Tokenizers are not mostly the same, to within 10% error.
1. In response to, e.g., "It's not just C100K though. It is for a few models [0]":
The link is to tiktoken, OpenAI's tokenization library. There are indeed more models than GPT-3.5 and GPT-4 there, but they are all OpenAI's models, no one else's: none of the others in the long list in their documentation, and certainly not 400. Every single encoding there is for a deprecated model that is no longer served, except cl100k and o200k. As described above and shown in [1], their own code kneecaps o200k and will use cl100k.
2. If you're curious about the 30%+ error claim, let me know what you'd want to see. I don't want to go to the trouble of guessing at a test suite and running it, only to leave you unconvinced that you need to revise the prior that there's only +/- 10% difference between arbitrary tokenizers; without your input I will almost assuredly choose one that isn't comprehensive enough.
For context, I run about 20 unit tests, for each of the big 5 providers, with the same prompts, to capture their input and output token counts to make sure I'm billing accurately.
Just to save you time, you won't be able to talk me down to "eh, good enough!" --- It *matters*; if it didn't, they'd be much more up front about the truth. Every single sign around the library is absolutely damning, and triangulates somewhere between lying and naivety: from the marketing claiming 400+ models, to the complete lack of any note of these *extreme* caveats in the documentation, the only thing being, as I understand it, a warning log.
[1] https://github.com/AgentOps-AI/tokencost/blob/e1d52dbaa3ada2...
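The failure mode described in the comment above is mechanical: map a model name to the wrong encoding, and the token count, and therefore the cost estimate, inherits the wrong tokenizer's error. A minimal sketch of the arithmetic a cost calculator like this performs, and how a tokenizer miscount propagates linearly into the dollar figure (model names and prices below are hypothetical placeholders, not tokencost's actual tables):

```python
# Sketch of per-request cost estimation from token counts.
# Model names and per-token prices are illustrative assumptions only.

PRICES_PER_TOKEN = {
    # model: (input_price, output_price) in USD per token
    "example-model-a": (5e-06, 1.5e-05),
    "example-model-b": (5e-07, 1.5e-06),
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate request cost as tokens times per-token price for each direction."""
    input_price, output_price = PRICES_PER_TOKEN[model]
    return prompt_tokens * input_price + completion_tokens * output_price

# A 10% tokenizer overcount propagates into a 10% cost overestimate:
exact = estimate_cost("example-model-a", 1000, 500)        # 0.0125
off_by_ten_pct = estimate_cost("example-model-a", 1100, 550)  # 0.01375
```

Because the estimate is linear in the token counts, any systematic tokenizer error (the 30%+ figure debated above) shows up one-for-one in the billing estimate.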
- Show HN: Easy token counting and price calculation for LLMs
Stats
AgentOps-AI/tokencost is an open source project licensed under the MIT License, an OSI-approved license.
The primary programming language of tokencost is Python.