Hello Folks,
Welcome and thank you for subscribing to the Price Per Token newsletter! My name is Alex and I created the Price Per Token website.
Here, I’ll be sending updates on the latest and greatest LLM APIs and Price Per Token’s sites and services.
This brings us to our first announcement!
After launching the site, I had a few requests come in along these lines: help people pick the most cost-efficient model for the tasks within their applications.
The goal of the API would be to address exactly that concern.
Arvid went on to articulate a specific example of what this could look like:
@BlakeFolgado yeah, or something like "I have ~54000 tokens worth of a prompt on average, I want reasoning and average-to-high quality and expect up to 5000 token per response, what would that cost when run for 24h", encoded as a request object of some sort :)
— Arvid Kahl (@arvidkahl), 3:02 PM · Jul 25, 2025
This made a lot of sense to me and I’ve begun to plan out what the API will look like.
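To make the idea concrete, here is a rough sketch of what such a cost-estimate request and calculation might look like. Nothing here is final: the field names, the estimateDailyCost helper, the request volume, and the example prices are placeholders I'm using for illustration, not the actual API.

```ts
// Hypothetical request object for a cost-estimate endpoint.
// All field names and units are placeholders, not the final API.
interface CostEstimateRequest {
  avgPromptTokens: number;      // e.g. ~54,000 tokens per prompt on average
  maxResponseTokens: number;    // e.g. up to 5,000 tokens per response
  requestsPerDay: number;       // how many calls you expect over 24h
  needsReasoning: boolean;      // only consider reasoning-capable models
  minQuality: "average" | "high";
}

// Example per-million-token prices; real numbers would come from Price Per Token.
interface ModelPricing {
  model: string;
  inputPerMillion: number;   // USD per 1M input tokens
  outputPerMillion: number;  // USD per 1M output tokens
}

// Worst-case daily cost: assumes every response uses the full token budget.
function estimateDailyCost(req: CostEstimateRequest, price: ModelPricing): number {
  const inputCost = (req.avgPromptTokens / 1_000_000) * price.inputPerMillion;
  const outputCost = (req.maxResponseTokens / 1_000_000) * price.outputPerMillion;
  return (inputCost + outputCost) * req.requestsPerDay;
}

// Arvid's example: ~54k prompt tokens, up to 5k response tokens, run for 24h.
const example: CostEstimateRequest = {
  avgPromptTokens: 54_000,
  maxResponseTokens: 5_000,
  requestsPerDay: 1_000,          // placeholder volume
  needsReasoning: true,
  minQuality: "high",
};

const placeholderPricing: ModelPricing = {
  model: "example-model",
  inputPerMillion: 3.0,   // illustrative price only
  outputPerMillion: 15.0, // illustrative price only
};

console.log(`Estimated daily cost: $${estimateDailyCost(example, placeholderPricing).toFixed(2)}`);
```

The real API would presumably return this kind of estimate for every model in the catalog so you could compare them side by side, but that shape is exactly what I'd like feedback on from beta testers.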
If you are interested in being a beta tester, please reply to this email or feel free to contact me on X!
The link to the post above is here; feel free to join the conversation.
Looking forward to hearing from you,
Alex