350+ models, one view
Prompt price, completion price, and cost-per-million tokens normalized across all providers.
FinOps Use Case
Live pricing for 350+ models from OpenAI, Anthropic, Google, Meta, Mistral, xAI, and more — updated daily.
The Problem
Provider pricing pages change without notice, use different token conventions, and never show you the quality-adjusted cost. You end up comparing apples to oranges.
Comparing LLM pricing by hand is error-prone and time-consuming. TokenPrice.dev aggregates live prompt and completion pricing from every major provider so you can compare on a level playing field, with benchmark quality scores alongside every price.
What You Get
Normalized pricing: prompt price, completion price, and cost per million tokens on a single scale across every provider.
Sort by cost-per-quality-point, not just price, so cheaper isn't automatically better.
Price cuts and hikes are tracked with every daily sync. Get notified when a model you rely on changes price.
From any model comparison, launch a full decision brief to quantify what switching would actually cost your workload.
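The cost-per-quality-point sort above comes down to simple arithmetic: blend prompt and completion prices into one $/1M-token figure, then divide by a benchmark score. A minimal sketch; the model names, prices, scores, and the 3:1 prompt-to-completion mix are illustrative assumptions, not TokenPrice.dev's actual formula:

```python
# Rank models by cost per quality point rather than raw price.
# All figures are hypothetical placeholders, not live pricing data.
models = [
    # (name, prompt $/1M tokens, completion $/1M tokens, benchmark score 0-100)
    ("model-a", 3.00, 15.00, 88),
    ("model-b", 0.25, 1.25, 72),
]

PROMPT_WEIGHT = 0.75  # assumed 3:1 prompt-to-completion token mix

def cost_per_quality_point(prompt_price, completion_price, score):
    # Blended $/1M tokens, divided by benchmark score.
    blended = PROMPT_WEIGHT * prompt_price + (1 - PROMPT_WEIGHT) * completion_price
    return blended / score

ranked = sorted(models, key=lambda m: cost_per_quality_point(m[1], m[2], m[3]))
for name, p, c, s in ranked:
    print(f"{name}: ${cost_per_quality_point(p, c, s):.4f} per quality point")
```

Note how the cheaper model can still lose this sort if its benchmark score is low enough; that is the point of quality-adjusted pricing.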
How It Works
Browse the model directory
Filter by provider, benchmark score, price range, or capability (tool calling, vision, context window).
Compare side-by-side
Pick any two models and see a detailed pricing, performance, and capability comparison.
Build a decision brief
Apply your workload profile to turn the comparison into an actionable migration decision.
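At its core, a decision brief is workload arithmetic: multiply your monthly token volumes by each model's per-million rates and compare the totals. A hedged sketch with made-up volumes and prices, not TokenPrice.dev's actual brief logic:

```python
# Estimate monthly spend for one workload on two candidate models.
# Volumes and prices below are hypothetical placeholders.
workload = {
    "prompt_tokens_per_month": 400_000_000,
    "completion_tokens_per_month": 80_000_000,
}

def monthly_cost(prompt_price_per_m, completion_price_per_m, wl):
    # Prices are $/1M tokens, so divide token volumes by 1M.
    return (wl["prompt_tokens_per_month"] / 1_000_000 * prompt_price_per_m
            + wl["completion_tokens_per_month"] / 1_000_000 * completion_price_per_m)

current = monthly_cost(3.00, 15.00, workload)    # incumbent model
candidate = monthly_cost(0.25, 1.25, workload)   # cheaper alternative
savings = current - candidate
print(f"current: ${current:,.2f}/mo, candidate: ${candidate:,.2f}/mo, "
      f"savings: ${savings:,.2f}/mo")
```

A real brief would also weigh quality deltas and migration effort, but the dollar side of the decision is this straightforward.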
FAQ
How fresh is the pricing data?
Pricing data is synchronized daily from OpenRouter and provider sources. The last sync timestamp is shown on every model page.
Can I compare capabilities, not just price?
Yes. Each model listing includes context window size, tool-calling support, vision capability, and benchmark scores where available.
Use Cases
All FinOps use cases →
Model Data
Explore 350+ models →
Price Changes
Track price volatility →