Model Cost Profile

Inception: Mercury

Developer: Inception

Pricing updated Mar 11, 2026

Input rank: #154 · Output rank: #147

Live Pricing

Input: $0.2500

Output: $0.7500

Pricing via OpenRouter API · Last synced Mar 11, 2026

Inception: Mercury offers a 128,000-token context window, making it well suited to applications that require in-depth analysis of long documents or complex datasets. Teams using this API model can expect input costs of $0.25 per million tokens and output costs of $0.75 per million tokens, allowing budgets to scale predictably with usage. The model is particularly useful in fields such as legal, research, and content creation, where comprehension and generation of large text volumes are essential.
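At these rates, per-request cost is simple arithmetic: tokens times the per-million price, divided by one million. A minimal sketch (the function name and example token counts are illustrative, not part of any official SDK):

```python
# Listed Mercury rates, USD per 1M tokens.
INPUT_PRICE_PER_M = 0.25
OUTPUT_PRICE_PER_M = 0.75

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 10,000-token prompt with a 2,000-token completion.
print(request_cost(10_000, 2_000))  # 0.004
```

A 10,000-token prompt with a 2,000-token completion costs $0.0025 + $0.0015 = $0.004, so even long-context requests stay well under a cent at these rates.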

🔧 Tool Calling · 📋 Structured Output

Context Window

128,000

Tokens

Input Price / 1M

$0.2500

Prompt tokens

Output Price / 1M

$0.7500

Completion tokens

Intelligence (MMLU)

Benchmark Pending

Massive Multitask Language Understanding

Price History

Inception: Mercury Pricing Trend

[Pricing trend chart, Mar 7 to Mar 11: input ($0.2500/1M) and output ($0.7500/1M) prices both flat, 0.0% change.]

Current Input / 1M

$0.2500

Current Output / 1M

$0.7500

Cheaper Alternatives to Compare

Quick links for cost-reduction decisions before production rollout.

FAQ

Common pricing and benchmark questions for Inception: Mercury.

How much does Inception: Mercury cost per 1M input tokens?

Inception: Mercury input pricing is $0.2500 per 1M tokens based on the latest synced provider data.

How much does Inception: Mercury cost per 1M output tokens?

Inception: Mercury output pricing is $0.7500 per 1M tokens based on the latest synced provider data.

What context window does Inception: Mercury support?

Inception: Mercury supports a context window of 128,000 tokens.

How can I compare Inception: Mercury with cheaper alternatives?

Use the comparison links on this page to open direct model-vs-model pricing and benchmark pages, then evaluate monthly spend projections for your workload.
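One way to project monthly spend is to multiply average per-request cost by expected daily traffic. A minimal sketch, assuming Mercury's listed rates and hypothetical workload figures (requests per day and average token counts are placeholders you would replace with your own estimates):

```python
def monthly_spend(requests_per_day: int,
                  avg_input_tokens: int,
                  avg_output_tokens: int,
                  input_price_per_m: float = 0.25,
                  output_price_per_m: float = 0.75,
                  days: int = 30) -> float:
    """Projected USD spend over `days` days at the given per-1M-token rates."""
    per_request = (avg_input_tokens * input_price_per_m
                   + avg_output_tokens * output_price_per_m) / 1_000_000
    return requests_per_day * per_request * days

# Example: 5,000 requests/day, averaging 3,000 input and 500 output tokens.
print(monthly_spend(5_000, 3_000, 500))  # 168.75
```

Re-running the same projection with an alternative model's rates (by overriding the two price parameters) gives a like-for-like comparison before committing to a rollout.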