Model Cost Profile

Anthropic: Claude 3.5 Haiku

Developer: anthropic

Pricing updated Mar 10, 2026

Input rank: #236 · Output rank: #256

Live Pricing

Input: $0.80

Output: $4.00

Pricing via OpenRouter API · Last synced Mar 10, 2026

Anthropic's Claude 3.5 Haiku model offers a substantial context window of 200,000 tokens, making it suitable for applications requiring extensive text analysis, such as legal document review or large-scale content generation. With an input price of $0.80 per million tokens and an output price of $4.00 per million tokens, teams can effectively budget for projects that demand high-volume processing and detailed responses. This pricing structure allows organizations to scale their usage according to specific needs, optimizing costs while leveraging advanced AI capabilities.
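As a rough sketch of how these rates translate into per-request spend, the snippet below computes the cost of a single call at the prices listed on this page. The token counts are illustrative example inputs, not figures from the page.

```python
# Per-request cost sketch at the rates listed on this page.
INPUT_PRICE_PER_M = 0.80   # USD per 1M input (prompt) tokens
OUTPUT_PRICE_PER_M = 4.00  # USD per 1M output (completion) tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed per-million-token rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 10,000-token prompt producing a 1,000-token completion.
print(round(request_cost(10_000, 1_000), 4))  # 0.008 input + 0.004 output = 0.012
```

At these rates, input tokens dominate long-prompt workloads while output tokens dominate generation-heavy ones, which is why both figures matter when budgeting.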

๐Ÿ‘ Vision๐Ÿ”ง Tool Calling

Context Window

200,000

Tokens

Input Price / 1M

$0.80

Prompt tokens

Output Price / 1M

$4.00

Completion tokens

Intelligence (MMLU)

Benchmark Pending

Massive Multitask Language Understanding

Price History

Anthropic: Claude 3.5 Haiku Pricing Trend

Input / 1M tokens: 0.0% change · Output / 1M tokens: 0.0% change (Mar 7 – Mar 10)

[Chart: input and output price per 1M tokens, flat at $0.80 and $4.00, Mar 7 – Mar 10]

Current Input / 1M

$0.80

Current Output / 1M

$4.00

Cheaper Alternatives to Compare

Quick links for comparing lower-cost options before a production rollout.

FAQ

Common pricing and benchmark questions for Anthropic: Claude 3.5 Haiku.

How much does Anthropic: Claude 3.5 Haiku cost per 1M input tokens?

Anthropic: Claude 3.5 Haiku input pricing is $0.80 per 1M tokens based on the latest synced provider data.

How much does Anthropic: Claude 3.5 Haiku cost per 1M output tokens?

Anthropic: Claude 3.5 Haiku output pricing is $4.00 per 1M tokens based on the latest synced provider data.

What context window does Anthropic: Claude 3.5 Haiku support?

Anthropic: Claude 3.5 Haiku supports a context window of 200,000 tokens.
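As a rough pre-flight check against the 200,000-token window, the sketch below estimates token count from character length using a common ~4-characters-per-token heuristic. The heuristic and the reserved-output budget are assumptions; an actual tokenizer gives exact counts.

```python
CONTEXT_WINDOW = 200_000  # tokens, per this page
CHARS_PER_TOKEN = 4       # rough heuristic (assumption); use a real tokenizer for accuracy

def fits_in_context(text: str, reserved_for_output: int = 4_000) -> bool:
    """Rough check that a prompt plus a reserved completion budget fits the window."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens + reserved_for_output <= CONTEXT_WINDOW

# ~400,000 characters is roughly 100k tokens, well within the window.
print(fits_in_context("x" * 400_000))  # True
```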

How can I compare Anthropic: Claude 3.5 Haiku with cheaper alternatives?

Use the comparison links on this page to open direct model-vs-model pricing and benchmark pages, then evaluate monthly spend projections for your workload.
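The monthly spend projection mentioned above can be sketched as follows. The request volume, average token counts, and the alternative model's prices are hypothetical placeholders for illustration; only the Claude 3.5 Haiku rates come from this page.

```python
def monthly_spend(requests_per_day: int, avg_input_tokens: int,
                  avg_output_tokens: int, input_price_per_m: float,
                  output_price_per_m: float, days: int = 30) -> float:
    """Project monthly USD spend from per-request token averages and per-1M prices."""
    per_request = (avg_input_tokens / 1_000_000) * input_price_per_m \
                + (avg_output_tokens / 1_000_000) * output_price_per_m
    return per_request * requests_per_day * days

# Claude 3.5 Haiku at the rates on this page ($0.80 in / $4.00 out),
# assuming 5,000 requests/day averaging 2,000 input and 500 output tokens.
haiku = monthly_spend(5_000, 2_000, 500, 0.80, 4.00)

# Hypothetical cheaper alternative at $0.10 in / $0.40 out (illustrative prices only).
alt = monthly_spend(5_000, 2_000, 500, 0.10, 0.40)

print(round(haiku, 2))  # 540.0
print(round(alt, 2))    # 60.0
```

Running the same workload profile through each candidate's prices makes the monthly cost gap explicit before committing to a rollout.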