Model Cost Profile

Mistral Large 2411

Developer: mistralai

Pricing updated Mar 11, 2026

Input rank: #279 · Output rank: #263

Live Pricing

Input: $2.00

Output: $6.00

Pricing via OpenRouter API · Last synced Mar 11, 2026
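For teams that want to re-check these rates programmatically, the sketch below queries the public OpenRouter models listing and scales the per-token prices to per-1M figures. The model slug "mistralai/mistral-large-2411" and the exact response fields are assumptions based on OpenRouter's models endpoint; verify them against the live API before relying on the output.

```python
# Minimal sketch: look up Mistral Large 2411 pricing from the OpenRouter
# models listing. The slug "mistralai/mistral-large-2411" is an assumption;
# adjust it to whatever identifier the API actually exposes.
import requests

resp = requests.get("https://openrouter.ai/api/v1/models", timeout=30)
resp.raise_for_status()

for model in resp.json().get("data", []):
    if model.get("id") == "mistralai/mistral-large-2411":
        pricing = model.get("pricing", {})
        # OpenRouter reports USD per single token; scale to per-1M tokens.
        print("Input / 1M:  $", float(pricing["prompt"]) * 1_000_000)
        print("Output / 1M: $", float(pricing["completion"]) * 1_000_000)
        break
```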

Mistral Large 2411, developed by mistralai, offers a substantial context window of 131,072 tokens, making it suitable for applications requiring extensive text analysis or multi-turn conversations. With an input price of $2.00 per million tokens and an output price of $6.00 per million tokens, teams can effectively budget for high-volume processing tasks such as document summarization or customer support automation. This model's capabilities are particularly beneficial for enterprises needing to manage large datasets or complex interactions while controlling operational costs.
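As a rough illustration of that budgeting, the sketch below prices a hypothetical document-summarization workload at the listed rates; the token counts and document volume are illustrative assumptions, not measurements.

```python
# Rough budgeting sketch at the listed rates: $2.00 per 1M input tokens
# and $6.00 per 1M output tokens. Workload numbers below are assumptions.
INPUT_PER_1M = 2.00
OUTPUT_PER_1M = 6.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed per-1M-token rates."""
    return (input_tokens / 1_000_000) * INPUT_PER_1M + \
           (output_tokens / 1_000_000) * OUTPUT_PER_1M

# Example: summarizing a 6,000-token document into a 500-token summary.
per_doc = request_cost(6_000, 500)
print(f"Per document:        ${per_doc:.4f}")            # $0.0150
print(f"Per 10,000 documents: ${per_doc * 10_000:.2f}")  # $150.00
```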

🔧 Tool Calling · 📋 Structured Output

Context Window: 131,072 tokens

Input Price / 1M: $2.00 (prompt tokens)

Output Price / 1M: $6.00 (completion tokens)

Intelligence (MMLU): Benchmark pending (Massive Multitask Language Understanding)

Price History

Mistral Large 2411 Pricing Trend

Input / 1M tokens: 0.0% change · Output / 1M tokens: 0.0% change over Mar 7 to Mar 11.

Current Input / 1M: $2.00

Current Output / 1M: $6.00

Cheaper Alternatives to Compare

Quick links for evaluating lower-cost options before production rollout.

FAQ

Common pricing and benchmark questions for Mistral Large 2411.

How much does Mistral Large 2411 cost per 1M input tokens?

Mistral Large 2411 input pricing is $2.00 per 1M tokens based on the latest synced provider data.

How much does Mistral Large 2411 cost per 1M output tokens?

Mistral Large 2411 output pricing is $6.00 per 1M tokens based on the latest synced provider data.

What context window does Mistral Large 2411 support?

Mistral Large 2411 supports a context window of 131,072 tokens.

How can I compare Mistral Large 2411 with cheaper alternatives?

Use the comparison links on this page to open direct model-vs-model pricing and benchmark pages, then evaluate monthly spend projections for your workload.
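As a rough illustration of such a projection, the sketch below compares 30-day spend for Mistral Large 2411 against a cheaper alternative; the workload size and the alternative's rates are placeholder assumptions, to be replaced with real figures from the comparison pages.

```python
# Sketch of a monthly spend projection. The workload and the alternative
# model's rates are placeholder assumptions for illustration only.
def monthly_spend(requests_per_day, in_tok, out_tok, in_price, out_price):
    """Project 30-day spend for a workload at per-1M-token prices."""
    daily = requests_per_day * (
        in_tok / 1_000_000 * in_price + out_tok / 1_000_000 * out_price
    )
    return daily * 30

workload = dict(requests_per_day=5_000, in_tok=2_000, out_tok=400)

large_2411 = monthly_spend(**workload, in_price=2.00, out_price=6.00)
alternative = monthly_spend(**workload, in_price=0.50, out_price=1.50)  # hypothetical rates

print(f"Mistral Large 2411:       ${large_2411:,.2f}/month")
print(f"Hypothetical alternative: ${alternative:,.2f}/month")
```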