Model Cost Profile
Developer: meta-llama
Pricing updated Mar 11, 2026
Context Window: 1,048,576 tokens
Meta: Llama 4 Maverick offers a substantial context window of 1,048,576 tokens, making it suitable for applications requiring extensive data processing, such as document summarization and complex conversational agents. With an input price of $0.15 per 1 million tokens and an output price of $0.60 per 1 million tokens, teams can effectively budget for high-volume usage while optimizing their operational costs. This model is ideal for organizations that need to analyze large datasets or generate detailed content without sacrificing performance or incurring excessive expenses.
- Context Window: 1,048,576 tokens
- Input Price: $0.1500 per 1M prompt tokens
- Output Price: $0.6000 per 1M completion tokens
- Intelligence (MMLU, Massive Multitask Language Understanding): benchmark pending
| Usage Type | Price / 1M Tokens |
|---|---|
| Input (Prompt) | $0.1500 |
| Output (Completion) | $0.6000 |
Price History
Current Input / 1M: $0.1500
Current Output / 1M: $0.6000
Estimate monthly spend for Meta: Llama 4 Maverick based on your workload.
Estimated Monthly Cost: $11 (25M input + 12M output tokens)
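The estimate above follows directly from the listed per-token rates. A minimal sketch of that arithmetic, using the input and output prices from the table on this page (the function name and structure are illustrative, not part of any provider API):

```python
# Rates from the pricing table on this page, in USD per 1M tokens.
INPUT_PRICE_PER_1M = 0.15    # prompt tokens
OUTPUT_PRICE_PER_1M = 0.60   # completion tokens

def monthly_cost(input_tokens_m: float, output_tokens_m: float) -> float:
    """Estimated monthly spend in USD; token volumes are given in millions."""
    return (input_tokens_m * INPUT_PRICE_PER_1M
            + output_tokens_m * OUTPUT_PRICE_PER_1M)

# Example workload from the calculator: 25M input + 12M output tokens.
# 25 * $0.15 + 12 * $0.60 = $3.75 + $7.20 = $10.95, rounded to ~$11.
print(f"${monthly_cost(25, 12):.2f}")
```

Because output tokens cost 4x input tokens here, workloads that generate long completions (e.g. detailed content generation) are dominated by the output term, while summarization-style workloads with large prompts and short answers are dominated by the input term.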
Common pricing and benchmark questions for Meta: Llama 4 Maverick.
Meta: Llama 4 Maverick input pricing is $0.1500 per 1M tokens based on the latest synced provider data.
Meta: Llama 4 Maverick output pricing is $0.6000 per 1M tokens based on the latest synced provider data.
Meta: Llama 4 Maverick supports a context window of 1,048,576 tokens.
Use the comparison links on this page to open direct model-vs-model pricing and benchmark pages, then evaluate monthly spend projections for your workload.