Context Window: 60,000 tokens
Model Cost Profile
Developer: meta-llama
Live Pricing (updated Mar 11, 2026)
Input: $0.0270 / 1M tokens
Output: $0.2000 / 1M tokens
Pricing via OpenRouter API · MMLU score via public benchmark data
Meta: Llama 3.2 1B Instruct is designed for applications requiring extensive context, with a capacity of 60,000 tokens, making it suitable for complex dialogue systems and large-scale document analysis. Teams using this model via API can expect input pricing of $0.027 per million tokens and output pricing of $0.20 per million tokens, allowing budget-friendly scaling in data-intensive projects. Its architecture supports diverse use cases, including customer support automation and content generation, providing flexibility across industries.
Input Price / 1M: $0.0270 (prompt tokens)
Output Price / 1M: $0.2000 (completion tokens)
Intelligence (MMLU): 20.0 (Massive Multitask Language Understanding)
Standardized evaluation scores for Meta: Llama 3.2 1B Instruct.
| Usage Type | Price / 1M Tokens |
|---|---|
| Input (Prompt) | $0.0270 |
| Output (Completion) | $0.2000 |
Price History
Current Input / 1M: $0.0270
Current Output / 1M: $0.2000
Estimate monthly spend for Meta: Llama 3.2 1B Instruct based on your workload.
Estimated Monthly Cost: $3.08 (25M input + 12M output tokens)
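The estimate above can be reproduced directly from the listed per-million-token rates. A minimal sketch, assuming the synced prices of $0.0270 (input) and $0.2000 (output) per 1M tokens; `Decimal` avoids floating-point rounding drift at these small price points:

```python
from decimal import Decimal

# Listed per-1M-token rates for Meta: Llama 3.2 1B Instruct
# (as of the last sync shown on this page -- verify against live pricing).
INPUT_PRICE_PER_M = Decimal("0.0270")   # USD per 1M prompt tokens
OUTPUT_PRICE_PER_M = Decimal("0.2000")  # USD per 1M completion tokens

def monthly_cost(input_tokens_m, output_tokens_m):
    """Estimate monthly spend in USD from token volumes given in millions."""
    return (Decimal(input_tokens_m) * INPUT_PRICE_PER_M
            + Decimal(output_tokens_m) * OUTPUT_PRICE_PER_M)

# Example workload from this page: 25M input + 12M output tokens.
cost = monthly_cost(25, 12)
print(f"${cost:.2f}")  # prints "$3.08" (0.675 + 2.40 = 3.075, rounded)
```

Swapping in your own monthly token volumes gives a quick pre-rollout spend projection; for live rates, re-sync the prices rather than hard-coding them.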
Quick links for comparing costs before production rollout.
Common pricing and benchmark questions for Meta: Llama 3.2 1B Instruct.
Meta: Llama 3.2 1B Instruct input pricing is $0.0270 per 1M tokens based on the latest synced provider data.
Meta: Llama 3.2 1B Instruct output pricing is $0.2000 per 1M tokens based on the latest synced provider data.
Meta: Llama 3.2 1B Instruct supports a context window of 60,000 tokens.
Use the comparison links on this page to open direct model-vs-model pricing and benchmark pages, then evaluate monthly spend projections for your workload.