Model Cost Profile

Meta: Llama 3.2 1B Instruct

Developer: meta-llama

Pricing updated Mar 11, 2026

Input rank: #34 · Output rank: #70

Live Pricing

Input: $0.0270

Output: $0.2000

Pricing via OpenRouter API · Last synced Mar 11, 2026 · MMLU score via public benchmark data

Meta: Llama 3.2 1B Instruct targets applications that need extended context, offering a 60,000-token window suited to complex dialogue systems and large-scale document analysis. Teams using this model via API pay $0.027 per million input tokens and $0.20 per million output tokens, keeping data-intensive projects budget-friendly at scale. Its architecture supports diverse use cases, including customer support automation and content generation, giving teams flexibility across industries.
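As a rough illustration of these rates, per-request cost follows directly from the per-million-token prices above. A minimal sketch (the token counts in the example are illustrative, not from this page):

```python
# Prices listed on this page for Meta: Llama 3.2 1B Instruct (USD per 1M tokens).
INPUT_PRICE_PER_M = 0.027
OUTPUT_PRICE_PER_M = 0.20

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at the listed prices."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 50k-token prompt with a 1k-token completion.
print(f"${request_cost(50_000, 1_000):.6f}")  # → $0.001550
```

Even a near-context-limit prompt stays well under a cent per request at these rates, which is what makes the model attractive for high-volume workloads.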

Context Window

60,000

Tokens

Input Price / 1M

$0.0270

Prompt tokens

Output Price / 1M

$0.2000

Completion tokens

Intelligence (MMLU)

20.0

Massive Multitask Language Understanding

Benchmark Scores

Standardized evaluation scores for Meta: Llama 3.2 1B Instruct.

Benchmark | Score | Rank        | Source
GPQA      | 19.6  | #117 of 118 | artificial_analysis
MMLU      | 20.0  | #121 of 121 | artificial_analysis

Price History

Meta: Llama 3.2 1B Instruct Pricing Trend

Input / 1M tokens: 0.0% change · Output / 1M tokens: 0.0% change
[Chart: Mar 7–Mar 11, input price steady at $0.0270 and output price steady at $0.2000]

Current Input / 1M

$0.0270

Current Output / 1M

$0.2000

Cheaper Alternatives to Compare

Quick links for cost-reduction decisions before production rollout.

FAQ

Common pricing and benchmark questions for Meta: Llama 3.2 1B Instruct.

How much does Meta: Llama 3.2 1B Instruct cost per 1M input tokens?

Meta: Llama 3.2 1B Instruct input pricing is $0.0270 per 1M tokens based on the latest synced provider data.

How much does Meta: Llama 3.2 1B Instruct cost per 1M output tokens?

Meta: Llama 3.2 1B Instruct output pricing is $0.2000 per 1M tokens based on the latest synced provider data.

What context window does Meta: Llama 3.2 1B Instruct support?

Meta: Llama 3.2 1B Instruct supports a context window of 60,000 tokens.

How can I compare Meta: Llama 3.2 1B Instruct with cheaper alternatives?

Use the comparison links on this page to open direct model-vs-model pricing and benchmark pages, then evaluate monthly spend projections for your workload.
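A monthly spend projection can be sketched as below. The request volume, per-request token counts, and the alternative model's prices are illustrative placeholders, not figures from this page; substitute the prices from the comparison pages you open:

```python
def monthly_spend(requests_per_day: int, in_tokens: int, out_tokens: int,
                  in_price: float, out_price: float, days: int = 30) -> float:
    """USD per month, given per-request token counts and prices per 1M tokens."""
    per_request = (in_tokens * in_price + out_tokens * out_price) / 1_000_000
    return per_request * requests_per_day * days

# Meta: Llama 3.2 1B Instruct at the prices listed on this page.
llama_cost = monthly_spend(10_000, 2_000, 500, in_price=0.027, out_price=0.20)

# Hypothetical cheaper alternative (placeholder prices for comparison only).
alt_cost = monthly_spend(10_000, 2_000, 500, in_price=0.015, out_price=0.10)

print(f"Llama 3.2 1B: ${llama_cost:.2f}/mo vs alternative: ${alt_cost:.2f}/mo")
```

Because output tokens cost roughly 7x input tokens here, workloads with long completions benefit most from comparing output pricing first.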