Model Cost Profile

OpenAI: GPT-4.1

Developer: openai

Pricing updated Mar 11, 2026

Input rank: #282 · Output rank: #271

Live Pricing

Input: $2.00

Output: $8.00

Pricing via OpenRouter API · Last synced Mar 11, 2026

OpenAI's GPT-4.1 offers a context window of 1,047,576 tokens, making it suitable for extensive document analysis, long-form content generation, and complex conversational AI. Teams using this model via API should budget for $2.00 per million input tokens and $8.00 per million output tokens, costs that add up quickly on high-volume projects. It is aimed at enterprises that need advanced natural language processing while keeping large-scale spend under control.
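To make the per-million-token rates concrete, here is a minimal sketch of a per-request cost calculation using the listed prices ($2.00 input, $8.00 output per 1M tokens); the function name and example token counts are illustrative, not from any SDK.

```python
# Per-token rates derived from the listed per-1M prices.
INPUT_RATE = 2.00 / 1_000_000   # USD per input (prompt) token
OUTPUT_RATE = 8.00 / 1_000_000  # USD per output (completion) token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one API call at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 10,000-token prompt with a 1,000-token completion.
print(f"${request_cost(10_000, 1_000):.4f}")  # $0.0280
```

At these rates, output tokens cost 4x input tokens, so completion length often dominates the bill for generation-heavy workloads.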

๐Ÿ‘ Vision๐Ÿ”ง Tool Calling๐Ÿ“‹ Structured Output

Context Window

1,047,576

Tokens

Input Price / 1M

$2.00

Prompt tokens

Output Price / 1M

$8.00

Completion tokens

Intelligence (MMLU)

Benchmark Pending

Massive Multitask Language Understanding

Price History

OpenAI: GPT-4.1 Pricing Trend

Input / 1M tokens: 0.0% change · Output / 1M tokens: 0.0% change (Mar 7 to Mar 11: prices unchanged)

Current Input / 1M

$2.00

Current Output / 1M

$8.00

Cheaper Alternatives to Compare

Quick links for evaluating lower-cost options before production rollout.

FAQ

Common pricing and benchmark questions for OpenAI: GPT-4.1.

How much does OpenAI: GPT-4.1 cost per 1M input tokens?

OpenAI: GPT-4.1 input pricing is $2.00 per 1M tokens based on the latest synced provider data.

How much does OpenAI: GPT-4.1 cost per 1M output tokens?

OpenAI: GPT-4.1 output pricing is $8.00 per 1M tokens based on the latest synced provider data.

What context window does OpenAI: GPT-4.1 support?

OpenAI: GPT-4.1 supports a context window of 1,047,576 tokens.
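A quick way to sanity-check whether a prompt fits the 1,047,576-token window is a character-based estimate. The chars/4 heuristic below is an assumption (English text averages roughly four characters per token); use a real tokenizer for exact counts.

```python
CONTEXT_WINDOW = 1_047_576  # GPT-4.1 context window from this page

def fits_in_context(text: str, reserved_for_output: int = 4_096) -> bool:
    """Rough check: estimate tokens as len(text) // 4 (a crude heuristic,
    not an exact count) and reserve headroom for the completion."""
    estimated_tokens = len(text) // 4
    return estimated_tokens + reserved_for_output <= CONTEXT_WINDOW

# ~600,000 characters -> ~150,000 estimated tokens: well inside the window.
print(fits_in_context("hello " * 100_000))  # True
```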

How can I compare OpenAI: GPT-4.1 with cheaper alternatives?

Use the comparison links on this page to open direct model-vs-model pricing and benchmark pages, then evaluate monthly spend projections for your workload.
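The monthly spend projection mentioned above can be sketched as a simple function of request volume and average token counts. GPT-4.1's rates come from this page; the "cheaper alternative" entry is a placeholder, so substitute real figures from the model's own pricing page.

```python
def monthly_spend(requests_per_day: int, in_tokens: int, out_tokens: int,
                  in_rate: float, out_rate: float, days: int = 30) -> float:
    """Project USD spend per month; in_rate/out_rate are USD per 1M tokens."""
    per_request = (in_tokens * in_rate + out_tokens * out_rate) / 1_000_000
    return per_request * requests_per_day * days

models = {
    "GPT-4.1": (2.00, 8.00),               # rates from this page
    "hypothetical-cheaper": (0.50, 1.50),  # placeholder rates, not real data
}
# Workload assumption: 5,000 requests/day, 2,000 input + 500 output tokens each.
for name, (in_rate, out_rate) in models.items():
    spend = monthly_spend(5_000, 2_000, 500, in_rate, out_rate)
    print(f"{name}: ${spend:,.2f}/month")
```

Running the sketch shows GPT-4.1 at $1,200.00/month for that workload, which makes the break-even question against any cheaper model a one-line substitution of rates.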