Model Cost Profile

Morph: Morph V3 Large

Developer: morph

Pricing updated Mar 11, 2026

Input rank: #240 · Output rank: #210

Live Pricing

Input: $0.90

Output: $1.90

Pricing via OpenRouter API · Last synced Mar 11, 2026

Morph V3 Large, developed by morph, offers an extensive context window of 262,144 tokens, making it ideal for applications requiring deep contextual understanding, such as legal document analysis and long-form content generation. With an input price of $0.90 per million tokens and an output price of $1.90 per million tokens, teams can effectively budget for large-scale projects while managing costs associated with high-volume data processing. This model's capabilities are particularly beneficial for enterprises handling complex datasets or requiring sophisticated conversational AI solutions.
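The per-request arithmetic implied by these prices can be sketched as follows; the function name is illustrative, and the rates are simply the input and output prices listed on this page:

```python
# Sketch: estimate the USD cost of a single Morph V3 Large request
# from the listed prices (USD per 1M tokens, per the latest sync).
INPUT_PRICE_PER_M = 0.90    # prompt tokens
OUTPUT_PRICE_PER_M = 1.90   # completion tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 120k-token prompt with an 8k-token completion.
print(f"${request_cost(120_000, 8_000):.4f}")  # → $0.1232
```

At high volume these fractions of a cent add up quickly, which is why the output rate (roughly 2× the input rate) dominates for generation-heavy workloads.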

Context Window

262,144

Tokens

Input Price / 1M

$0.90

Prompt tokens

Output Price / 1M

$1.90

Completion tokens

Intelligence (MMLU)

Benchmark Pending

Massive Multitask Language Understanding

Price History

Morph: Morph V3 Large Pricing Trend

[Chart: input and output price per 1M tokens, Mar 7 – Mar 11. Both series flat at $0.90 (input) and $1.90 (output); 0.0% change over the period.]

Current Input / 1M

$0.90

Current Output / 1M

$1.90

Cheaper Alternatives to Compare

Quick links for comparing cheaper models before a production rollout.

FAQ

Common pricing and benchmark questions for Morph: Morph V3 Large.

How much does Morph: Morph V3 Large cost per 1M input tokens?

Morph: Morph V3 Large input pricing is $0.90 per 1M tokens based on the latest synced provider data.

How much does Morph: Morph V3 Large cost per 1M output tokens?

Morph: Morph V3 Large output pricing is $1.90 per 1M tokens based on the latest synced provider data.

What context window does Morph: Morph V3 Large support?

Morph: Morph V3 Large supports a context window of 262,144 tokens.
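A rough pre-flight check against that 262,144-token limit can be sketched as below. The 4-characters-per-token ratio is a crude English-text heuristic, not Morph's actual tokenizer, and the reserved output budget is an assumed default; use a real tokenizer for production:

```python
# Sketch: rough check that a prompt (plus reserved completion budget)
# fits Morph V3 Large's 262,144-token context window.
# Assumption: ~4 characters per token, a coarse heuristic only.
CONTEXT_WINDOW = 262_144

def fits_context(prompt: str, reserved_output_tokens: int = 4_096) -> bool:
    estimated_prompt_tokens = len(prompt) / 4  # heuristic, not a tokenizer
    return estimated_prompt_tokens + reserved_output_tokens <= CONTEXT_WINDOW

print(fits_context("hello " * 1000))  # → True (well under the limit)
```

For borderline prompts, over-estimate rather than under-estimate: a request that exceeds the window is typically rejected or truncated by the provider.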

How can I compare Morph: Morph V3 Large with cheaper alternatives?

Use the comparison links on this page to open direct model-vs-model pricing and benchmark pages, then evaluate monthly spend projections for your workload.
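A monthly spend projection of the kind described above can be sketched as follows. The alternative model's prices here are placeholders for illustration, not real quotes; substitute figures from the comparison pages:

```python
# Sketch: project monthly spend for Morph V3 Large versus a cheaper
# alternative. The alternative's $0.50/$1.00 rates are hypothetical.
def monthly_spend(requests_per_day: int, in_tokens: int, out_tokens: int,
                  in_price_per_m: float, out_price_per_m: float,
                  days: int = 30) -> float:
    per_request = (in_tokens * in_price_per_m
                   + out_tokens * out_price_per_m) / 1_000_000
    return per_request * requests_per_day * days

# Workload assumption: 5,000 requests/day, 20k in + 2k out tokens each.
morph = monthly_spend(5_000, 20_000, 2_000, 0.90, 1.90)
alt = monthly_spend(5_000, 20_000, 2_000, 0.50, 1.00)  # hypothetical rates
print(f"Morph V3 Large: ${morph:,.2f}/mo vs alternative: ${alt:,.2f}/mo")
# → Morph V3 Large: $3,270.00/mo vs alternative: $1,800.00/mo
```

Running the projection over your own request volume and token mix makes the trade-off concrete before committing to a rollout.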