Model Cost Profile

Arcee AI: Maestro Reasoning

Developer: arcee-ai

Pricing updated Mar 11, 2026

Input rank: #239 · Output rank: #247

Live Pricing

Input: $0.9000 / 1M tokens

Output: $3.30 / 1M tokens

Pricing via OpenRouter API · Last synced Mar 11, 2026

Arcee AI: Maestro Reasoning offers a context window of 131,072 tokens, making it suitable for long-document tasks such as legal document analysis and extensive research projects. Teams using the API pay $0.90 per million input tokens and $3.30 per million output tokens, figures that directly shape budget planning for large-scale applications. Its reasoning capabilities support decision-making workloads in industries such as finance, healthcare, and content generation.
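To make those per-million rates concrete, here is a minimal sketch of a per-request cost estimate at the listed prices. The helper name and the example token counts are illustrative; verify the synced rates against the provider before budgeting.

```python
# Rates from this page's synced OpenRouter data (USD per 1M tokens).
INPUT_PRICE_PER_M = 0.90   # prompt tokens
OUTPUT_PRICE_PER_M = 3.30  # completion tokens

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (prompt_tokens * INPUT_PRICE_PER_M
            + completion_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 10k-token prompt with a 2k-token completion.
print(round(request_cost(10_000, 2_000), 4))  # 0.0156
```

Note that output tokens cost 3.67× more than input tokens here, so completion-heavy workloads (e.g. long reasoning traces) dominate the bill even when prompts are larger.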

Context Window

131,072

Tokens

Input Price / 1M

$0.9000

Prompt tokens

Output Price / 1M

$3.30

Completion tokens

Intelligence (MMLU)

Benchmark Pending

Massive Multitask Language Understanding

Price History

Arcee AI: Maestro Reasoning Pricing Trend

[Price trend chart, Mar 7–Mar 11: input held at $0.9000 / 1M (0.0% change); output held at $3.30 / 1M (0.0% change).]

Current Input / 1M

$0.9000

Current Output / 1M

$3.30

Cheaper Alternatives to Compare

Quick links for cost-down decisions before production rollout.

FAQ

Common pricing and benchmark questions for Arcee AI: Maestro Reasoning.

How much does Arcee AI: Maestro Reasoning cost per 1M input tokens?

Arcee AI: Maestro Reasoning input pricing is $0.9000 per 1M tokens based on the latest synced provider data.

How much does Arcee AI: Maestro Reasoning cost per 1M output tokens?

Arcee AI: Maestro Reasoning output pricing is $3.30 per 1M tokens based on the latest synced provider data.

What context window does Arcee AI: Maestro Reasoning support?

Arcee AI: Maestro Reasoning supports a context window of 131,072 tokens.
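A quick sketch of how that figure constrains requests, assuming prompt and completion tokens share the window (the common convention; confirm for this model). The function name and sample token counts are illustrative, and actual counts depend on the model's tokenizer.

```python
# Maestro Reasoning's advertised context window, in tokens.
CONTEXT_WINDOW = 131_072

def fits_in_context(prompt_tokens: int, max_completion_tokens: int) -> bool:
    """True if prompt plus requested completion stays within the window."""
    return prompt_tokens + max_completion_tokens <= CONTEXT_WINDOW

print(fits_in_context(120_000, 8_000))   # True: 128,000 <= 131,072
print(fits_in_context(128_000, 8_000))   # False: 136,000 > 131,072
```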

How can I compare Arcee AI: Maestro Reasoning with cheaper alternatives?

Use the comparison links on this page to open direct model-vs-model pricing and benchmark pages, then evaluate monthly spend projections for your workload.
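The spend-projection step above can be sketched as follows. The workload numbers and the alternative model's rates are placeholders for illustration, not real prices; only the Maestro Reasoning rates come from this page.

```python
def monthly_spend(requests_per_day: int, avg_in: int, avg_out: int,
                  in_price: float, out_price: float, days: int = 30) -> float:
    """Project monthly USD spend from per-1M-token prices."""
    daily = requests_per_day * (avg_in * in_price + avg_out * out_price) / 1e6
    return daily * days

# 5,000 requests/day, ~3,000 prompt + ~800 completion tokens each.
maestro = monthly_spend(5_000, 3_000, 800, 0.90, 3.30)  # page's rates
cheaper = monthly_spend(5_000, 3_000, 800, 0.30, 1.10)  # hypothetical rates
print(f"Maestro: ${maestro:,.2f}/mo vs alternative: ${cheaper:,.2f}/mo")
```

Running both candidates through the same workload profile turns a per-token price gap into a monthly dollar figure, which is usually the deciding number before a production rollout.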