Model Cost Profile

LiquidAI: LFM2-24B-A2B

Developer: LiquidAI

Pricing updated Mar 11, 2026

Input rank: #37 · Output rank: #47

Live Pricing

Input: $0.0300 / 1M tokens

Output: $0.1200 / 1M tokens

Pricing via OpenRouter API · Last synced Mar 11, 2026

LiquidAI's LFM2-24B-A2B offers a 32,768-token context window, suited to applications that process long inputs, such as legal document review and long-form content generation. At $0.03 per 1 million input tokens and $0.12 per 1 million output tokens, its per-token rates keep costs predictable even for high-volume workloads. Common use cases include conversational agents and document summarization.
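The per-token rates above translate directly into per-request costs. A minimal sketch, using the listed rates (the token counts in the example are hypothetical):

```python
INPUT_PRICE_PER_M = 0.03   # USD per 1M prompt tokens (listed rate)
OUTPUT_PRICE_PER_M = 0.12  # USD per 1M completion tokens (listed rate)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 20,000-token prompt with a 1,500-token completion
print(f"${request_cost(20_000, 1_500):.5f}")  # prints "$0.00078"
```

Note that output tokens cost 4x as much as input tokens here, so completion-heavy workloads (e.g. long-form generation) will skew spend toward the output rate.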

Context Window

32,768

Tokens

Input Price / 1M

$0.0300

Prompt tokens

Output Price / 1M

$0.1200

Completion tokens

Intelligence (MMLU)

Benchmark Pending

Massive Multitask Language Understanding

Price History

LiquidAI: LFM2-24B-A2B Pricing Trend

[Price trend chart, Mar 7 – Mar 11: Input / 1M tokens 0.0% change · Output / 1M tokens 0.0% change]

Current Input / 1M

$0.0300

Current Output / 1M

$0.1200

Cheaper Alternatives to Compare

Quick links for evaluating lower-cost options before production rollout.

FAQ

Common pricing and benchmark questions for LiquidAI: LFM2-24B-A2B.

How much does LiquidAI: LFM2-24B-A2B cost per 1M input tokens?

LiquidAI: LFM2-24B-A2B input pricing is $0.0300 per 1M tokens based on the latest synced provider data.

How much does LiquidAI: LFM2-24B-A2B cost per 1M output tokens?

LiquidAI: LFM2-24B-A2B output pricing is $0.1200 per 1M tokens based on the latest synced provider data.

What context window does LiquidAI: LFM2-24B-A2B support?

LiquidAI: LFM2-24B-A2B supports a context window of 32,768 tokens.

How can I compare LiquidAI: LFM2-24B-A2B with cheaper alternatives?

Use the comparison links on this page to open direct model-vs-model pricing and benchmark pages, then evaluate monthly spend projections for your workload.