Model Cost Profile

LiquidAI: LFM2.5-1.2B-Instruct (free)

Developer: liquid

Pricing updated Mar 11, 2026

Input rank: #8 · Output rank: #8

Live Pricing

Input: $0.0000

Output: $0.0000

Pricing via OpenRouter API · Last synced Mar 11, 2026

LiquidAI's LFM2.5-1.2B-Instruct model offers a 32,768-token context window, enough for multi-turn chat and moderately long documents in applications such as chatbots and content generation. With a price of $0.00 for both input and output tokens, this free model is particularly attractive for teams looking to minimize costs while experimenting with AI capabilities. It can be integrated into a range of workflows, from customer support automation to educational tools, without incurring any usage fees.
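As a quick illustration of how per-1M-token pricing translates into per-request cost, the sketch below multiplies token counts by the listed rates. The function name and example token counts are hypothetical; the $0.00 rates are the ones shown on this page.

```python
# Per-request cost under per-1M-token pricing.
# The $0.00 rates match this page's listed prices for the free tier;
# swap in another model's rates to compare.

def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_1m: float, output_price_per_1m: float) -> float:
    """Return the USD cost of a single request."""
    return (input_tokens / 1_000_000) * input_price_per_1m \
         + (output_tokens / 1_000_000) * output_price_per_1m

# LFM2.5-1.2B-Instruct (free): $0.00 input, $0.00 output
print(request_cost(2_000, 500, 0.0, 0.0))  # 0.0
```

With a paid model (say $0.50 in / $1.50 out per 1M tokens), the same 2,000-in / 500-out request would cost $0.00175, which is the kind of delta the comparison links below are meant to surface.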

Context Window

32,768

Tokens

Input Price / 1M

$0.0000

Prompt tokens

Output Price / 1M

$0.0000

Completion tokens

Intelligence (MMLU)

Benchmark Pending

Massive Multitask Language Understanding

Price History

LiquidAI: LFM2.5-1.2B-Instruct (free) Pricing Trend

Chart: input and output price per 1M tokens, Mar 7 to Mar 11 (flat at $0.000000).

Current Input / 1M

$0.000000

Current Output / 1M

$0.000000

Cheaper Alternatives to Compare

Quick links for evaluating cheaper options before production rollout.

FAQ

Common pricing and benchmark questions for LiquidAI: LFM2.5-1.2B-Instruct (free).

How much does LiquidAI: LFM2.5-1.2B-Instruct (free) cost per 1M input tokens?

LiquidAI: LFM2.5-1.2B-Instruct (free) input pricing is $0.0000 per 1M tokens based on the latest synced provider data.

How much does LiquidAI: LFM2.5-1.2B-Instruct (free) cost per 1M output tokens?

LiquidAI: LFM2.5-1.2B-Instruct (free) output pricing is $0.0000 per 1M tokens based on the latest synced provider data.

What context window does LiquidAI: LFM2.5-1.2B-Instruct (free) support?

LiquidAI: LFM2.5-1.2B-Instruct (free) supports a context window of 32,768 tokens.
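A rough pre-flight check against the 32,768-token window can be sketched as below. The 4-characters-per-token ratio is a crude English-text heuristic, not the model's real tokenizer, so treat the result as an estimate only.

```python
# Rough check that a prompt plus its expected completion fits the
# 32,768-token context window listed on this page. The chars-per-token
# ratio is an approximation; use the model's tokenizer for exact counts.
CONTEXT_WINDOW = 32_768

def fits_context(prompt: str, max_completion_tokens: int,
                 chars_per_token: float = 4.0) -> bool:
    est_prompt_tokens = len(prompt) / chars_per_token
    return est_prompt_tokens + max_completion_tokens <= CONTEXT_WINDOW

print(fits_context("hello " * 1000, 1024))  # True
```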

How can I compare LiquidAI: LFM2.5-1.2B-Instruct (free) with cheaper alternatives?

Use the comparison links on this page to open direct model-vs-model pricing and benchmark pages, then evaluate monthly spend projections for your workload.
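The monthly spend projection mentioned above can be sketched as follows, assuming a fixed daily request volume and average token counts. The alternative model's rates here are placeholders for illustration, not quotes from any provider.

```python
# Monthly spend projection for comparing models under per-1M-token pricing.
# Workload numbers and the paid model's rates are hypothetical placeholders.

def monthly_spend(requests_per_day: int, avg_in: int, avg_out: int,
                  in_price: float, out_price: float, days: int = 30) -> float:
    """Return projected USD spend for one month at the given rates."""
    tokens_in = requests_per_day * avg_in * days
    tokens_out = requests_per_day * avg_out * days
    return (tokens_in * in_price + tokens_out * out_price) / 1_000_000

# This page's model (free): $0.00 in / $0.00 out
free_tier = monthly_spend(10_000, 1_500, 400, 0.0, 0.0)
# A hypothetical paid alternative: $0.10 in / $0.30 out per 1M tokens
paid_alt = monthly_spend(10_000, 1_500, 400, 0.10, 0.30)
print(free_tier, paid_alt)  # 0.0 81.0
```

Running the same workload numbers through each candidate model's rates gives a like-for-like monthly figure to set against any quality differences.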