Context Window: 65,536 tokens
Model Cost Profile
Developer: z-ai
Pricing updated Mar 11, 2026
Live Pricing
Input: $0.60 / 1M tokens
Output: $1.80 / 1M tokens
Pricing via OpenRouter API · Last synced Mar 11, 2026 · MMLU score via public benchmark data
Z.ai: GLM 4.5V, developed by z-ai, offers a 65,536-token context window, making it well suited to applications that require extensive text analysis, such as legal document review or long-form content generation. Teams using this API can expect input costs of $0.60 per million tokens and output costs of $1.80 per million tokens, figures that matter when budgeting large-scale projects. Its capabilities support efficient handling of complex queries, improving productivity in research-heavy environments.
Context Window: 65,536 tokens
Input Price / 1M: $0.60 (prompt tokens)
Output Price / 1M: $1.80 (completion tokens)
Intelligence (MMLU): 75.1 (Massive Multitask Language Understanding)
Standardized evaluation scores for Z.ai: GLM 4.5V.
| Usage Type | Price / 1M Tokens |
|---|---|
| Input (Prompt) | $0.60 |
| Output (Completion) | $1.80 |
Price History
Current Input / 1M: $0.60
Current Output / 1M: $1.80
Estimate monthly spend for Z.ai: GLM 4.5V based on your workload.
Estimated Monthly Cost: $37
25M input × $0.60 + 12M output × $1.80 = $36.60, rounded to $37
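The estimate above can be reproduced with a short script. This is a minimal sketch using the per-million-token rates listed on this page; the function name and workload values are illustrative.

```python
# Rates from this page, in USD per 1M tokens.
INPUT_PRICE_PER_M = 0.60   # prompt tokens
OUTPUT_PRICE_PER_M = 1.80  # completion tokens

def monthly_cost(input_m_tokens: float, output_m_tokens: float) -> float:
    """Estimated monthly spend in USD for a workload given in
    millions of input (prompt) and output (completion) tokens."""
    return (input_m_tokens * INPUT_PRICE_PER_M
            + output_m_tokens * OUTPUT_PRICE_PER_M)

# Example workload from this page: 25M input + 12M output tokens.
print(round(monthly_cost(25, 12)))  # → 37 (from $36.60)
```

Swapping in your own token volumes gives a quick pre-rollout budget check.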
Quick links for cost-reduction decisions before production rollout.
Common pricing and benchmark questions for Z.ai: GLM 4.5V.
Z.ai: GLM 4.5V input pricing is $0.60 per 1M tokens based on the latest synced provider data.
Z.ai: GLM 4.5V output pricing is $1.80 per 1M tokens based on the latest synced provider data.
Z.ai: GLM 4.5V supports a context window of 65,536 tokens.
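A rough sketch of checking whether a prompt fits that 65,536-token window. The ~4 characters-per-token ratio is a common heuristic, not an exact count; precise numbers require the model's own tokenizer.

```python
CONTEXT_WINDOW = 65_536   # tokens, from this page
CHARS_PER_TOKEN = 4       # heuristic approximation, not exact

def fits_context(text: str, reserved_for_output: int = 1024) -> bool:
    """Return True if the text's estimated token count, plus room
    reserved for the completion, stays within the context window."""
    est_tokens = len(text) / CHARS_PER_TOKEN
    return est_tokens + reserved_for_output <= CONTEXT_WINDOW

print(fits_context("hello " * 1000))  # ~1,500 estimated tokens → True
```

For long-document workloads such as legal review, a check like this helps decide when input must be chunked before it is sent.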
Use the comparison links on this page to open direct model-vs-model pricing and benchmark pages, then evaluate monthly spend projections for your workload.