Chinese AI startup Zhipu AI, aka z.ai, is back this week with an eye-popping new frontier large language model: GLM-5.
The latest in z.ai's ongoing and frequently impressive GLM series, it retains an open source MIT License, well suited for enterprise deployment, and, in one of several notable achievements, posts a record-low hallucination rate on the independent Artificial Analysis Intelligence Index v4.0.
With a score of -1 on the AA-Omniscience Index, a massive 35-point improvement over its predecessor, GLM-5 now leads the entire AI industry, including U.S. competitors like Google, OpenAI and Anthropic, in knowledge reliability by knowing when to abstain rather than fabricate information.
Beyond its reasoning prowess, GLM-5 is built for high-utility knowledge work. It features native "Agent Mode" capabilities that let it turn raw prompts or source materials directly into professional office documents, including ready-to-use .docx, .pdf, and .xlsx files.
Whether generating detailed financial reports, high school sponsorship proposals, or complex spreadsheets, GLM-5 delivers results in real-world formats that integrate directly into enterprise workflows.
It is also disruptively priced at roughly $0.80 per million input tokens and $2.56 per million output tokens, roughly 6x cheaper than proprietary competitors like Claude Opus 4.6, making state-of-the-art agentic engineering more affordable than ever before. Here's what else enterprise decision makers should know about the model and its training.
Technology: scaling for agentic efficiency
At the heart of GLM-5 is a massive leap in raw parameters. The model scales from the 355B parameters of GLM-4.5 to a staggering 744B parameters, with 40B active per token in its Mixture-of-Experts (MoE) architecture. This growth is supported by an increase in pre-training data to 28.5T tokens.
To handle training inefficiencies at this magnitude, z.ai developed "slime," a novel asynchronous reinforcement learning (RL) infrastructure.
Traditional RL often suffers from "long-tail" bottlenecks; slime breaks this lockstep by allowing trajectories to be generated independently, enabling the fine-grained iterations necessary for complex agentic behavior.
By integrating system-level optimizations like Active Partial Rollouts (APRIL), slime addresses the generation bottlenecks that can consume over 90% of RL training time, significantly accelerating the iteration cycle for complex agentic tasks.
The framework's design is centered on a tripartite modular system: a high-performance training module powered by Megatron-LM, a rollout module using SGLang and custom routers for high-throughput data generation, and a centralized Data Buffer that manages prompt initialization and rollout storage.
By enabling adaptive verifiable environments and multi-turn compilation feedback loops, slime provides the robust, high-throughput foundation required to transition AI from simple chat interactions toward rigorous, long-horizon systems engineering.
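The decoupling slime describes, with rollout workers generating trajectories independently of the trainer that consumes them through a shared buffer, can be illustrated with a minimal Python sketch. The names and structure below are invented for illustration only and are not slime's actual API.

```python
import queue
import random
import threading
import time

# Hypothetical illustration of asynchronous RL rollouts: workers push trajectories
# into a shared buffer as they finish, while the trainer consumes batches without
# waiting for the slowest ("long-tail") rollout to complete.
buffer = queue.Queue()

def rollout_worker(worker_id: int, num_trajectories: int) -> None:
    """Generate trajectories of uneven duration and push each one as soon as it is done."""
    for i in range(num_trajectories):
        time.sleep(random.uniform(0.01, 0.3))  # simulate variable generation time
        buffer.put({"worker": worker_id, "trajectory": i, "reward": random.random()})

def trainer(batch_size: int, num_steps: int) -> None:
    """Consume whatever trajectories are ready; never block on stragglers."""
    for step in range(num_steps):
        batch = [buffer.get() for _ in range(batch_size)]
        mean_reward = sum(t["reward"] for t in batch) / batch_size
        print(f"step {step}: trained on {batch_size} trajectories, mean reward {mean_reward:.2f}")

workers = [threading.Thread(target=rollout_worker, args=(w, 8)) for w in range(4)]
for w in workers:
    w.start()
trainer(batch_size=4, num_steps=8)
for w in workers:
    w.join()
```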
To keep deployment manageable, GLM-5 integrates DeepSeek Sparse Attention (DSA), preserving a 200K context capacity while drastically reducing costs.
End-to-end knowledge work
z.ai is framing GLM-5 as an "office" tool for the AGI era. While earlier models focused on snippets, GLM-5 is built to deliver ready-to-use documents.
It can autonomously transform prompts into formatted .docx, .pdf, and .xlsx files, ranging from financial reports to sponsorship proposals.
In practice, this means the model can decompose high-level goals into actionable subtasks and perform "Agentic Engineering," where humans define quality gates while the AI handles execution.
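For teams that want to try this document-generation workflow programmatically, here is a minimal sketch assuming GLM-5 is served through an OpenAI-compatible gateway such as OpenRouter; the model identifier `z-ai/glm-5` is an assumption and should be checked against the provider's actual catalog.

```python
# Minimal sketch: prompting GLM-5 for a structured document via an
# OpenAI-compatible endpoint. Assumption: the model is listed on OpenRouter
# under an id like "z-ai/glm-5" (hypothetical; verify before use).
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # OpenAI-compatible gateway
    api_key="YOUR_OPENROUTER_API_KEY",
)

response = client.chat.completions.create(
    model="z-ai/glm-5",  # hypothetical model id
    messages=[
        {"role": "system", "content": "You are an agentic office assistant."},
        {"role": "user", "content": "Draft a one-page sponsorship proposal for a "
                                    "high school robotics team, structured so it can "
                                    "be exported to .docx."},
    ],
)
print(response.choices[0].message.content)
```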
High performance
GLM-5's benchmarks make it the new strongest open source model in the world, according to Artificial Analysis, surpassing Chinese rival Moonshot's new Kimi K2.5 released just two weeks ago, and showing that Chinese AI companies have nearly caught up with far better resourced proprietary Western rivals.
According to z.ai's own materials shared today, GLM-5 ranks near state-of-the-art on several key benchmarks:
SWE-bench Verified: GLM-5 achieved a score of 77.8, outperforming Gemini 3 Pro (76.2) and approaching Claude Opus 4.6 (80.9).
Vending-Bench 2: In a simulation of running a business, GLM-5 ranked #1 among open-source models with a final balance of $4,432.12.
Beyond performance, GLM-5 is aggressively undercutting the market. Live on OpenRouter as of February 11, 2026, it is priced at roughly $0.80–$1.00 per million input tokens and $2.56–$3.20 per million output tokens. That puts it in the mid-range compared to other leading LLMs, but given its top-tier benchmark performance, it is what one might call a "steal."
| Model | Input (per 1M tokens) | Output (per 1M tokens) | Total Cost (1M in + 1M out) |
| --- | --- | --- | --- |
| Qwen 3 Turbo | $0.05 | $0.20 | $0.25 |
| Grok 4.1 Fast (reasoning) | $0.20 | $0.50 | $0.70 |
| Grok 4.1 Fast (non-reasoning) | $0.20 | $0.50 | $0.70 |
| deepseek-chat (V3.2-Exp) | $0.28 | $0.42 | $0.70 |
| deepseek-reasoner (V3.2-Exp) | $0.28 | $0.42 | $0.70 |
| Gemini 3 Flash Preview | $0.50 | $3.00 | $3.50 |
| Kimi-k2.5 | $0.60 | $3.00 | $3.60 |
| GLM-5 | $1.00 | $3.20 | $4.20 |
| ERNIE 5.0 | $0.85 | $3.40 | $4.25 |
| Claude Haiku 4.5 | $1.00 | $5.00 | $6.00 |
| Qwen3-Max (2026-01-23) | $1.20 | $6.00 | $7.20 |
| Gemini 3 Pro (≤200K) | $2.00 | $12.00 | $14.00 |
| GPT-5.2 | $1.75 | $14.00 | $15.75 |
| Claude Sonnet 4.5 | $3.00 | $15.00 | $18.00 |
| Gemini 3 Pro (>200K) | $4.00 | $18.00 | $22.00 |
| Claude Opus 4.6 | $5.00 | $25.00 | $30.00 |
| GPT-5.2 Pro | $21.00 | $168.00 | $189.00 |
At its quoted API rates, GLM-5 is roughly 6x cheaper on input and nearly 10x cheaper on output than Claude Opus 4.6 ($5/$25). This launch also confirms rumors that Zhipu AI was behind "Pony Alpha," a stealth model that previously crushed coding benchmarks on OpenRouter.
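The arithmetic behind those multiples, using z.ai's quoted API rates and Claude Opus 4.6's list prices from the table above, works out as follows.

```python
# Cost-ratio arithmetic for the headline pricing claim.
glm5_input, glm5_output = 0.80, 2.56    # USD per 1M tokens (z.ai quoted rates)
opus_input, opus_output = 5.00, 25.00   # USD per 1M tokens (Claude Opus 4.6)

print(f"Input:  {opus_input / glm5_input:.1f}x cheaper")   # ~6.3x
print(f"Output: {opus_output / glm5_output:.1f}x cheaper")  # ~9.8x
```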
Still, despite the high benchmarks and low price, not all early users are enthusiastic about the model, noting that its raw performance doesn't tell the whole story.
Lukas Petersson, co-founder of the safety-focused autonomous AI startup Andon Labs, remarked on X: "After hours of reading GLM-5 traces: an extremely effective model, but far less situationally aware. Achieves goals through aggressive tactics but doesn't reason about its situation or leverage experience. That is scary. This is how you get a paperclip maximizer."
The "paperclip maximizer" refers to a hypothetical scenario described by Oxford philosopher Nick Bostrom back in 2003, in which an AI or other autonomous creation unintentionally brings about an apocalyptic outcome or human extinction by following a seemingly benign instruction, like maximizing the number of paperclips produced, to an extreme degree, redirecting all resources necessary for human (or other) life or otherwise making life impossible through its single-minded pursuit of that seemingly benign objective.
Should your enterprise adopt GLM-5?
Enterprises looking to escape vendor lock-in will find GLM-5's MIT License and open-weights availability a significant strategic advantage. Unlike closed-source competitors that keep intelligence behind proprietary walls, GLM-5 lets organizations host their own frontier-level intelligence.
Adoption is not without friction. The sheer scale of GLM-5 (744B parameters) requires a massive hardware floor that may be out of reach for smaller companies without significant cloud or on-premise GPU clusters.
Security leaders must also weigh the geopolitical implications of a flagship model from a China-based lab, especially in regulated industries where data residency and provenance are strictly audited.
Moreover, the shift toward more autonomous AI agents introduces new governance risks. As models move from "chat" to "work," they begin to operate across apps and files autonomously. Without robust agent-specific permissions and the human-in-the-loop quality gates established by enterprise data leaders, the risk of autonomous error increases exponentially.
Ultimately, GLM-5 is a "buy" for organizations that have outgrown simple copilots and are ready to build a truly autonomous office.
It is for engineers who need to refactor a legacy backend or require a "self-healing" pipeline that doesn't sleep.
While Western labs continue to optimize for "thinking" and reasoning depth, z.ai is optimizing for execution and scale.
Enterprises that adopt GLM-5 today aren't just buying a cheaper model; they're betting on a future where the most valuable AI is the one that can finish the project without being asked twice.

