Chinese electronics and automotive maker Xiaomi stunned the global AI community today with the release of MiMo-V2-Pro, a new 1-trillion-parameter foundation model with benchmarks approaching those of U.S. AI giants OpenAI and Anthropic, but at around a sixth or a seventh of the cost when accessed over its proprietary API, and, importantly, when sending fewer than 256,000 tokens' worth of data back and forth.
Led by Fuli Luo, a veteran of the disruptive DeepSeek R1 project, the release represents what Luo characterizes as a "quiet ambush" on the global frontier. In addition, Luo stated in an X post that the company does plan to open-source a model variant from this latest release "when the models are stable enough to deserve it."
By focusing on the "action space" of intelligence, shifting from code generation to the autonomous operation of digital "claws," Xiaomi is attempting to leapfrog the conversational paradigm entirely.
Prior to this foray into frontier AI, Beijing-based Xiaomi established itself as a titan of the Internet of Things and consumer hardware.
Globally recognized as the world's third-largest smartphone manufacturer, Xiaomi spent the early 2020s executing a high-stakes entry into the automotive sector. Its electric vehicles (EVs), such as the SU7 and the recently launched YU7 SUV, have turned the company into a vertically integrated powerhouse capable of merging hardware, software, and now advanced reasoning.
This pedigree in physical-world engineering informs MiMo-V2-Pro's architecture; it is built to be the "brain" of complex systems, whether those systems are managing global supply chains or navigating the intricate scaffolds of an autonomous coding agent.
Technology: The architecture of agency
The central challenge of the "Agent Era" is sustaining high-fidelity reasoning over massive spans of data without incurring a prohibitive "intelligence tax" in latency or cost. MiMo-V2-Pro addresses this through a sparse architecture: while it houses 1T total parameters, only 42B are active during any single forward pass, making it roughly three times the size of its predecessor, MiMo-V2-Flash.
The model's efficiency is rooted in an evolved Hybrid Attention mechanism. Standard transformers typically face a quadratic increase in compute requirements as context grows; MiMo-V2-Pro uses a 7:1 hybrid ratio (increased from 5:1 in the Flash version) to manage its massive 1M-token context window. This architectural choice allows the model to maintain a deep "memory" of long-running tasks without the performance degradation often seen in frontier models.
The analogy: think of the model not as a student reading a book page by page, but as an experienced researcher in a vast library. The 7:1 ratio allows the model to "skim" 85% of the data for context while applying high-density attention to the 15% most relevant to the task at hand.
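As a rough illustration of why the hybrid ratio matters, the sketch below compares attention compute for a hybrid stack against full attention in every layer. Reading "7:1" as seven efficient (e.g., sliding-window) layers per full-attention layer is an assumption, as are the concrete numbers (64 layers, a 4K local window); Xiaomi has not published these details.

```python
# Back-of-envelope sketch of hybrid attention savings.
# ASSUMPTION: "7:1" means 7 efficient (e.g., sliding-window) layers for
# every 1 full-attention layer; the article does not specify the mechanism.

def hybrid_attention_cost(num_layers: int, ratio: tuple[int, int],
                          context: int, window: int) -> float:
    """Relative attention FLOPs of a hybrid stack vs. all-full attention.

    A full-attention layer scales as context^2; a windowed layer
    scales as context * window.
    """
    efficient, full = ratio
    group = efficient + full
    full_layers = num_layers * full / group
    eff_layers = num_layers * efficient / group
    hybrid = full_layers * context**2 + eff_layers * context * window
    baseline = num_layers * context**2
    return hybrid / baseline

# Hypothetical numbers: 64 layers, 1M-token context, 4K local window.
print(f"{hybrid_attention_cost(64, (7, 1), 1_000_000, 4_096):.3f}")  # → 0.129
```

At a 1M-token context, almost all of the savings come from the seven-eighths of layers that never pay the quadratic cost; the local window term is nearly negligible.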
This is paired with a lightweight Multi-Token Prediction (MTP) layer, which allows the model to anticipate and generate multiple tokens simultaneously, drastically reducing the latency of the "thinking" phases of agentic workflows. According to Luo, these structural choices were made months in advance, specifically to provide a "structural advantage" for the unexpected speed at which the industry shifted toward agents.
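Neither Xiaomi's MTP design nor its acceptance rates are public, but the general idea, drafting several tokens per forward pass and keeping the correct prefix, can be shown with a toy simulation. The function name, the draft length of 4, and the 0.8 acceptance probability below are all hypothetical.

```python
# Toy illustration of multi-token-prediction-style decoding.
# ASSUMPTION: an MTP head drafts k tokens per step and the main model
# verifies them, keeping the longest accepted prefix; this is a generic
# speculative scheme, not Xiaomi's published design.

import random

def decode_steps(total_tokens: int, draft_k: int, accept_p: float,
                 seed: int = 0) -> int:
    """Count forward passes needed to emit total_tokens.

    Each pass drafts draft_k tokens; each drafted token is accepted
    independently with probability accept_p, and a pass always yields
    at least one token (the model's own next-token prediction).
    """
    rng = random.Random(seed)
    steps, emitted = 0, 0
    while emitted < total_tokens:
        accepted = 0
        for _ in range(draft_k):
            if rng.random() < accept_p:
                accepted += 1
            else:
                break
        emitted += max(1, accepted)
        steps += 1
    return steps

baseline = decode_steps(1000, draft_k=1, accept_p=0.0)  # plain decoding
mtp = decode_steps(1000, draft_k=4, accept_p=0.8)       # hypothetical MTP
print(baseline, mtp)  # fewer forward passes means lower latency
```

With these made-up numbers the drafted version needs roughly 2-3x fewer forward passes, which is the kind of latency win MTP targets in long agentic loops.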
Product and benchmarking: A third-party reality check
Xiaomi's internal data paints a picture of a model that excels at "real-world" tasks over synthetic benchmarks. On GDPval-AA, a benchmark measuring performance on agentic real-world work tasks, MiMo-V2-Pro achieved an Elo of 1426, placing it ahead of leading Chinese peers like GLM-5 (1406) and Kimi K2.5 (1283).
While it still trails Western "max effort" models like Claude Sonnet 4.6 (1633) in raw Elo, it represents the highest recorded performance for a Chinese-origin model in this category.
The third-party benchmarking group Artificial Analysis verified these claims, placing MiMo-V2-Pro at #10 on its global Intelligence Index with a score of 49. This puts it in the same tier as GPT-5.2 Codex and ahead of Grok 4.20 Beta. These results suggest that Xiaomi has successfully built a model capable of the high-level reasoning required for engineering and manufacturing tasks.
Key metrics from Artificial Analysis highlight a significant leap over the previous open-weights version, MiMo-V2-Flash (which scored 41):
- Hallucination rate: The Pro model cut hallucinations to 30%, a sharp improvement over the Flash model's 48%.
- Omniscience index: It scored +5, placing it ahead of GLM-5 (+2) and Kimi K2.5 (-8).
- Token efficiency: To run the entire Intelligence Index, MiMo-V2-Pro required only 77M output tokens, significantly fewer than GLM-5 (109M) or Kimi K2.5 (89M), indicating a more concise and efficient reasoning process.
Xiaomi's own charts further emphasize its "General Agent" and "Coding Agent" capabilities. On ClawEval, a benchmark for agentic scaffolds, the model scored 61.5, approaching the performance of Claude Opus 4.6 (66.3) and significantly outpacing GPT-5.2 (50.0). In coding-specific environments like Terminal-Bench 2.0, it achieved an 86.7, suggesting high reliability when executing commands in a live terminal environment.
How enterprises should evaluate MiMo-V2-Pro for usage
For the personas found in modern AI organizations, from infrastructure to security, MiMo-V2-Pro represents a paradigm shift in the price-quality curve.
Infrastructure decision-makers will find MiMo-V2-Pro a compelling candidate on the Pareto frontier of intelligence versus cost. Artificial Analysis reported that running its index cost only $348 for MiMo-V2-Pro, compared to $2,304 for GPT-5.2 and $2,486 for Claude Opus 4.6.
For organizations managing GPU clusters or procurement, the ability to access top-10 global intelligence at roughly one-seventh the cost of Western incumbents is a powerful incentive for production-scale testing.
Data decision-makers can leverage the 1M-token context window for RAG-ready architectures, allowing them to feed entire enterprise codebases or documentation sets into a single prompt without the fragmentation required by smaller-context models.
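Before committing to single-prompt ingestion, teams will want to estimate whether a corpus actually fits in 1M tokens. A minimal sketch, assuming the common rule of thumb of roughly 4 characters per token (actual tokenizer counts vary by model and content):

```python
# Rough check of whether a codebase fits in a 1M-token context window.
# ASSUMPTION: ~4 characters per token, a common heuristic for English
# text and code; real counts depend on the model's tokenizer.

from pathlib import Path

def estimated_tokens(root: str, exts: tuple[str, ...] = (".py", ".md")) -> int:
    """Estimate the token count of all matching files under root."""
    total_chars = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            total_chars += len(path.read_text(errors="ignore"))
    return total_chars // 4  # chars-per-token heuristic

CONTEXT_WINDOW = 1_000_000
# Example: fits = estimated_tokens("./my_repo") <= CONTEXT_WINDOW
```

Anything that overruns the window still needs chunking or retrieval, so the heuristic is best used as a go/no-go check before abandoning a RAG pipeline.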
A systems/orchestration decision-maker should evaluate MiMo-V2-Pro as a primary "brain" for multi-agent coordination. Because the model is optimized for OpenClaw and Claude Code, it can handle long-horizon planning and precise tool use without the constant human intervention that plagues earlier models.
Its high score on GDPval-AA suggests it is particularly well suited to the workflow and orchestration layer needed to scale AI across the enterprise. It enables systems that move beyond simple automation into complex, multi-step problem solving.
However, security decision-makers must exercise caution. The very "agentic" nature that makes the model powerful, namely its ability to use terminals and manipulate data, increases the surface area for prompt injection and unauthorized model access.
While its low hallucination rate (30%) is a defensive boon, the lack of public weights (unlike the Flash version) means internal security teams cannot perform the deep "model-level" audits often required for highly sensitive deployments. Any enterprise implementation must be accompanied by robust monitoring and auditability protocols.
Pricing, availability, and the path forward
Xiaomi has priced MiMo-V2-Pro to dominate the developer market. Pricing is tiered by context usage, with aggressive rates for caching to support high-frequency reasoning tasks.
- MiMo-V2-Pro (up to 256K context): $1 per 1M input tokens and $3 per 1M output tokens
- MiMo-V2-Pro (256K-1M context): $2 per 1M input tokens and $6 per 1M output tokens
- Cache read: $0.20 per 1M tokens on the lower tier and $0.40 on the higher tier
- Cache write: temporarily free ($0)
Here's how it stacks up against other leading frontier models around the world:
| Model | Input (per 1M tokens) | Output (per 1M tokens) | Total Cost | Source |
|---|---|---|---|---|
| Grok 4.1 Fast | $0.20 | $0.50 | $0.70 | |
| MiniMax M2.7 | $0.30 | $1.20 | $1.50 | |
| Gemini 3 Flash | $0.50 | $3.00 | $3.50 | |
| Kimi-K2.5 | $0.60 | $3.00 | $3.60 | |
| MiMo-V2-Pro (≤256K) | $1.00 | $3.00 | $4.00 | |
| GLM-5-Turbo | $0.96 | $3.20 | $4.16 | |
| GLM-5 | $1.00 | $3.20 | $4.20 | |
| Claude Haiku 4.5 | $1.00 | $5.00 | $6.00 | |
| Qwen3-Max | $1.20 | $6.00 | $7.20 | |
| Gemini 3 Pro | $2.00 | $12.00 | $14.00 | |
| GPT-5.2 | $1.75 | $14.00 | $15.75 | |
| GPT-5.4 | $2.50 | $15.00 | $17.50 | |
| Claude Sonnet 4.5 | $3.00 | $15.00 | $18.00 | |
| Claude Opus 4.6 | $5.00 | $25.00 | $30.00 | |
| GPT-5.4 Pro | $30.00 | $180.00 | $210.00 | |
This aggressive positioning is designed to encourage the high-intensity usage flows that define the next generation of software. The model is currently available via Xiaomi's first-party API only, with no support yet for image or other multimodal input (a notable omission in an era of "Omni" models), though Xiaomi has teased a separate MiMo-V2-Omni for those needs.
The "Hunter Alpha" period on OpenRouter proved that the market has a high appetite for this specific blend of efficiency and reasoning. Fuli Luo's philosophy, that research velocity is fueled by a "genuine love for the world you're building for," has resulted in a model that ranks 2nd in China and 8th worldwide on established intelligence indices.
Whether it remains a "quiet" ambush or becomes the foundation for a global realignment of AI power depends on how quickly developers adopt the "action space" over the "chat window." For now, Xiaomi has moved the goalposts: the question is no longer just "Can it talk?" but "Can it act?"

