China's GLM-5 Just Claimed the Open-Source Crown. Nobody Noticed.
While you were doom-scrolling about OpenAI drama, Zhipu quietly released the world's strongest open-source model.

Here's a fun fact: while Western tech Twitter was having another meltdown about whatever Sam Altman said this week, China just dropped the mic on the entire AI scene.
Meet GLM-5, Zhipu AI's latest creation — a 745-billion-parameter beast that's now officially the strongest open-source language model on the planet. And here's the kicker: barely anyone noticed.
Released on February 11th under an MIT licence (yes, proper open-source, not OpenAI's "we'll-call-it-open-because-we-like-the-name" nonsense), GLM-5 sailed straight to the top of Artificial Analysis's benchmarks, leaving Moonshot AI's previous champion in the dust.
Zhipu's stock (trading as Knowledge Atlas Technology in Hong Kong) absolutely exploded — up 30% to HK$405 in a single day. UCloud Tech, which provides computing infrastructure for Zhipu, hit its daily trading limit with a 20% surge. Chinese AI stocks just had their biggest rally in months.
But here's what's really significant: GLM-5 was trained entirely on Chinese Huawei chips. Not a single NVIDIA H100 in sight. While everyone else is scrambling for GPU allocations, Zhipu just proved you can build world-class AI with whatever hardware you've got.
The model boasts record-low hallucination rates, which Zhipu attributes to an RL training technique called "slime". It's available through Z.ai's platform and costs about $0.80-$1.00 per million input tokens. That's cheaper than most Western alternatives and competitive with even the most aggressive pricing from OpenAI and Anthropic.
For context on why this matters commercially: at $0.80 per million tokens, running a mid-size customer service operation on GLM-5 would cost roughly $200/month. The equivalent OpenAI bill would be $800-1,200. For ecommerce companies running thousands of product descriptions, translations, or search queries through AI, the cost difference compounds fast.
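The arithmetic behind those figures can be sketched in a few lines. The token volume (~250M/month) is back-derived from the article's $200 figure at $0.80 per million, and the comparison rates are illustrative values chosen to reproduce the quoted $800-1,200 range, not published price lists:

```python
def monthly_cost(tokens_millions: float, price_per_million: float) -> float:
    """Estimate a monthly API bill from token volume and per-million-token pricing."""
    return tokens_millions * price_per_million

# Hypothetical mid-size customer service operation: ~250M input tokens/month.
volume = 250

glm5 = monthly_cost(volume, 0.80)        # GLM-5 at $0.80 per million input tokens
western_low = monthly_cost(volume, 3.20)   # illustrative rate implying ~$800/month
western_high = monthly_cost(volume, 4.80)  # illustrative rate implying ~$1,200/month

print(f"GLM-5:   ${glm5:,.0f}/month")
print(f"Western: ${western_low:,.0f}-${western_high:,.0f}/month")
```

The point isn't the exact numbers, which vary with output-token pricing and caching; it's that a 4-6x per-token gap compounds linearly with volume.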
As if that wasn't enough, DeepSeek decided this was the perfect time to remind everyone they exist too. They just upgraded their flagship model with a context window that jumped from 128,000 tokens to over 1 million — roughly an eightfold increase. Their knowledge cutoff now goes to May 2025.
DeepSeek V4 dropped February 17th, and word is it's particularly strong for coding tasks. They've integrated something called "Engram conditional memory technology" that can efficiently handle contexts exceeding one million tokens. That's enough to swallow your entire codebase and still ask for more.
The practical implications for commerce: a model with 1M+ context can ingest an entire product catalogue, a complete Shopify theme, or a full year of customer support transcripts in a single prompt. That enables workflows that simply weren't possible with 128K context windows.
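A quick way to sanity-check whether a corpus fits in one prompt, using the common rough rule of ~4 characters per token (a heuristic, not any model's actual tokeniser; the catalogue size here is hypothetical):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English-like text.
    return len(text) // 4

def fits_in_context(texts: list[str], context_window: int) -> tuple[int, bool]:
    """Return (estimated total tokens, whether they fit in one prompt)."""
    total = sum(estimate_tokens(t) for t in texts)
    return total, total <= context_window

# Hypothetical catalogue: 4,000 product descriptions, ~800 characters each.
catalogue = ["x" * 800] * 4000

total, fits_1m = fits_in_context(catalogue, 1_000_000)   # ~800K tokens: fits
_, fits_128k = fits_in_context(catalogue, 128_000)       # does not fit
print(total, fits_1m, fits_128k)
```

At 128K you'd be chunking, retrieving, and stitching answers together; at 1M+ the whole catalogue rides along in a single request.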
Here's what the West is missing: this isn't just about better models. This is about technological independence. While Silicon Valley companies are burning through venture capital and fighting over NVIDIA chips, Chinese companies are quietly building their own ecosystem.
Huawei chips. Chinese training infrastructure. MIT-licensed models that anyone can use, modify, and deploy without asking permission from any Valley oligarch.
The Chinese AI race isn't just accelerating — it's accelerating independently. They're not waiting for export licences or hoping the next administration doesn't ban their favourite chips. They're just building.
The strategic implications are enormous. The US chip export ban was supposed to slow Chinese AI development by 2-3 years. Instead, it forced Chinese companies to build an independent supply chain that now competes at the frontier. The ban didn't create a gap — it created a competitor ecosystem with zero dependency on American technology.
For any business building on AI, this means the supply of competitive models just doubled. The pricing pressure from Chinese open-source models is real. And the geopolitical risk of depending entirely on US-based AI providers just got a lot more tangible.
Related: Anthropic's $30 Billion Valuation Isn't About AI. It's About Infrastructure.
Related: Your Favourite AI Startup Will Be Dead in 18 Months
GLM-5's quiet dominance should be a bloody wake-up call. While Western AI labs are playing theatrical politics and posting cryptic tweets about "the coming storm," China is actually shipping products that work.
No drama. No manifestos about artificial general intelligence. No reality distortion fields or messianic founder worship. Just: here's a model that's better than yours, it's open source, and oh, by the way, we built it with chips you tried to ban us from using.
The revolution is happening. It's just not happening where you're looking. And that 30% stock surge? That's just the market finally waking up to what should have been obvious months ago: the future of AI isn't going to be decided in San Francisco conference rooms. It's being built in Shenzhen labs, one benchmark at a time.