Discussion (35 Comments)
There is no secret sauce the US labs have that the Chinese ones don't, or won't have soon enough. Deepseek 4 and Kimi 2.5 are not quite Claude 4.5/GPT5.5, but there's no fundamental principle missing - they are strong evidence that there's no real advantage the "frontier" labs possess that isn't related to scale, which they will gain in time (if they even need to). The RL post-training techniques that work are widely known and easily copied. All Deepseek is really lacking is data, which they're getting - and the harder Anthropic/the USG makes it to access Claude in China, the more of that precious data they'll get!
I used to sort of entertain the "fast take-off breakaway" scenario as being plausible but not really anymore. The only genuine moat the frontier labs have is their product take-up, which isn't nothing, far from it, but it's not some unbreakable technological wall. Too late guys - it might have been too late for quite some time.
However, they just don't perform that well in practice. That's the real issue. You can actually see it when you move away from open benchmarks. DeepSeek 3.2 is 4% on Arc-AGI 2 [1], while GPT 5.2 high is 52% and GPT 5.5 pro high is 84.6%. That's the real reason why nobody is using these models for serious work. It's incredibly frustrating.
In addition, I already feel the pain myself on the model restriction. I'll ask my Codex 5.5 agent to crawl a website - BOOM, cybersecurity warning on my account. I'll ask it to fix SSH on my local network - another warning. I'm worried about the day my account gets randomly banned and I can't create a new one. OpenAI already asks you to perform full identification in order to eliminate these warnings - probably exactly for that - so that if they ban you, it's permanent.
[1] https://arcprize.org/leaderboard
The Chinese models right now are in a weird spot. Compared to the frontiers, both their pre and post training is woeful - tiny, resource constrained in every dimension including human, slow. I'd compare it to OpenAI 5 years ago except I think even then OpenAI had way more!
But they "cheat" quite a lot in distillation and very benchmark-focussed RL and that's where you get this superficial quality in the leaderboards that doesn't match up when you go off-script. Arc is a great example in that it really belies an "inferior soul" at the heart of it all.
What gives me great hope though is that those same scaling laws that Altman and others have been hyping forever will absolutely kick in for the Chinese labs just as they did for the US ones, and I don't think anything can stop that process now. So they will catch up. It won't be tomorrow, but it's not going to be 10 years either. 3-5 would be my reasonably educated guess.
And the final risk, that China itself might try to restrict availability of the tsunami of GPU or other AI hardware it will inevitably produce - well, I just can't really imagine a country that has been configuring itself for the last 40 years as a single purpose export machine deciding that actually, no, it doesn't want to export something.
About the model restrictions - absolutely. I've been trying to do security research on my own software and the frontier models immediately get suspicious. I've been playing with the local ones much more this year basically because of this. They have deficiencies, for sure - they feel very "hollow" compared to the major labs. But I've talked to a lot of people, and the consensus is pretty clear - just a matter of time.
Likewise Qwen 3.6 absolutely blows me away, and that’s a 35b 6-bit model on a local 5090. Same thing here - I'm trying to find enough stuff to keep it busy 24/7.
I can still find some niches for Opus 4.7 but being able to attack problems and not worry about consumption is a game changer.
I will say, as pointed out by others, DeepSeek and other Chinese providers still lack a bit in the tooling that Claude has, but they'll get there.
This is not just about mainland China though. The current US government is extremely selfish and self-centered. Other countries really need to consider their own long-term situation here.
Also, he concedes Mythos-level capabilities will be cheap next year, then handwaves it with "you need the best AI, not good-enough AI." For most use cases, frontier minus six months is fine.
Open models that are competitive with frontier will be used on shared hosts.
And with the extreme chip shortages for the next two years, there's little appetite for even bigger models anyway.
Barring a breakthrough in scaling, the only direction the models can really go is smaller. Which will inevitably mean better-performing local models for the same chip budget.
edit: I'm specifically referring to the "5.5 Pro" model, not regular 5.5 with Pro tier subscription. Claude has no model available that's comparable to 5.5 Pro either.
The thing is, the vast majority of code tasks aren’t a venture into the unknown. We as an industry for the most part build CRUD interfaces and dashboards. That can be achieved, with supervision, with frontier open-weights models quite well.
There's also an additional economic concern that rarely gets mentioned: because no one has cracked continual learning, keeping models up-to-date and filling in gaps in performance requires retraining on an ever growing dataset. Granted, you aren't starting from scratch each time, but the scaling required just to stay relevant looks daunting.
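A back-of-envelope sketch of why that scaling looks daunting, using the common ~6·N·D training-FLOPs rule of thumb. All the numbers here (model size, corpus size, yearly data growth) are made up for illustration - the point is only that without continual learning, each refresh pays for the whole accumulated corpus, not just the new data:

```python
# Hypothetical back-of-envelope: retraining cost grows with the full
# accumulated dataset when continual learning isn't available.

def retrain_cost(tokens: float, params: float) -> float:
    """Approximate training FLOPs via the common ~6 * N * D rule of thumb."""
    return 6 * params * tokens

params = 1e12            # assumed 1T-parameter model
base_tokens = 15e12      # assumed initial corpus: 15T tokens
growth_per_year = 5e12   # assumed 5T new tokens of usable data per year

for year in range(4):
    tokens = base_tokens + year * growth_per_year
    print(f"year {year}: retrain on {tokens / 1e12:.0f}T tokens "
          f"~= {retrain_cost(tokens, params):.2e} FLOPs")
```

Even with these toy numbers, the cost of each full refresh keeps climbing while only a fraction of the corpus is actually new - which is the "scaling just to stay relevant" problem the comment describes.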
I don't know where any of this goes on a societal level, but I've believed since the release of deepseek r1 that access to frontier models would eventually be locked up behind contracts, since the only moats protecting the models themselves are purely artificial. It remains to be seen how effective China is at pushing the envelope, and whether they are interested in providing unfettered access. And on top of that, it remains to be seen how well these models actually turn out to scale in the long run.
> “The two AI superpowers are going to start talking. We’re going to set up a protocol in terms of how do we go forward with best practices for AI to make sure nonstate actors don’t get a hold of these models,” Bessent told Joe Kernen on Thursday, on the sidelines of President Donald Trump’s two-day meeting in Beijing with Chinese President Xi Jinping.
https://www.cnbc.com/2026/05/14/us-china-ai-rules-bessent-us...
OpenAI is already talking openly about gated access to their models (see this OpenAI podcast episode for example: https://openai.com/podcast/#oai-podcast-episode-16)
Separately there's also a very active effort to stop open weight releases.
It's dangerous to those who think access to frontier intelligence is important.
Would that make those countries more attractive to young people perhaps? As a place to grow and learn skills where the opportunities are non-existent in the AI Sovereign countries.
Glad to see others catching on.
The Trump Administration telling the very neo-fascist oligarchs who bought him an election and bought him a ballroom to play nice with their toys? At the expense of rampant capitalism? Lol.
He already showed us the limit of his comprehension of the topic when he made EO 14179 limiting states from regulating AI.
Trump doesn't swing for perfect pitches. He is a madman, a lunatic, and a true moron. Do not give this man any credit. I would be shocked if he could tell you the time on an analog clock.
Gotta love how it's ok to question the results of the latest presidential election, but not the earlier one, which is supposed to be sacrosanct - yet it was again ok to question the one in 2016.
Somehow Trump is owned by capitalists, but also starts trade wars (and actual wars) that thwart their agenda. I never understood how people come up with simplistic, reductionist views full of inconsistencies. Wouldn't the evil capitalists and neo-fascists be better served by a predictable, controllable president?