
Discussion (238 Comments)

alex7oabout 6 hours ago
Ok, I find it funny that people compare models and are like, Opus 4.7 is SOTA and is much better etc, but I have used GLM 5.1 (I assume this comes from them training on both Opus and Codex) for things Opus couldn't do, and have seen it write better code. I haven't tried the Qwen Max series, but I have seen the local 122B model do smarter, more correct things based on docs than Opus. So yes, benchmarks are one thing, but reality is what the models actually do, and you should learn the real strengths that models possess. It is a tool in the end; you shouldn't be saying a hammer is better than a wrench even though both would be able to drive a nail into a piece of wood.
jxmesthabout 4 hours ago
The only reason I'm stuck with Claude and ChatGPT is their tool calling. They do have some pretty useful features like skills etc. I've tried using Qwen and DeepSeek but they can't even output documents. How are you guys handling documents and Excel files with these tools? I'd love to switch tbh.
embedding-shapeabout 3 hours ago
> I've tried using qwen and deepseek but they can't even output documents

What agent harness did you use? Usually, "write_file", "shell_exec" or similar are two of the first tools you add to an agent harness, after read_file/list_files. If it doesn't have those tools, I'm unsure if you could even call it an agent harness in the first place.
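
(For the curious, a minimal sketch of what those four tools might look like in a homegrown harness. The function names mirror the comment above; the dispatch shape is an assumption, not any particular product's API.)

```python
# Hypothetical minimal tool set for an agent harness; names follow the
# comment above, not any specific product's API.
import subprocess
from pathlib import Path

def read_file(path: str) -> str:
    return Path(path).read_text()

def list_files(path: str = ".") -> str:
    return "\n".join(sorted(str(p) for p in Path(path).iterdir()))

def write_file(path: str, content: str) -> str:
    Path(path).write_text(content)
    return f"wrote {len(content)} chars to {path}"

def shell_exec(cmd: str) -> str:
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

TOOLS = {f.__name__: f for f in (read_file, list_files, write_file, shell_exec)}

def dispatch(name: str, arguments: dict) -> str:
    # The harness routes each model-emitted tool call
    # {"name": ..., "arguments": {...}} through here.
    return str(TOOLS[name](**arguments))
```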

jxmesthabout 3 hours ago
Sorry for the confusion, I was actually talking about their web-based chat. Since most of my work is governance and docs, I just use their web chats, and they just refuse to output proper documents like Claude or ChatGPT do.
ecocentrikabout 3 hours ago
When was the last time you used Qwen models? Their 3.5 and 3.6 models are excellent with tool calling.
jxmesthabout 3 hours ago
I gave it a try a few weeks ago tbh, I'll give it another shot tho. I mainly use their Web chats since that's easier to use and previously, qwen, deepseek, kimi, all were unable to output proper docx files or use skills.
sscaryterryabout 3 hours ago
You can use GLM-5.1 with Claude Code directly. I use ccs, with GLM-5.1 set up as the plan model, but it goes via an API key.
jwitthuhnabout 3 hours ago
I've been using qwen-code (the software, not to be confused with Qwen Code the service or Qwen Coder the model) which is a fork of gemini-cli and the tool use with Qwen models at least has been great.
estimator7292about 2 hours ago
You can use both codex and Claude CLI with local models. I used codex with Gemma4 and it did pretty well. I did get one weird session where the model got confused and couldn't decide which tools actually existed in its inventory, but usually it could use tools just fine.
ezekiel68about 3 hours ago
Qwen3-Coder produced much better Rust code (code that used Rust's x86-64 vector extensions) a few months ago than Claude Opus or Google Gemini could. I was calling it from harnesses such as the Zed editor and the trae CLI.

I was very impressed.

justincormackabout 2 hours ago
Codex is pretty good at Rust with x86 and arm intrinsics too, it replaced a bunch of hand written C/assembly code I was using. I will try Qwen and Kimi on this kind of task too.
ternaryoperatorabout 5 hours ago
The models test roughly equal on benchmarks, with generally small differences in their scores. So, it’s reasonable to choose the model based on other criteria. In my case, I’d switch to any vendor that had a decent plugin for JetBrains.
sirnicolazabout 3 hours ago
Consider that SWE benchmarking is mainly done with Python code. That tells you something.
Moosdijkabout 4 hours ago
I wonder why glm is viewed so positively.

Every time I try to build something with it, the output is worse than other models I use (Gemini, Claude), it takes longer to reach an answer and plenty of times it gets stuck in a loop.

pkulakabout 3 hours ago
I've been running Opus and GLM side by side for a couple of weeks now, and I've been impressed with GLM. I will absolutely agree that it's slow, but if you let it cook, it can be really impressive and absolutely on the level of Opus. Keep in mind, I don't really use AI to build entire services; I'm mostly using it to make small changes or help me find bugs, so the slowness doesn't bother me. Maybe if I set it to make a whole web app and it took 2 days, that would be different.

The big kicker for GLM for me is I can use it in Pi, or whatever harness I like. Even if it was _slightly_ below Opus, and even though it's slower, I prefer it. Maybe Mythos will change everything, but who knows.

tasukiabout 2 hours ago
> The big kicker for GLM for me is I can use it in Pi, or whatever harness I like.

Yes, but... isn't the same true for Opus and all the other models too?

Mashimoabout 4 hours ago
I have used GLM 4.7, 5 and 5.1 for about three months now via the OpenCode harness, and I don't remember it ever being stuck in a loop.

You have to keep it below ~100,000 tokens, else it gets funny in the head.

I only use it for hobby projects though. I paid 3 EUR per month; that is no longer available though :( Not sure what I will choose at the end of the month. Maybe OpenCode Go.
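
(For what it's worth, a rough sketch of the context budgeting the ~100k-token limit above implies. The ~4-characters-per-token heuristic is an assumption; a real harness would use the model's actual tokenizer.)

```python
# Keep a chat history under a token budget by dropping the oldest messages.
# Assumes the rough ~4-chars-per-token heuristic, not a real tokenizer.
def trim_history(messages: list[dict], budget_tokens: int = 100_000) -> list[dict]:
    def approx_tokens(msg: dict) -> int:
        return max(1, len(msg["content"]) // 4)

    kept, used = [], 0
    # Walk from newest to oldest, keeping whatever fits, then restore order.
    for msg in reversed(messages):
        cost = approx_tokens(msg)
        if used + cost > budget_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```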

spaceman_2020about 2 hours ago
I think it offers a very good tradeoff of cost vs competency

4.7 is better, but it's also wildly expensive.

Akira1364about 4 hours ago
IDK about GLM, but GPT 5.4 Extra High has been great when I've used it in the VS Code Copilot extension. I see no actual reason why Opus should consume 3x more quota than it, the way it does.
slopinthebagabout 3 hours ago
You're probably just holding it wrong.
cornedorabout 5 hours ago
I tried GLM and Qwen last week for a day. Some issues it could solve, while some tasks that looked relatively easy on the surface it just could not solve after a few tries; Opus one-shotted one of those this morning with the same prompt. It's a single example of course, but I really wanted to give it a fair try. All it had to do was create a sortable list in Magento admin. On the other hand, GLM did one-shot a PhpStorm plugin.
dev_l1x_beabout 4 hours ago
Do you use Opus through the API or with subscription? Did you use OpenCode or Code?
cornedorabout 2 hours ago
Opus through Claude Code, the Chinese models through OpenCode Go, which seems like a great package to test them out with.
odie5533about 2 hours ago
If you showed me code from GLM 5.1, Opus 4.6, and Kimi K2.6, my ranking for best model would be highly random.
dev_l1x_beabout 4 hours ago
Benchmarking is grossly misleading. Claude's subscription with Code would not score this high on the benchmarks because of how they lobotomized agentic coding.
solomatovabout 3 hours ago
>but I have seen the local 122b model do smarter more correct things based on docs than opus

Could you please share more about this?

alex7o6 minutes ago
Maybe a bit misleading. I have used it in two places.

One is for local OpenCode coding and config work, the other is agent-browser use, and for both it did better than Opus 4.6 for the thing I was testing at the time. The problem with Opus, at the moment I tried it, was overthinking and sometimes moving itself in the wrong direction (not that Qwen doesn't overthink sometimes). However, sometimes less is more; maybe turning thinking down on Opus would have helped me. Some people say it is better to turn it off entirely when you start to implement code, as it already knows what it needs to do and doesn't need more distraction.

Another example is my Ghostty config: I learned from Qwen that it has theme support, whereas Opus would always just put the theme in the main file.

FlyingSnakeabout 5 hours ago
I tried GLM 5.1 last week after reading about it here. It was slow as molasses for routine tasks and I had to switch back to Claude. It also burned through its 5-hour credit limit faster than Claude.
bensyversonabout 5 hours ago
If you view the "thinking" traces you can see why; it will go back and forth on potential solutions, writing full implementations in the thinking block then debating them, constantly circling back to points it raised earlier, and starting every other paragraph with "Actually…" or "But wait!"
nothinkjustaiabout 4 hours ago
I see this with Opus too.
FlyingSnakeabout 4 hours ago
> "Actually…" or "But wait!"

You’re absolutely right!

Jokes apart, I did notice GLM doing these back and forth loops.

nothinkjustaiabout 4 hours ago
Z.ai’s cloud offering is poor, try it with a different provider.
OtomotOabout 5 hours ago
Many people have turned away from religion (which I can get behind), but have never removed the dogmatic thinking that lay at its root.

As with so many things these days: it's a cult.

I've used Claude for many months now. Since February I have seen a stark decline in the work I do with it.

I've also tried to use it for GPU programming, which it absolutely sucks at, with Sonnet, Opus 4.5 and 4.6.

But if you share that sentiment, it's always "You're just holding it wrong" or "The next model will surely solve this".

For me it's just a tool, so I shrug.

balls187about 5 hours ago
> I've used Claude for many months now. Since February I see a stark decline in the work I do with it.

I find myself repeating the following pattern: I use an AI model to assist me with work, and after some time, I notice the quality doesn't justify the time investment. I decide to try a similar task with another provider. I try a few more tests, then decide to switch over for full time work, and it feels like it's awesome and doing a good job. A few months later, it feels like the model got worse.

runarbergabout 5 hours ago
I wonder about this. I see two obvious possibilities (if we ignore bias):

1. The models are purposefully nerfed, before the release of the next model, similar to how Apple allegedly nerfed their older phones when the next model was out.

2. You are relying more and more on the models and using your talent less and less. What you are observing is the ratio of your work vs. the model's leaning more and more toward the model's. When a new model is released, it produces better-quality code than before, so the work improves with it, but your talent keeps deteriorating at a constant rate.

e12eabout 4 hours ago
I think it might have to do with how models work, and fundamental limits with them (yes, they're stochastic parrots, yes they confabulate).

Newer (past two years?) models have improved "in detail" - or as pragmatic tools - but they still don't deserve the anthropomorphism we subject them to because they appear to communicate like us (and therefore appear to think and reason, like us).

But the "holes" are painted over in contemporary models - via training, system prompts and various clever (useful!) techniques.

But I think this leads us to have great difficulty spotting the weak spots in a new, or slightly different model - but as we get to know each particular tool - each model - we get better at spotting the holes on that model.

Maybe it's poorly chosen variable names. A tendency to write plausible-looking, plausibly named e2e tests that turn out not to quite test what they appear to test at first glance. Maybe there's missing locking of resources, or missing transactions, in sequential code that appears sound but ends up storing invalid data when one or several steps fail...
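
(A concrete toy version of that last failure mode, with an invented two-account schema: sequential writes that look sound but leave invalid data if a later step fails, versus the same writes wrapped in one transaction.)

```python
# Toy illustration of the missing-transaction bug described above.
# Schema and names are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 0)])

def transfer_unsafe(src: int, dst: int, amount: int) -> None:
    conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?", (amount, src))
    # If this second statement fails, the debit above has already committed:
    conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?", (amount, dst))

def transfer_safe(src: int, dst: int, amount: int) -> None:
    conn.execute("BEGIN")
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?", (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?", (amount, dst))
        conn.execute("COMMIT")  # both updates land together...
    except Exception:
        conn.execute("ROLLBACK")  # ...or not at all
        raise
```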

In happy cases, current LLMs function like well-intentioned junior coders enthusiastically delivering features and fixing bugs.

But in the other cases, they are like pathologically lying sociopaths telling you anything you want to hear, just so you keep paying them money.

When you catch them lying, it feels a bit like a betrayal. But the parrot is just tapping the bell, so you'll keep feeding it peanuts.

taurathabout 5 hours ago
I agree. The problem is it's hard to see how people who say they're using it effectively are actually using it, what they're outputting, or to make any sort of comparison on quality, maintainability or coherence.

In the same way, it’s hard to see how people who say they’re struggling are actually using it.

There’s truth somewhere in between “it’s the answer to everything” and “skill issue”. We know it’s overhyped. We know that it’s still useful to some extent, in many domains.

balls187about 4 hours ago
Well summarized.

We're also seeing that the people up top are using this to cull the herd.

psychoslaveabout 5 hours ago
What is it that is dogma-free? If one goes hardcore Pyrrhonist, doubting even that there is anything currently doubting as this statement is processed, that is perfectly sound.

At some point there is a need to have faith in some stable-enough ground to be able to walk on.

Wolfbetaabout 4 hours ago
Who controls that need for you?
ecshaferabout 5 hours ago
All people think dogmatically. The only difference is what the ontological commitments and metaphysical foundations are. Take out God and people will fit politics, sports teams, tools, whatever in there. It's inescapable.
smallmancontrovabout 4 hours ago
All people think dogmatically, but religion does not prevent people from acting dogmatically in politics, sports, etc. It just doesn't. It never did.

Under normal circumstances I'd consider this a nit and decline to pick it, but the number of evangelists out there arguing the equivalent of "cure your alcohol addiction with crystal meth!" is too damn high.

bensyversonabout 5 hours ago
Allow me to introduce you to Buddhism
OtomotOabout 5 hours ago
Dogmatism is a spectrum and for too many people it's on the animal side of the scale.
taneqabout 5 hours ago
I wonder to what degree it depends on how easy you find coding in general. I find for the early steps genAI is great to get the ball rolling, but rapidly it becomes more work to explain what it did wrong and how to fix it (and repeat until it does so) than to just fix the code myself.
ninjahawk1about 6 hours ago
The way to develop in this space seems to be to give away free stuff, get your name out there, then make everything proprietary. I hope they still continue releasing open weights. The day no one releases open weights is a sad day for humanity. Normal people won’t own their own compute if that ever happens.
culiabout 5 hours ago
I think that's an overgeneralization. We've seen all the American models be closed and proprietary from the start. Meanwhile the non-American (especially the Chinese ones) have been open since the start. In fact they often go the opposite direction. Many Chinese models started off proprietary and then were later opened up (like many of the larger Qwen models)
robot_jesusabout 5 hours ago
> We've seen all the American models be closed and proprietary from the start

What about Gemma and Llama and gpt-oss, not to mention lots of smaller/specialized models from Nvidia and others?

I would never argue that China isn't ahead in the open weights game, of course, but it's not like it's "all" American models by any stretch.

walthamstowabout 5 hours ago
gpt-oss is good but I haven't heard anything about an update. It seems like one-and-done, to quiet the people complaining that OpenAI wasn't open.
embedding-shapeabout 5 hours ago
> We've seen all the American models be closed and proprietary from the start.

Most*.

OpenAI, contrary to popular belief, actually used to believe in open research and (more or less) open models. GPT1 and GPT2 both were model+code releases (although GPT2 was a "staged" release), GPT3 ended up API-only.

zozbot234about 5 hours ago
OpenAI has released their GPT-OSS series more recently.
culiabout 5 hours ago
That's fair but those days seem so long gone now.

Also the Chinese models aren't following a typical American SaaS playbook which relies on free/cheap proprietary software for early growth. They are not just publishing their weights but also their code and often even publishing papers in Open Access journals to explicitly highlight what methods and advancements were made to accomplish their results

visargaabout 6 hours ago
I think it is in the interest of chip makers to make sure we all get local models
qalmakkaabout 6 hours ago
I think they're in a win-win situation. Big AI companies would love to see local computing die in favour of the cloud, because they are well aware that the moment an open model appears that can run on non-ludicrous consumer hardware, they're screwed. In this situation Nvidia, AMD and the like would be the only ones profiting from it, even though I'm not convinced they'd prefer going back to fighting for B2C while B2B is so much simpler for them.
zozbot234about 6 hours ago
If you want to run AI models at scale and with reasonably quick response, there's not many alternatives to datacenter hardware. Consumer hardware is great for repurposing existing "free" compute (including gaming PCs, pro workstations etc. at the higher end) and for basic insurance against rug pulls from the big AI vendors, but increased scale will probably still bring very real benefits.
BobbyJoabout 6 hours ago
At a consistent amount of usage, datacenters are at least an order of magnitude more hardware efficient. I'm sure Nvidia and AMD would be fine fighting for B2C if it meant volume would be 10+x.

Now, given they can't satisfy current volume, they are forced to settle for just having crazy margins.

zozbot234about 6 hours ago
Definitely. Many big hardware firms are directly supporting HuggingFace for this very reason.
ninjahawk1about 6 hours ago
True, chip companies have the opposite mindset; Nvidia is making their own open-weight models, I believe.
elorantabout 5 hours ago
This is obviously a strategic move at a national level. Keep publishing competing free models to erode the moat western companies could have with their proprietary models. As long as the narrative serves China there will be no turn to proprietary models.
baqabout 6 hours ago
Always has been, it’s literally saas; the slight difference is that the lowest tier subscriptions at the frontier labs are basically free trials nowadays, too
Zavoraabout 6 hours ago
It's the new freeware model!
CamperBob2about 6 hours ago
I'm a little more optimistic than that. I suspect that the open-weight models we already have are going to be enough to support incremental development of new ones, using reasonably-accessible levels of compute.

The idea that every new foundation model needs to be pretrained from scratch, using warehouses of GPUs to crunch the same 50 terabytes of data from the same original dumps of Common Crawl and various Russian pirate sites, is hard to justify on an intuitive basis. I think the hard work has already been done. We just don't know how to leverage it properly yet.

theszabout 5 hours ago
Change layer size and you have to retrain. Change number of layers and you have to retrain. Change tokenization and you have to retrain.
dTalabout 5 hours ago
None of that is true, at least in theory. You can trivially change layer size simply by adding extra columns initialized as 0, effectively embedding your smaller network in a larger network. You can add layers in a similar way, and in fact LLMs are surprisingly robust to having layers added and removed - you can sometimes actually improve performance simply by duplicating some middle layers[0]. Tokenization is probably the hardest but all the layers between the first and last just encode embeddings; it's probably not impossible to retrain those while preserving the middle parts.

[0] https://news.ycombinator.com/item?id=47431671 https://news.ycombinator.com/item?id=47322887
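
(A toy numpy sketch of the zero-initialized-columns trick described above, assuming a plain linear layer: the widened matrix computes exactly the same outputs as the original on its first d_out coordinates.)

```python
# Embed a trained d_in x d_out linear layer inside a wider one without
# changing its output: the new columns start at zero and contribute nothing.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, d_wide = 4, 3, 6

W_small = rng.normal(size=(d_in, d_out))   # "trained" small layer
x = rng.normal(size=d_in)                  # an arbitrary input

W_wide = np.zeros((d_in, d_wide))
W_wide[:, :d_out] = W_small                # copy small weights, pad with zeros

y_small = x @ W_small
y_wide = (x @ W_wide)[:d_out]              # first d_out outputs are unchanged
assert np.allclose(y_small, y_wide)        # larger net embeds the smaller one exactly
```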

altruiosabout 5 hours ago
Hopefully we will find a way to make it so that minor changes don't require a full retrain. Training how to train, as a concept, comes to mind.
CamperBob2about 4 hours ago
And yet the KL divergence after changing all that stuff remains remarkably similar between different models, regardless of the specific hyperparameters and block diagrams employed at pretraining time. Some choices are better, some worse, but they all succeed at the game of next-token prediction to a similar extent.

To me, that suggests that transformer pretraining creates some underlying structure or geometry that hasn't yet been fully appreciated, and that may be more reusable than people think.

Ultimately, I also doubt that the model weights are going to turn out to be all that important. Not compared to the toolchains as a whole.

pduggishettiabout 5 hours ago
I do not think it's Common Crawl anymore; it's Common Crawl++, using paid human experts to generate and verify new content, whether it's code or research.

I believe the US is building this off the cost difference with other countries, using companies like Scale, Outlier etc., while China has the internal population to do this.

testbjjlabout 6 hours ago
Any reason for them to do this other than altruism? I don’t think this can be regulated.
Rohansiabout 6 hours ago
Bake ads into them.
WarmWashabout 6 hours ago
The Chinese state wants the world using their models.

People think that Chinese AI labs are just super cool bros who love sharing for free.

They don't understand it's just a state-sponsored venture meant to further entrench China in global supply and logistics. China's VCs are Chinese banks and a sprinkle of "private" money. Private in quotes because technically it still belongs to the state anyway.

China doesn't have companies and government like the US. It just has government, and a thin veil of "company" that readily fools westerners.

subw00fabout 6 hours ago
As opposed to the US, which just has companies and a thin veil of “government”.
culiabout 5 hours ago
Also, many of these Chinese companies aren't just opening their weights. They are open-sourcing their code AND publishing detailed research papers alongside it to reveal how they accomplished what they accomplished.

That's very different from the American SaaS model, which relies on free but proprietary software for early growth.

zozbot234about 6 hours ago
I'm not sure how local AI models are meant to "entrench China in global supply and logistics". The two areas have nothing to do with one another. You can easily run a Chinese open model on all-American hardware.
WarmWashabout 5 hours ago
They are building a pipeline, and the goal is to get people in the door.

If you forever stand at the entrance eating the free samples, that's fine, they don't care. Other people are going through the door and you are still consuming what they feed you. Doesn't mean it's going to be bad or evil, but they are staking their territory of control.

jillesvangurpabout 5 hours ago
Like with nuclear technology, it's not healthy for only one country to dominate AI. The cat is already out of the bag and many countries now have the ability to train and run models. Silicon Valley has bootstrapped this space. But it should be noted that they are using AI talent from all over the world and it was sort of inevitable that this technology would get around. Lots of Chinese, Indian, Russian, and Europeans are involved.

As for what comes next, it's probably going to be a bit of a race for who can do the most useful and valuable things the cheapest. If OpenAI and Anthropic don't make it, the technology will survive them. If they do, they'll be competing on quality and cost.

As for state sponsorship, a lot of things are state-sponsored, including in the US. Silicon Valley has a rich history rooted in massive government funding programs; there's a great documentary on this, The Secret History of Silicon Valley. Not to mention that all the "cheap" gas currently powering data centers comes on the back of a long history of public funding being channeled into the oil and gas industry.

WarmWashabout 5 hours ago
>As for state sponsorship, a lot of things are state sponsored.

You can make any comparison you want if you use adjectives rather than values. I can say that cars use a massive amount of water (all those radiators!) to try and downplay agricultural water usage. But it's blatantly disingenuous.

SV is overwhelmingly private (actual, constitutionally private) money. To the point that you should disregard people saying otherwise, just like you would people saying cars use massive amounts of water.

OtomotOabout 5 hours ago
So an OPEN model that I can run on my own fucking hardware will entrench China in global supply and logistics how?

On the contrary: how will the closed, proprietary models from Anthropic, "Open"AI and co. lead us all to freedom? Freedom of what exactly? Freedom of my money?

At some point this "anti-communism" bullshit propaganda has to stop. And that moment was decades ago!

Zetaphorabout 5 hours ago
Anything that isn't explicitly to the benefit of US interests must be against them /s
grttswwabout 6 hours ago
So what?

I still prefer that over US total dominance.

Let them fight it out.

joquarkyabout 5 hours ago
Yeah, a lot of people are still living within the paradigm of tribalism: my team good, other team bad.

But the events of the past decade or so have clearly demonstrated that there are no "good" actors.

I personally couldn't care less who wins in the China vs US AI competition, both sides have a long list of pros and cons.

spwa4about 5 hours ago
I'd get a bit informed about what exactly Chinese dominance entails. Ask a few Uyghurs, Cantonese Hong Kongers, or even Tibetans.

Then decide ...

darkwaterabout 6 hours ago
Well, isn't this what the US and really any other power in the world has always done, since forever?
ai_fry_ur_brainabout 5 hours ago
Why is it sad? These things are useless all around, along with the people who overuse them.

It would be a great day for humanity if people stopped glazing text autocomplete as revolutionary.

seanw265about 2 hours ago
Kimi K2.6 also released today. I think it's fair to compare the two models.

Qwen appears to be much more expensive:

- Qwen: $1.3 in / $7.8 out

- Kimi: $0.95 in / $4 out

--

The announcement posts only share two overlapping benchmark results. Qwen appears to score slightly lower on SWE-Bench Pro and Terminal-Bench 2.0.

Qwen:

- Terminal-Bench 2.0: 65.4

- SWE-Bench Pro: 57.3

Kimi:

- Terminal-Bench 2.0: 66.8

- SWE-Bench Pro: 58.6

--

Different models have different strong suits, and benchmarks don't cover everything. But from a numbers perspective, Kimi looks much more appealing.
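
(A quick back-of-envelope comparison using the prices quoted above, assuming, as is conventional, that they are USD per million tokens; the session token counts are made up for illustration.)

```python
# Compare per-session cost from the listed prices ($/M input, $/M output).
PRICES = {"qwen": (1.3, 7.8), "kimi": (0.95, 4.0)}

def session_cost(model: str, in_tokens: int, out_tokens: int) -> float:
    p_in, p_out = PRICES[model]
    return in_tokens / 1e6 * p_in + out_tokens / 1e6 * p_out

# e.g. a heavy agentic session: 5M tokens in, 500k tokens out
for m in PRICES:
    print(m, round(session_cost(m, 5_000_000, 500_000), 2))
# qwen -> 10.4, kimi -> 6.75
```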

mchusmaabout 1 hour ago
I think as the pricing has gone up on the Chinese models it has made them less appealing, and with the introduction of Gemma 4 not many are at the Pareto frontier (also in my experience, not just the stats): https://arena.ai/leaderboard/text/overall?viewBy=plot
fr3onabout 2 hours ago
The irony of this announcement is in the name: Max-Preview is proprietary, cloud-only. The Qwen models that actually matter — the ones running on real hardware people own — are the open weights series. I run the 32B and 72B variants locally on dual A4000s. The gap between those and the hosted Max is real, but it's shrinking with every release. The interesting question isn't how Max compares to Opus. It's how long until the open-weight tier makes the cloud tier irrelevant for most workloads.
sva_about 1 hour ago
bad bot
djyde29 minutes ago
I've been using GLM 5.1 for pretty much all my coding work; Claude is too expensive for me. Haven't tried Qwen yet though. China's coding models are now very cost-effective.
0xbadcafebeeabout 7 hours ago
Everybody's out here chasing SOTA, meanwhile I'm getting all my coding done with MiniMax M2.5 in multiple parallel sessions for $10/month and never running into limits.
Aurornisabout 6 hours ago
For serious work, the difference between spending $10/month and $100/month is not even worth considering for most professional developers. There are exceptions, like students and people in very low-income countries, but I'm always confused by developers in careers where six-figure salaries are normal going cheap on tools.

I find even the SOTA models to be far away from trustworthy for anything beyond throwaway tasks. Supervising a less-than-SOTA model to save $10 to $100 per month is not attractive to me in the least.

I have been experimenting with self hosted models for smaller throwaway tasks a lot. It’s fun, but I’m not going to waste my time with it for the real work.

zozbot234about 6 hours ago
You need to supervise the model anyway, because you want that code to be long-term maintainable and defect free, and AI is nowhere near strong enough to guarantee that anytime soon. Using the latest Opus for literally everything is just a huge waste of effort.
senordevnycabout 4 hours ago
Yes, but I find supervision much easier and faster with a strong model. It makes fewer dumb mistakes that I have to catch and correct, and it’ll follow my instructions more reliably.
dandakaabout 6 hours ago
A waste of effort... of Opus? If "Opus effort" is cheaper than the dev hours you spend managing a dumber (if more cost-effective) model yourself, what is the point?
slopinthebagabout 2 hours ago
$100/month will get you rate-limited too much to rely on with the Claude plans. People still report getting rate-limited on the $200/month plan.

Also not everyone wants to use Claude Code, so if they're paying API pricing it's more likely thousands of dollars a month. If you can get the same results by spending a fraction of that, why wouldn't you?

AnonymousPlanetabout 5 hours ago
For actually serious work, it's a stark difference whether your proprietary and security-relevant code is sent abroad to a foreign, possibly future-hostile country, or to some data center around the corner. It doesn't even need to be defence-related.
flatlineabout 5 hours ago
AFAIK all these companies have SOTA or near-SOTA models available under enterprise licenses. AI companies are not interested in your secret sauce, they are trying to capture the SDLC wholesale.
chatmastaabout 4 hours ago
Who are you paying $10/month? OpenRouter?
xutopiaabout 3 hours ago
How do you use this? Do you use opencode or another frontend?
jdw64about 5 hours ago
https://www.alibabacloud.com/help/en/model-studio/context-ca...

I've also been testing models like Opus, Codex, and Qwen, and Qwen is strong in many coding tasks. However, my main concern is how it behaves in long-running sessions.

While Qwen advertises large context windows, in practice the effectiveness of long-context usage seems to depend heavily on its context caching behavior. According to the official documentation, Qwen provides both implicit and explicit context caching, but these come with constraints such as short TTL (around a few minutes), prefix-based matching, and minimum token thresholds.

Because of these constraints, especially in workflows like coding agents where context grows over time, cache reuse may not scale as effectively as expected. As a result, even though the per-token price looks low, the effective cost in long sessions can feel higher due to reduced cache hit rates and repeated computation.
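
(A sketch of why the hit rate dominates here: with prefix caching, every turn re-sends the whole growing context, and only cached prefix tokens get the discounted rate. All numbers below are illustrative assumptions, not Alibaba's actual pricing.)

```python
# Effective input cost of a long agent session under prefix caching.
# price_in is $/M input tokens; cached tokens cost cached_discount * price_in.
def session_cost(turns: int, tokens_per_turn: int, price_in: float,
                 cached_discount: float, hit_rate: float) -> float:
    total = 0.0
    context = 0
    for _ in range(turns):
        context += tokens_per_turn          # context grows every turn
        hit = context * hit_rate            # prefix tokens served from cache
        miss = context - hit                # recomputed at the full input price
        total += (miss * price_in + hit * price_in * cached_discount) / 1e6
    return total

# Same session, different effective hit rates (e.g. short TTLs causing misses):
print(session_cost(50, 2_000, 1.3, 0.2, 0.9))   # healthy cache
print(session_cost(50, 2_000, 1.3, 0.2, 0.3))   # expired/missed cache
```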

That said, in certain areas such as security-related tasks, I’ve personally had cases where Qwen performed better than Opus.

In my personal experience, Qwen tends to perform much better than Opus on shorter units like individual methods or functions. However, when looking at the overall coding experience, I found it works better as a function-level generator rather than as an autonomous, end-to-end coding assistant like Claude.

ezekiel68about 3 hours ago
TBF, it's certainly best practice, advised by the model providers themselves, to cut sessions short and start new ones.

Anthropic's "Best Practices" doc[0] for Claude Code states, "A clean session with a better prompt almost always outperforms a long session with accumulated corrections."

[0] https://code.claude.com/docs/en/best-practices

hedoraabout 2 hours ago
Unless stuff changed since I last checked, context caching just reduces cost / latency. It does not change what tokens are emitted.
jjiceabout 7 hours ago
With them comparing to Opus 4.5, I find it hard to take some of these in good faith. Opus 4.7 is new, so I don't expect that, but Opus 4.6 has been out for quite some time.
SwellJoeabout 5 hours ago
The thing is, Opus 4.5 is where the model reached Good Enough, at least for a wide variety of problems I use LLMs for. Before that, I almost never thought it was a more productive use of my time to use AI for development tasks, because it would always hallucinate something that would waste a bunch of my time. It just wasn't a good trade.

But, if for some reason everything stopped at Opus 4.5 level and we never got a better model (and 4.6/4.7 are better, if only marginally so and mostly expanding the kind of work it can do rather than making it better at making web apps), we could still do a lot of real work real fast with Opus 4.5, and software development would never go back to everyone handwriting most of the code.

A model as good as Opus 4.5 (or slightly better according to the mostly easily gamed benchmarks) at a 10th the price is probably a worthwhile proposition for a lot of people. $100 a month, or more, to get Opus 4.7 is well worth it for a western developer...the time the lower-end models waste is far more expensive than the cost of using the most expensive models. For the foreseeable future, I'll keep paying a premium for the models that waste less of my time and produce better results with less prodding.

But, also, it's wild how fast things move. Open models you can run on relatively modest hardware are competitive with frontier models of two years ago. I mean, you can run Qwen 3.6 MoE 35B A3B or the larger Gemma 4 models on normal hardware, like a beefy Macbook or a Strix Halo or any recentish 24GB/32GB GPU...not much more expensive than the average developer laptop of pre-AI times. And, it can write code. It can write decent prose (Qwen is maybe better at code, Gemma definitely has better prose), they can use tools, they have a big enough context window for real work. They aren't as good as Opus 4.5, yet.

Anyway, I use several models at this point, for security and code reviews, even if Claude Code with Opus is still obviously the best option for most software development tasks. I'll give Qwen a try, too. I like their small models, which punch well above their weight, I'll probably like the big one, too.

Someone1234about 7 hours ago
If money is no object, then nothing else is worth considering if it isn't Codex 5.4/Opus 4.7/SOTA. But for many, if not most, people, value vs. relative quality is a huge lever.

Even many people on a Claude subscription aren't choosing, or aren't able to choose, Opus 4.7 because of those cost/usage pressures, often using Sonnet or an older Opus because of the value-vs-quality curve.

dd8601fnabout 7 hours ago
Also us weirdos with local-model use cases. But your point stands.
sepliteabout 6 hours ago
Unfortunately, like with the release of Qwen3.6-Plus, this model also isn’t released for local use. From the linked article: “Qwen3.6-Max-Preview is the hosted proprietary model available via Alibaba Cloud Model Studio”
CamperBob2about 6 hours ago
Cost may or may not be a factor in my choice of model, but knowing the capabilities and knowing they will remain consistent, reliable, and available over time is always a dominant consideration. Lately, Anthropic in particular has not been great at that.
jpfromlondonabout 6 hours ago
Anecdotally, the quality of output isn't significantly different; the speed seems to be what you're really paying for, and since the alternative is free I'll stick to local.
paprikanotfoundabout 4 hours ago
What are the best models to run locally?
elAhmoabout 5 hours ago
Codex 5.4 is not out?
wahnfriedenabout 7 hours ago
Codex subscription is very generous at pro tiers
oidarabout 7 hours ago
Opus 4.6 performance has been so wildly inconsistent over the past couple of months, why waste the tokens?
vidarhabout 6 hours ago
When Sonnet 4.6 was released, I switched my default from Opus to Sonnet because it was about on par with Opus 4.5. While 4.6 and 4.7 are "better", the leap is too small for most of my tasks to need it, and so reducing cost is now a valid reason to stay at that level.

If even cheaper models start reaching that level (GLM 5.1 is also close enough that I'm using it a lot), that's a big deal, and a totally valid reason to compare against Opus 4.5.

jasonjmcgheeabout 6 hours ago
Wow I couldn't disagree more.

For me, Opus 4.5 and 4.6 feel so different compared to sonnet.

Maybe I'm lazy or something but sonnet is much worse in my experience at inferring intent correctly if I've left any ambiguity.

That effect is super compounding.

hirako2000about 7 hours ago
You compare with what's most comparable.

In any case, a benchmark provided by the vendor is always biased: they will pick the frameworks where their model fares well and omit the others.

Independent benchmarks are the go-to.

culiabout 5 hours ago
Opus 4.6 was released in February. It can take quite some time to run all these benchmarks properly
alex_youngabout 7 hours ago
Quite some time is a little over 2 months. I understand this is actually true right now, but it’s still a bit hard to accept.
cute_boiabout 5 hours ago
Comparing it with Opus 4.6 is difficult, since Anthropic may ban accounts and accuse users of state-sponsored hacking.
bluegattyabout 6 hours ago
I think it's only been like 10 weeks. I mean, that's forever in AI time, but not a long time in normal-people time.
wg0about 5 hours ago
Notice the pattern that Chinese providers are now:

1. Keeping models closed source.

2. Jacking up pricing. A lot. Sometimes up to 100% increase.

embedding-shapeabout 5 hours ago
Huh yeah, that's truly a unique trait these Chinese companies don't share with companies in other countries.
aerhardtabout 3 hours ago
No it is not, but they had a unique positioning around open-source and the parent commenter means that they are losing it.
nicceabout 3 hours ago
> Jacking up pricing. A lot. Sometimes up to 100% increase.

How is that different from the American providers?

Tepixabout 4 hours ago
Are you talking about GLM 5.1, DeepSeek V3.2 or Kimi K2.6 (released one hour ago!)?

Oh wait, it doesn't apply to those…

Kerrickabout 3 hours ago
Z.ai's Coding Plan with GLM 5.1 (Max) did more than double in price. It was $80 two weeks ago, and now it's $160.
slopinthebagabout 3 hours ago
Coding plans are subsidised crap anyway; the real price win is the API pricing, which is not subsidised.
dingocatabout 3 hours ago
Yet.
OtomotOabout 5 hours ago
US companies hate that trick?!
rc_kasabout 4 hours ago
you mean: invented
sunaookamiabout 3 hours ago
Yeah, Claude Haiku (don't remember the version) did it first; they claimed it was because "it's smarter now" (it's still dumb). Then OpenAI did it with GPT-5, and Google did the same with Gemini Flash, and now every new model version is at least twice as expensive as the one before it.
cute_boiabout 5 hours ago
Well, they can't subsidize forever. And, it is kinda expected?
gpmabout 4 hours ago
Considering the propaganda value in controlling the inputs to the machine that answers peoples questions, I rather expect them to be subsidized forever.
bigyabaiabout 4 hours ago
Consider the propaganda value of a centrally-controlled apparatus like the iPhone, and then reflect on the 100%+ profit margins that product has enjoyed for the past decade.
cnlwsuabout 4 hours ago
What, only Oracle can do it?
ai_fry_ur_brainabout 5 hours ago
Yeah, it's almost like the casinos started rigging the game after they got all the addicts hooked. Who saw that coming???

If you overuse LLMs or get excited about them at all, you're ngmi and a complete idiot.

trvzabout 7 hours ago
The fun thing is, you can be aware of the entire range of Qwen models available for local running, but not at all aware of their cloud models.

I knew of all the 3.5s and the one 3.6, but only now heard about the Plus.

Alifatiskabout 6 hours ago
Their Plus series has existed since Qwen chat became available, as far as I remember. I can at least remember trying out their Plus model early last year.
atilimcetinabout 6 hours ago
Nowadays I'm working on a realtime path tracer, where you need a proper understanding of microfacet reflection models, PDFs, (multiple) importance sampling, ReSTIR, etc. That said, mine is a somewhat specific use case.

And I use Claude, Gemini, GLM and Qwen to double-check my math and my code, and to get practical information to make my path tracer more efficient. Claude and Gemini have failed me more than a couple of times with wrong, misleading and unnecessary information, but on the other hand Qwen has always given me proper, practical and correct information. I've almost stopped using Claude and Gemini so as not to waste my time anymore.
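
(For context, the sort of formula being double-checked here; e.g., the standard multiple importance sampling estimator with Veach's balance heuristic:)

```latex
% MIS estimator: n_i samples X_{i,k} drawn from each sampling strategy's pdf p_i
F = \sum_{i} \frac{1}{n_i} \sum_{k=1}^{n_i}
    w_i(X_{i,k}) \, \frac{f(X_{i,k})}{p_i(X_{i,k})},
\qquad
w_i(x) = \frac{n_i \, p_i(x)}{\sum_{j} n_j \, p_j(x)}
```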

Claude Code may shine at developing web applications, backends and simple games, but it's definitely not for me. And this is the story of my specific use case.

wg0about 6 hours ago
I have seen someone say similar things about writing some OpenGL code (some raytracing etc.): these models have very little understanding and aren't good at anything beyond basic CRUD web apps.

In my own experience, even with a web app of medium scale (think an Odoo kind of ERP), they are next to useless at understanding and modeling the domain correctly, even with very detailed written specs fed in (a whole directory with an index.md, and more detailed sections/chapters in separate markdown files with pointers in index.md). And I am not talking open-weight models here; I am talking SOTA Claude Opus 4.6, Gemini 3.1 Pro etc.

But that narrative isn't popular. I see the parallels here with the crypto and NFT era: that was surely the future, and at least my firm pays me in crypto, whereas NFTs are used for rewarding bonuses.

wg0about 6 hours ago
Someone exactly said it better here[0] already.

[0]. https://news.ycombinator.com/item?id=47817982

amarcheschiabout 6 hours ago
A semester ago I was taking a machine learning exam in uni, and the exam tasked us with creating a neural network using only numerical libraries (no PyTorch etc.). I'm sure that there are a huge lot of examples looking all the same, but given that we were just students without a lot of prior experience, we probably deviated from what it had in its training data, with more naive or weird solutions. Asking Gemini 3 to refactor things, or to help with very narrow things, was OK, but it was quite bad at getting the general context and spotting bugs, so much so that a few times it was easier to grab the book and get the original formula right.

OTOH, we spotted a wrong formula regarding learning rate on Wikipedia, and it is now correct :) Without Gemini, just our intuition of "hmm, this formula doesn't seem right". That definitely inflated our egos.

zozbot234about 6 hours ago
What size of Qwen is that, though? The largest sizes are admittedly difficult to run locally (though this is an issue of current capability wrt. inference engines, not just raw hardware).
atilimcetinabout 6 hours ago
I'm directly using https://chat.qwen.ai (Qwen3.6-Plus) and planning to switch to Qwen Code with subscription.
muyuuabout 4 hours ago
For Anthropic and OpenAI there is a very real danger in people investing serious time finding the strengths of alternative models, especially Chinese/open models that can to some degree be run locally as well.

It puts a massive cap on the margins they can possibly extract from users.

jasonjmcgheeabout 6 hours ago
You may be interested in "radiance cascades"
hedoraabout 2 hours ago
What do you use instead of the Claude code client app?
jansanabout 6 hours ago
How "social" does Quen feel? The way I am using LLMs for coding makes this actually the most important aspect by now. Claude 4.6 felt like a nice knowledgeable coworker who shared his thinking while solving problems. Claude 4.7 is the difficult anti-social guy who jumps ahead instead of actually answering your questions and does not like to talk to people in general. How are Qwen's social skills?
zozbot234about 6 hours ago
Qwen feels like a wise Chinese philosopher. It talks in very short, elegant sentences, but does very solid work.
Alifatiskabout 6 hours ago
> Talks in very short elegant sentences

This is not my experience at all, Qwen3.6-Plus spits out multiple paragraphs of text for the prompts I give. It wasn't like this before. Now I have to explicitly tell it not to yap so much and keep it short, concise and direct.

Aeroiabout 1 hour ago
Why do people continue to benchmark their SOTA models against older models?
chatmastaabout 4 hours ago
Is this going to be an open weights model or not? The post doesn’t make it clear. It seems the weights are not available today, but maybe that’s because it’s in preview?
zozbot234about 4 hours ago
The Max series has never been open.
Orasabout 7 hours ago
I find it odd that none of OpenAI's models were used in the comparison, but Z's GLM 5.1 was. Is Z (GLM 5.1) really that good? It is crushing Opus 4.5 in these benchmarks; if that were true, I would have expected to read many articles on HN about how people flocked from CC and Codex to use it.
ac29about 7 hours ago
GLM 5.1 is pretty good, probably the best non-US agentic coding model currently available. But both GLM 5.0 and 5.1 have had issues with availability and performance that makes them frustrating to use. Recently GLM 5.1 was also outputting garbage thinking traces for me, but that appears to be fixed now.
cmrdporcupineabout 7 hours ago
Use them via DeepInfra instead of z.ai. No reliability issues.

https://deepinfra.com/zai-org/GLM-5.1

Looks like fp4 quantization now though? Last week was showing fp8. Hm..

wolttamabout 6 hours ago
Deepinfra's implementation of it is not correct. Thinking is not preserved, and they're not responding to my submitted issue about it.

I also regularly experience Deepinfra slow to an absolute crawl - I've actually gotten more consistent performance from Z.ai.

I really liked Deepinfra but something doesn't seem right over there at the moment.

coder68about 6 hours ago
In fact, it is appreciated that Qwen is comparing to a peer. I myself, and several engineers I know, are trying GLM. It's legit. Definitely not the same as Codex or Opus, but cheaper and "good enough". I basically ask GLM to solve a problem, walk away for 10-15 minutes, and the problem is solved.
Orasabout 6 hours ago
"Cheaper" is quite subjective. I just went to their pricing page [0] and the cost saving compared to performance does not sell it well (again, personal opinion).

CC has limited capacity for Opus, but fairly good capacity for Sonnet. For Codex, I never had issues with hitting my limits, and I'm only a Pro user.

https://z.ai/subscribe

kardianosabout 7 hours ago
Yes. GLM 5.1 is that good. I don't think it is as good as Claude was in January or February of this year, but it is similar to how Claude runs now, perhaps better, because I feel like its performance is more consistent.
vidarhabout 6 hours ago
GLM 5.1 is the first model I've found good enough to spring for a subscription for other than Claude and Codex.

It's not crushing Opus 4.5 in real-life use for me, but it's close enough to be near interchangeable with Sonnet for a lot of tasks, though some of the "savings" are eaten up by it seemingly using more tokens for similar-complexity tasks (I don't have enough data yet, but I've pushed ~500M tokens through it so far).

prosabout 7 hours ago
I've been using GLM 5.1 for the last two weeks as a cheaper alternative to Sonnet, and it's great; probably somewhere between Sonnet and Opus. It's pretty slow though.
bensyversonabout 5 hours ago
This is what kills it for me… The long thinking blocks can make a simple task take 30 minutes.
culiabout 5 hours ago
If you only look at open models, GLM 5.1 is the best performance you can get on the Pareto frontier.

https://arena.ai/leaderboard/text?viewBy=plot&license=open-s...

Alifatiskabout 6 hours ago
GLM-5 is good, like really good, especially if you take pricing into consideration. I paid $7 for 3 months, and I get more usage than with CC.

They have difficulty supplying their users with capacity, but in an email they pointed out that they are aware of it. During peak hours, I experience degraded performance. But I am on their lowest tier subscription, so I understand if my demand is not prioritized during those hours.

ekuckabout 6 hours ago
Where are you getting 3 months for $7?
Alifatiskabout 5 hours ago
They had a Christmas deal that ended January 31.
c0n5pir4cyabout 7 hours ago
I've been using it through OpenCode Go and it does seem decent in my limited experience. I haven't done anything which I could directly compare to Opus yet though.

I did give it one task which was more complex and I was quite impressed by. I had a local setup with Tiltdev, K3S and a pnpm monorepo which was failing to run the web application dev server; GLM correctly figured out that it was a container image build cache issue after inspecting the containers etc and corrected the Tiltfile and build setup.

cleaningabout 6 hours ago
Most HN commenters seem to be a step behind the latest developments, and sometimes miss them entirely (Kimi K2.5 is one example). Not surprising as most people don't want to put in the effort to sift through the bullshit on Twitter to figure out the latest opinions. Many people here will still prefer the output of Opus 4.5/4.6/4.7, nowadays this mostly comes down to the aesthetic choices Anthropic has made.
Orasabout 6 hours ago
Not just aesthetics though. From time to time I implement the same feature with CC and Codex just to compare results, and I have yet to find Codex making better decisions or even matching the completeness of the feature.

For more complicated stuff, like queries or data comparison, Codex always seems behind for me.

throwaw12about 7 hours ago
Maybe they decided OpenAI has a different market, hence comparing only with companies focused on dev tooling: Claude, GLM.
edwinjmabout 7 hours ago
Haven’t you heard about Codex?
throwaw12about 7 hours ago
It's an SKU from OpenAI's perspective; the broader goal and vision is (was) different. Look at Claude and GLM: both were 95% committed to dev tooling, with the best coding models and coding harnesses. Even their Cowork is built on top of Claude Code.
__blockcipher__about 7 hours ago
Yeah GLM’s great for coding, code review, and tool use. Not amazing at other domains.
esafakabout 7 hours ago
I use it and think its intelligence compares favorably with OpenAI and Anthropic workhorses. Its biggest weakness is its speed.
o10449366about 4 hours ago
I have the M3 Max MBP with 128 GB of memory and the 40 core GPU. What's the best local model I can run today for coding?
marsultaabout 5 hours ago
I think the benchmarks and numbers need to be easier to read. Those benchmarks are useless to the regular consumer.
XCSmeabout 4 hours ago
A bit weird to be comparing it to Opus-4.5 when 4.7 was released...
xmlyabout 3 hours ago
Very impressive!
DeathArrowabout 6 hours ago
I have been trying for a week to subscribe to the Alibaba Coding Plan (to use Qwen 3.6 Plus), but it's always out of stock.

They brag about Qwen but don't let people use it.

dakolliabout 3 hours ago
ToKeN PrIcEs ArE gOiNg tO PluMmEt, InTelLigEnCe WiLl Be AfForDaBlE FoR EvErYOnE
souravroyetlabout 4 hours ago
I tried it: I asked it to write an SVG of a cat holding a guitar, and it drew a picture of my grandma's lookalike taking a poop. Seems Alibaba has it on the spot! Lolz, try it for yourselves for remarkable SVGs and PNGs!