
Discussion (19 Comments) on HackerNews
If we just stick with C/C++ systems, pretty much every big enough project has a backlog of thousands of these things: either simple issues like compiler warnings for uninitialized values, or fancier tool-verified off-by-one write errors that aren't exploitable in practice. There are many genuinely bad things in there, but they're hidden in the backlog waiting for someone to triage them all.
Most orgs just look at that backlog and accept it. It takes a pretty big $$$ investment to solve.
I would like to see someone do a big deep dive in the coming weeks.
I think this has been pushed too hard; combined with general exhaustion at people insisting that AI is eating everything and the moon, these claims are getting kind of farcical.
Are LLMs useful for finding bugs? Maybe. Reading the system card, I guess if you run the source code through the model 10,000 times, some useful stuff falls out. Is this worth it? I have no idea anymore.
But you might also get a lot of non-useful stuff which you'll need to sort out.
So much that I don't really visit anymore after 15 years of use.
It's a bizarre situation: billions in marketing and PR, astroturfing, and torrents of fake news with streams of zero-skepticism comments beneath them, and an almost horrifying worship of these billion-dollar companies.
Something completely flipped here at some point. I don't know if it's because YC is also heavily pro these companies, and embedded with them, requiring YC applicants to slop-code their way in, then cheering about it.
Either way it's incredibly sad and reminds me of the worst casino economy: NFTs, crypto, Web3. There's actually an interesting core here, regex on steroids with planning aspects, but it's constantly oversold.
I say that as a daily user of Claude Max for over a year.
I'd appreciate some sort of disclaimer at the start of each article stating whether it's AI written/assisted or not. But I guess authors understand that it would diminish the perceived value of their work.
I think the fact this is even a conversation is pretty impressive.
Like the Black formatter for Python in VSCode, which runs when you hit Ctrl+S.
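A format-on-save setup like the one described can be sketched in VSCode's `settings.json`; this assumes the `ms-python.black-formatter` extension is installed (a typical setup, not something specified in the comment):

```json
{
  "[python]": {
    "editor.defaultFormatter": "ms-python.black-formatter",
    "editor.formatOnSave": true
  }
}
```

With this in place, saving a Python file reformats it with Black before the write hits disk.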
> Due to our concerns about malicious applications of the technology, we are not releasing the trained model.
That was for GPT-2 https://openai.com/index/better-language-models/
> Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT‑2 along with sampling code.
7 years later, these concerns seem pretty legit.
The absolute best case is that we end up with a situation similar to modern cryptography, which is clearly in favor of defenders. One can imagine a world where a defender can run a codebase review for $X of compute and patch all the low-hanging fruit, to the point where anything that remains for an attacker would cost $X*100000 (or some other large multiplier) to discover.
[0] https://red.anthropic.com/2026/mythos-preview/
If this isn't a sign of a bubble, where marketing is more important than the actual product, I don't know what is. This industry has completely lost the plot.