Discussion (9 Comments)

causal · about 1 hour ago
Yeah I've been saying this for a while: AI-washing any text will degrade it, compounding with each pass.

"Semantic ablation" is my favorite term for it: https://www.theregister.com/software/2026/02/16/semantic-abl...

polskibus · 38 minutes ago
By "with each pass" do you mean within the same session, or with a new session (fresh context window) each time?
sebastiennight · 2 minutes ago
In my experience, it happens with each edit of the document, whether or not you clear the context window.

You can somewhat mitigate this by adding new info, or re-specifying the lost meaning you want restored, at the same moment you ask for the new edit. But other things will still get washed out.

Nuances will drift, sharp corners will be ablated. You're making a Xerox copy of your latest Xerox copy, so even if you add your comments with a Sharpie, anything that was already there will be slightly blurrier in the next version.
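If you want to put a number on the blur, here's a minimal sketch of how you could measure the drift yourself. It assumes a hypothetical edit_once() wrapper around whatever model you're using, and it uses embedding cosine similarity (via the sentence-transformers library) as a rough proxy for semantic fidelity:

    # Sketch: quantify semantic drift across repeated LLM edit passes.
    # edit_once() is a hypothetical wrapper around your model of choice;
    # embedding cosine similarity is only a crude proxy for "meaning".
    from sentence_transformers import SentenceTransformer, util

    embedder = SentenceTransformer("all-MiniLM-L6-v2")

    def edit_once(text: str) -> str:
        raise NotImplementedError("call your LLM with a 'clean this up' prompt")

    def drift_curve(original: str, passes: int = 10) -> list[float]:
        ref = embedder.encode(original, convert_to_tensor=True)
        sims, current = [], original
        for _ in range(passes):
            current = edit_once(current)  # Xerox of the latest Xerox
            cur = embedder.encode(current, convert_to_tensor=True)
            sims.append(util.cos_sim(ref, cur).item())  # similarity to pass 0
        return sims

If the curve declines steadily even when each individual edit looks fine on its own, that's the compounding effect causal is describing.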

mohamedkoubaa · 28 minutes ago
I've been calling it meanwit reversion
jonmoore · about 1 hour ago
I really liked the evaluation method here - testing fidelity by round-tripping through chains of invertible steps. It was striking how even frontier models accumulated errors on seemingly computer-friendly tasks.

It would be interesting to know whether the stronger results on Python are just an artefact of the Python-specific evaluation, whether they carry over to other common general-purpose languages, and whether they are driven by something specific in the training processes.
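For anyone who wants to try the round-trip idea on their own model, here's a rough sketch of the setup as I understand it (the exact protocol is the paper's, not mine): chain a few self-inverse text transforms through the model, undo them in reverse order, and check whether the original survives. llm() is a hypothetical wrapper around whatever model you're testing:

    # Sketch: round-trip fidelity testing through chains of invertible steps.
    # llm() is a hypothetical wrapper around the model under test.
    def llm(prompt: str) -> str:
        raise NotImplementedError("hypothetical model call")

    # ROT13 and line reversal are self-inverse, so each step undoes itself.
    CHAIN = [
        "Apply ROT13 to the text below. Output only the result.",
        "Reverse the order of the lines in the text below. Output only the result.",
    ]

    def survives_round_trip(text: str) -> bool:
        current = text
        for step in CHAIN:               # forward through the chain
            current = llm(f"{step}\n\n{current}")
        for step in reversed(CHAIN):     # back out in reverse order
            current = llm(f"{step}\n\n{current}")
        return current == text           # exact match = full fidelity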

woeirua · 28 minutes ago
It's an interesting paper, but I'd like to see a lot more about the types of errors that the LLM makes. Are they happening in the forward pass or the inverse pass? My guess is the inverse pass.
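One way to answer that without the paper's data: compute the ground-truth intermediate in code and compare each pass against it. A sketch, again with a hypothetical llm() wrapper and ROT13 as the invertible step:

    # Sketch: localize whether the model fails on the forward or inverse
    # pass by checking against a ground truth computed in code.
    import codecs

    def llm(prompt: str) -> str:
        raise NotImplementedError("hypothetical model call")

    def locate_error(text: str) -> str:
        truth = codecs.encode(text, "rot13")   # exact forward result
        fwd = llm(f"Apply ROT13 to the text below.\n\n{text}")
        if fwd != truth:
            return "forward pass"
        inv = llm(f"Apply ROT13 to the text below.\n\n{fwd}")
        return "inverse pass" if inv != text else "clean round trip"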
adampunk · 11 minutes ago
LLMs will make mistakes on every turn. The mistakes will have little to no apparent connection to "difficulty" or to what may or may not be prevalent in the training data. There will be mistakes at all levels of operation, from planning to code writing to reporting. Whether those mistakes matter, and whether you catch them, is mostly up to you.

I have yet to find a model that does not make mistakes each turn. I suspect that this kind of error is fundamentally incorrigible.

The most interesting thing about LLMs is that despite the above (and its non-determinism) they're still useful.

pyrolistical · 5 minutes ago
As a human I make typos all the time
cyanydeez · about 1 hour ago
I played around with a local LLM to try to build a wiki-like DAG. It made a lot of stupid errors, from vague, generic things like interpreting pages based on their file names, to not following redirects and placing the redirect response in the page instead.

I've also had them convert something like an Excel-formatted document to Markdown. It worked pretty well as long as I was examining the output, but the longer it ran within one context, the more likely it was to slip in things that seemed related but weren't part of the breakdown.

The only way I've found to mitigate some of this is to make every file a small, purpose-built doc. That way you can use git to revert changes, and every time they touch a file the damage is limited to that small context.
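In practice that workflow can be as simple as committing after every model pass, so any bad edit is one revert away. A minimal sketch, with edit_once() again standing in for a hypothetical model call:

    # Sketch: let the model touch one small, purpose-built doc per run,
    # committing each pass so a bad edit is a single `git revert` away.
    import subprocess
    from pathlib import Path

    def edit_once(text: str) -> str:
        raise NotImplementedError("hypothetical model call")

    def checked_edit(path: Path) -> None:
        path.write_text(edit_once(path.read_text()))
        subprocess.run(["git", "add", str(path)], check=True)
        subprocess.run(
            ["git", "commit", "-m", f"llm pass: {path.name}"], check=True
        )
        # Review the diff before the next pass; damage stays in one file.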

Anyone who thinks they're a genius at creating or updating docs isn't actually reading the output.