Discussion (9 Comments)
"Semantic ablation" is my favorite term for it: https://www.theregister.com/software/2026/02/16/semantic-abl...
You can somewhat mitigate this in the same prompt that asks for the new edit, by adding new information or restating the lost meaning you want back. But other things will still get washed out.
Nuances will drift, sharp corners will be ablated. You're doing a Xerox copy of your latest Xerox copy, so even if you add your comments with a sharpie, anything that was there right before will be slightly blurrier in the next version.
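A minimal sketch of the mitigation described above: keep a list of "anchor" points that must survive, and restate them on every edit request rather than trusting the previous output to preserve them. The `call_llm` callable and the anchor strings are hypothetical stand-ins, not from the thread.

```python
# Illustrative anchor facts that tend to get washed out across rewrites.
ANCHORS = [
    "The retry limit is 3 and is not configurable.",
    "All timestamps are UTC.",
]

def request_edit(call_llm, current_text: str, instruction: str) -> str:
    """Ask for an edit while explicitly restating the points that
    drift away over successive rewrites. `call_llm` is whatever
    client function returns the model's text response."""
    anchor_block = "\n".join(f"- {a}" for a in ANCHORS)
    prompt = (
        f"Edit the text below. {instruction}\n\n"
        f"Preserve the meaning of these points exactly:\n{anchor_block}\n\n"
        f"Text:\n{current_text}"
    )
    return call_llm(prompt)
```

This doesn't stop the blurring, as the comment notes; it just pins down the corners you already know you care about.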
It would be interesting to know whether the stronger results on Python are just an artefact of the Python-specific evaluation, whether they carry over to other common general-purpose languages, and whether they are driven by something specific in the training process.
I have yet to find a model that does not make mistakes each turn. I suspect that this kind of error is fundamentally incorrigible.
The most interesting thing about LLMs is that, despite the above (and their non-determinism), they're still useful.
I've also had them convert something like an Excel-formatted document to markdown. It worked pretty well as long as I was examining the output. But the longer it ran in context, the more likely it was to slip in things that seemed related but weren't part of the breakdown.
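One way to automate that examination step, sketched here against a CSV export of the spreadsheet; the function name and file handling are illustrative, not from the thread.

```python
import csv

def check_conversion(csv_path: str, markdown: str) -> list[str]:
    """Return source cell values missing from the markdown output.
    An empty list means nothing was dropped in the conversion."""
    missing = []
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            for cell in row:
                value = cell.strip()
                if value and value not in markdown:
                    missing.append(value)
    return missing
```

Note this only catches dropped cells; spotting the invented, merely related-seeming additions the comment describes still takes a diff or a manual read.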
The only way I've found to mitigate some of this is to make every file a small, purpose-built doc. That way you can use git to revert changes, and you also limit the damage of each edit to that small context.
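A rough sketch of the damage-limiting side of this, driving plain `git` through subprocess to revert any file the current task wasn't supposed to touch; the allowlist path is hypothetical.

```python
import subprocess

# The one small, purpose-built doc this edit is allowed to touch
# (illustrative path).
ALLOWED = {"docs/retry-policy.md"}

def revert_unexpected_changes() -> None:
    """List modified tracked files and restore any outside the allowlist."""
    out = subprocess.run(
        ["git", "diff", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    for path in out.stdout.splitlines():
        if path not in ALLOWED:
            # Undo collateral edits to files the task never covered.
            subprocess.run(["git", "checkout", "--", path], check=True)
```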
Anyone who thinks they're a genius at creating or updating docs this way isn't actually reading the output.