
Discussion (16 Comments). Read original on HackerNews.
It has now become fashionable to claim much, and furnish little.
It has now become fashionable to fail to understand or state the core of your proposal in as few words as possible: instead of "genetic algorithm applied to the space of harnesses, parallelized by our infrastructure" we get "Three swaps. Same orchestrator. Same dashboard. The wiring is the thing."
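The commenter's preferred one-line summary is concrete enough to sketch. Below is a minimal, hypothetical illustration of "a genetic algorithm applied to the space of harnesses": the `Harness`-as-dict representation, `mutate`, and the scoring function are all illustrative stand-ins, not the project's actual code.

```python
import random

def mutate(harness: dict) -> dict:
    """Randomly perturb one integer field of a harness configuration."""
    child = dict(harness)
    key = random.choice(list(child))
    child[key] = child[key] + random.choice([-1, 1])
    return child

def evolve(population, score, generations=10, keep=2):
    """Genetic loop: keep the best `keep` harnesses each generation,
    refill the rest of the population by mutating the survivors.
    (Each generation's scoring is what the infrastructure would parallelize.)"""
    for _ in range(generations):
        population.sort(key=score, reverse=True)
        parents = population[:keep]
        population = parents + [
            mutate(random.choice(parents))
            for _ in range(len(population) - keep)
        ]
    return max(population, key=score)
```

Since the top `keep` survivors are carried forward each generation, the best-so-far score never decreases.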
We're cooked, chat.
https://www.linkedin.com/posts/yossi-eliaz_a-small-55-mornin...
It really shines through in pieces like this that LLMs have a severely constrained worldview and underdeveloped theory of mind. They can't imagine that a line like "A 200-line POC that goes from 0/5 to 5/5 in four proposer steps" means nothing to me as a subtitle for the page. After all, "proposer steps" and "5/5" are *right there* in its context. Surely everyone has "proposer steps" in their context, right?
Have to dig into the code, but it looks like they have sound engineering around a "self-improving" agentic coding harness. Will be fun to take the code for a spin.
> Is the word "racecar" a palindrome? Answer with exactly one lowercase word: "yes" or "no". Print only the answer.
One of my own insights here is that you need to collect not just execution traces, but all the human-in-the-loop nudges and steering commands. They are one of the purest sources of feedback on coding agents when seen in context.
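One way to act on that insight is to log agent steps and human nudges into a single timeline, so each nudge can later be read against the steps it was correcting. A minimal sketch, with invented names (`TraceLog`, `nudges_with_context`) rather than any real tool's API:

```python
import time

class TraceLog:
    """Records agent execution steps and human steering nudges in one timeline."""

    def __init__(self):
        self.events = []

    def record(self, kind: str, payload: str):
        self.events.append({"t": time.time(), "kind": kind, "payload": payload})

    def agent_step(self, action: str):
        self.record("agent_step", action)

    def human_nudge(self, message: str):
        self.record("human_nudge", message)

    def nudges_with_context(self, window: int = 2):
        """Yield each nudge paired with the last `window` agent steps
        that preceded it; this is the 'seen in context' feedback signal."""
        for i, ev in enumerate(self.events):
            if ev["kind"] == "human_nudge":
                prior = [e for e in self.events[:i] if e["kind"] == "agent_step"]
                yield ev["payload"], [e["payload"] for e in prior[-window:]]
```

A nudge paired with the steps it interrupted tells you *what* the agent was doing wrong, not just that a run scored poorly.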
I agree with OP on the need to collect traces and compare them, not just scores. It is a much richer source of feedback.
If anyone is interested I have a slide deck about my approach: https://horiacristescu.github.io/claude-playbook-plugin/docs...
How does this go above and beyond this straightforward open-source, open-weights, and relatively cheap setup? Do you just get more tokens from SOTA models? Can anyone rationally say the products of token production are high-quality and secure?
However, the problem with self-modification is the tendency towards inoperable states. Does it automatically revert when a detrimental state is reached? How does it determine that a modification worked?
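One common answer to both questions is a snapshot-and-revert guard: score every self-modification against a fixed eval suite, and roll back automatically on crash or regression. This is a hedged sketch of that pattern, not the project's actual mechanism; `run_evals` and `modify` are stand-ins for whatever benchmark and mutation step the harness uses.

```python
import copy

def apply_with_rollback(config: dict, modify, run_evals):
    """Apply a self-modification, keeping a last-known-good snapshot.
    Reverts when the modified config crashes the evals (inoperable state)
    or scores worse than the baseline; otherwise the change 'worked'."""
    baseline = run_evals(config)
    snapshot = copy.deepcopy(config)
    candidate = modify(config)
    try:
        score = run_evals(candidate)
    except Exception:
        return snapshot, baseline   # inoperable state: revert
    if score < baseline:
        return snapshot, baseline   # regression: revert
    return candidate, score         # improvement (or tie): keep
```

Determining that a modification "worked" thus reduces to trusting the eval suite, which is exactly why the scores need to be complemented by traces, as other commenters note.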