Discussion (13 Comments)
It has now become fashionable to claim much, and furnish little.
It has now become fashionable to fail to understand or state the core of your proposal in as few words as possible: instead of "genetic algorithm applied to the space of harnesses, parallelized by our infrastructure" we get "Three swaps. Same orchestrator. Same dashboard. The wiring is the thing."
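The phrase "genetic algorithm applied to the space of harnesses" can be made concrete with a toy sketch. This is a minimal, hypothetical illustration: the knob names, the truncation-selection scheme, and the fitness stand-in are all assumptions, since the actual harness search space (prompts, tool wiring, retry policies) would require running a real benchmark suite.

```python
import random

# Hypothetical harness configuration: each "gene" is one tunable knob.
# Real genes would be prompts, tool choices, retry policies, etc.
KNOBS = ["max_retries", "context_window", "temperature_x10"]

def random_harness():
    return {k: random.randint(0, 10) for k in KNOBS}

def fitness(h):
    # Stand-in for "run the eval suite with this harness and score it".
    # This toy landscape peaks when every knob equals 5.
    return -sum((h[k] - 5) ** 2 for k in KNOBS)

def crossover(a, b):
    # Uniform crossover: each knob inherited from either parent.
    return {k: random.choice((a[k], b[k])) for k in KNOBS}

def mutate(h, rate=0.2):
    return {k: (random.randint(0, 10) if random.random() < rate else v)
            for k, v in h.items()}

def evolve(pop_size=20, generations=30):
    pop = [random_harness() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                  # elitism: parents survive
    return max(pop, key=fitness)

best = evolve()
```

The "parallelized by our infrastructure" part would correspond to evaluating each generation's `fitness` calls concurrently, since benchmark runs are independent.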
We're cooked chat.
Have to dig into the code, but it looks like they have sound engineering around a "self-improving" agentic coding harness. Will be fun to take the code for a spin.
> Is the word "racecar" a palindrome? Answer with exactly one lowercase word: "yes" or "no". Print only the answer.
One of my own insights here is that you need to collect not just execution traces, but all the human-in-the-loop nudges and steering commands. They are one of the purest sources of feedback on coding agents when seen in context.
I agree with OP on the need to collect traces and compare them, not just scores. It is a much richer source of feedback.
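The point about capturing human-in-the-loop nudges alongside execution traces can be sketched as a unified event log. This is an illustrative schema, not any harness's actual format: the event kinds (`agent_step`, `human_nudge`) and class names are hypothetical.

```python
import json
import time
from dataclasses import dataclass, asdict, field

# Hypothetical schema: one stream interleaves agent actions and
# human steering commands, so a later analysis pass can see *why*
# the agent changed course, not just that it did.
@dataclass
class TraceEvent:
    kind: str            # "agent_step" | "human_nudge"
    content: str
    ts: float = field(default_factory=time.time)

class TraceLog:
    def __init__(self):
        self.events: list[TraceEvent] = []

    def agent_step(self, content: str):
        self.events.append(TraceEvent("agent_step", content))

    def human_nudge(self, content: str):
        self.events.append(TraceEvent("human_nudge", content))

    def nudges(self) -> list[TraceEvent]:
        # The steering commands, extracted with surrounding order intact.
        return [e for e in self.events if e.kind == "human_nudge"]

    def to_jsonl(self) -> str:
        return "\n".join(json.dumps(asdict(e)) for e in self.events)

log = TraceLog()
log.agent_step("ran pytest: 3 failures in test_parser.py")
log.human_nudge("stop editing the tests; fix the parser instead")
log.agent_step("reverted test edits, patched parser.py")
```

Seen in context like this, a nudge is a direct label on the preceding agent behavior, which is what makes it such a pure feedback signal.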
If anyone is interested I have a slide deck about my approach: https://horiacristescu.github.io/claude-playbook-plugin/docs...
How does this go beyond a straightforward open-source, open-weights, and relatively cheap setup? Do you just get more tokens from SOTA models? Can anyone rationally say the products of that token production are of high quality and secure?