
Discussion (16 Comments) | Read Original on HackerNews

mccoyb · about 2 hours ago
It has now become fashionable to dress oneself in the garb of science to sell dev environments ... for agents.

It has now become fashionable to claim much, and furnish little.

It has now become fashionable to fail to understand or state the core of your proposal in as few words as possible: instead of "genetic algorithm applied to the space of harnesses, parallelized by our infrastructure" we get "Three swaps. Same orchestrator. Same dashboard. The wiring is the thing."

We're cooked, chat.
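For what it's worth, the one-sentence version mccoyb prefers ("genetic algorithm applied to the space of harnesses") can be sketched in a few lines. Everything below is invented for illustration: the config keys, the fitness function, and the population sizes are not taken from the project.

```python
import random

# Illustrative sketch only: a "harness" is a config dict, fitness is a
# stand-in benchmark score, and each generation keeps the best configs
# and refills the population with mutated copies.

def evaluate(harness):
    # Made-up fitness: reward a larger trace budget, judge review, retries.
    return (harness["trace_tokens"] / 1e6
            + (2 if harness["judge_review"] else 0)
            + (1 if harness["retry_on_fail"] else 0))

def mutate(harness):
    child = dict(harness)
    key = random.choice(list(child))
    if key == "trace_tokens":
        child[key] = max(1e5, child[key] * random.choice([0.5, 2.0]))
    else:
        child[key] = not child[key]  # flip a boolean knob
    return child

def optimize(generations=10, pop_size=8):
    population = [{"trace_tokens": 1e6, "judge_review": False,
                   "retry_on_fail": False} for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=evaluate, reverse=True)
        survivors = population[: pop_size // 2]   # elitist selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=evaluate)

best = optimize()
```

Because the top half survives unchanged each generation, the best fitness never decreases; the "parallelized by our infrastructure" part would just evaluate the population concurrently.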

adamgold7 · about 2 hours ago
we need better RL
love2read · about 2 hours ago
I have no idea what this does or is. I really wish they could have given a better description of why this is useful.
bglazer · 9 minutes ago
Yeah, I have been reading a lot of posts like this lately: a technical blog post clearly written by an LLM summarizing something vibe-coded. They always start using project-specific jargon right away, and they never give you enough context or backstory to understand why this thing exists. It seems very clearly to be a symptom of someone pointing an LLM at a repo and telling it "write a GitHub page for this project".

It really shines through in pieces like this that LLMs have a severely constrained worldview and an underdeveloped theory of mind. They can't imagine that a line like "A 200-line POC that goes from 0/5 to 5/5 in four proposer steps" means nothing to me as a subtitle for the page. After all, "proposer steps" and "5/5" are *right there* in its context. Surely everyone has "proposer steps" in their context, right?

antiobli · about 2 hours ago
Their lines "A meta-harness is the loop that improves the harness automatically" and "the bottleneck is diagnostic context: most optimizers compress prior runs into summary statistics, while meta-harness gives the proposer up to 10M tokens of raw execution traces to grep through," seem good, no?

Have to dig into the code, but it looks like they have sound engineering around a "self-improving" agentic coding harness. Will be fun to take the code for a spin.
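The contrast antiobli quotes, summary statistics versus raw traces the proposer can grep, can be sketched like this. The function names, the run format, and the crude token counting are all invented for illustration, not from the project's code.

```python
# Illustrative sketch: "most optimizers" keep a few numbers per run,
# while the claimed meta-harness keeps raw traces, searchable up to a
# token budget. All names here are hypothetical.

def summarize(runs):
    # Summary-statistics path: everything collapses to a pass rate.
    return {"pass_rate": sum(r["passed"] for r in runs) / len(runs)}

def grep_traces(runs, needle, budget_tokens=10_000_000):
    # Raw-traces path: return matching lines until the budget runs out.
    hits, used = [], 0
    for r in runs:
        for line in r["trace"].splitlines():
            used += len(line.split())  # crude whitespace token count
            if used > budget_tokens:
                return hits
            if needle in line:
                hits.append(line)
    return hits

runs = [
    {"passed": False, "trace": "step 1: wrote solution\nerror: timeout in sandbox"},
    {"passed": True,  "trace": "step 1: wrote solution\nstep 2: tests passed"},
]
print(summarize(runs))             # {'pass_rate': 0.5}
print(grep_traces(runs, "error"))  # ['error: timeout in sandbox']
```

The point of the quoted claim is that the second path preserves *why* a run failed (the timeout line), which the pass rate alone throws away.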

kingstnap · about 2 hours ago
10M tokens of raw execution traces to grep through is slop. The tasks are fizzbuzz, palindrome, list reversal, and sum-even. The palindrome challenge is literally this:

> Is the word "racecar" a palindrome? Answer with exactly one lowercase word: "yes" or "no". Print only the answer.
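To make kingstnap's point concrete: the quoted task reduces to an exact string match, so a grader for it (hypothetical, not the project's actual grader) is two lines.

```python
def grade(model_output: str) -> bool:
    # "racecar" reads the same reversed, so the only accepted
    # output for the quoted task is the single lowercase word "yes".
    assert "racecar" == "racecar"[::-1]
    return model_output.strip() == "yes"

print(grade("yes"))                            # True
print(grade("Yes, racecar is a palindrome."))  # False
```

A benchmark this small says little about whether 10M tokens of traces are pulling their weight.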

cyanydeez · about 2 hours ago
I find it fascinating: all these attempts are goldmining LLMs with a harness, and it's clear they're generating all the docs for AI to read and use; even the docs say "we made an MCP for this!", like somehow within 2 years people no longer make choices and it's just AIs roaming the internet trying on harnesses, etc. Certainly that'd be a fascinating reality, but the verbosity really is an eye-glazing experience. Who do they expect to read all of that ad copy? It's not me.
vmg12 · about 2 hours ago
This is not how I've seen the term meta-harness used. The common usage I've seen has been for a meta-harness to be a wrapper around an existing agent that gives that agent a new UI or abilities.
visarga · about 2 hours ago
I did this too, ablating all the components in my coding agent harness. The insight from my meta-optimization loops was "have judge agents review the plan and implementation".

One of my own insights here is that you need to collect not just execution traces, but all the human-in-the-loop nudges and steering commands. They are one of the purest sources of feedback on coding agents when seen in context.

I agree with OP on the need to collect traces and compare them, not just scores. It is a much richer source of feedback.

If anyone is interested I have a slide deck about my approach: https://horiacristescu.github.io/claude-playbook-plugin/docs...
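Visarga's suggestion, log human nudges interleaved with agent steps so they can be read in context, might be sketched as follows. The class, field names, and window logic are all made up for illustration; nothing here is from the linked slide deck.

```python
import time
from dataclasses import dataclass, field

# Hypothetical sketch: one event stream holds both the agent's actions
# and the human's steering commands, so each nudge can later be paired
# with the steps that provoked it.

@dataclass
class SessionLog:
    events: list = field(default_factory=list)

    def agent_step(self, action: str):
        self.events.append({"t": time.time(), "kind": "agent", "action": action})

    def human_nudge(self, message: str):
        # The steering command itself is the feedback signal.
        self.events.append({"t": time.time(), "kind": "human", "message": message})

    def nudges_in_context(self, window: int = 1):
        # Pair each nudge with the agent steps immediately before it.
        out = []
        for i, e in enumerate(self.events):
            if e["kind"] == "human":
                out.append((self.events[max(0, i - window):i], e))
        return out

log = SessionLog()
log.agent_step("edited config.yaml")
log.human_nudge("stop, you reverted my change")
pairs = log.nudges_in_context()
```

Scoring only the final outcome would discard exactly these (step, nudge) pairs, which is the richer feedback the comment is pointing at.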

m3kw9 · about 2 hours ago
This seems to be another over-optimization for AI that many are trying to get into. The LLMs improve, your setup is deprecated, and you've wasted time optimizing for a slight edge. TL;DR: you trade time for a slight edge.
zozo123-IB · 34 minutes ago
I don't disagree, though harness engineering is a real discipline that even the best AI labs put their brightest minds on, and the loop itself doesn't deprecate when models improve.
cyanydeez · about 2 hours ago
serious question: I've already got an opencode harness running on a local model. It's easily installable via the insecure bash command. It's already tailored with a couple of plugins, and with a proper TODO.md and planning I can get it to loop fine, with proper attention to its pratfalls on vague/non-deterministic language. It's all running on an AMD 395+ with a Qwen3-Coder-Next model at ~256k context. opencode has a webui I can put behind a password-protected endpoint and keep it busy from anywhere I need via a simple nginx proxy.

How does this go above and beyond this straightforward open-source, open-weights, and relatively cheap setup? Do you just get more tokens from SOTA models? Can anyone rationally say the products of token production are quality and secure?

pohl · about 2 hours ago
You know how OpenCode can be prompted to modify itself when you want to improve it in some way? This just automates that kind of thing.
cyanydeez · about 2 hours ago
It can't, actually; I had to create a systemd service that watches the config path and sends a signal to reload the files. It roughly works, but it doesn't actually do the loop correctly.

However, the problem with self-modification is the tendency towards inoperable states. Does it automatically revert when a detrimental state is reached? How does it determine that a modification worked?
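One common answer to the revert question cyanydeez raises is to treat each self-modification as a transaction: keep a checkpoint, smoke-test the candidate, and roll back if it scores worse or crashes. The sketch below is generic and hypothetical; it is not how opencode or the discussed project actually handles this.

```python
import copy

# Illustrative transaction pattern for self-modification: the candidate
# config must beat the current one on a smoke test, or be discarded.

def try_modification(config, modify, smoke_test):
    checkpoint = copy.deepcopy(config)
    baseline = smoke_test(checkpoint)
    try:
        candidate = modify(copy.deepcopy(config))
        score = smoke_test(candidate)
    except Exception:
        return checkpoint  # inoperable state: revert outright
    return candidate if score >= baseline else checkpoint

# Toy usage with made-up knobs and a made-up smoke test.
config = {"retries": 1}

def more_retries(c):
    c["retries"] += 1
    return c

def smoke(c):
    return c["retries"]  # pretend higher is better

new = try_modification(config, more_retries, smoke)
```

This answers "how does it determine that a modification worked?" with a fixed evaluation harness that the modification step is not allowed to touch, which is exactly the part that tends to drift in fully self-modifying setups.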

pohl · about 2 hours ago
The paper shows that it can. Note that this seems to be someone's experiment; if it's not working for you, that's probably because it's not a polished product.