

Discussion (13 Comments)

mccoyb•about 2 hours ago
It has now become fashionable to dress oneself in the garb of science to sell dev environments ... for agents.

It has now become fashionable to claim much, and furnish little.

It has now become fashionable to fail to understand or state the core of your proposal in as few words as possible: instead of "genetic algorithm applied to the space of harnesses, parallelized by our infrastructure" we get "Three swaps. Same orchestrator. Same dashboard. The wiring is the thing."

We're cooked chat.

adamgold7•about 2 hours ago
we need better RL
love2read•about 2 hours ago
I have no idea what this does or is. I really wish they could have given a better description of why this is useful.
antiobli•about 2 hours ago
Their lines "A meta-harness is the loop that improves the harness automatically" and "the bottleneck is diagnostic context: most optimizers compress prior runs into summary statistics, while meta-harness gives the proposer up to 10M tokens of raw execution traces to grep through," seem good, no?

Have to dig into the code, but it looks like they have sound engineering around a "self-improving" agentic coding harness. Will be fun to take the code for a spin.
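[Editor's note: the quoted lines describe the core loop, a proposer that mutates the harness and gets raw execution traces rather than summary statistics. A minimal sketch of that loop, with stand-in `run_harness` and `propose` functions; all names are illustrative assumptions, not the project's actual API.]

```python
# Hypothetical sketch of a meta-harness loop: propose a harness variant,
# evaluate it, and hand the proposer raw traces (not summaries) next round.

def run_harness(config, tasks):
    """Stand-in evaluator: score a harness config on a task suite."""
    score = sum(len(config) % (i + 2) for i, _ in enumerate(tasks))
    trace = [f"task={t} config={config}" for t in tasks]
    return score, trace

def propose(config, traces):
    """Stand-in proposer: mutate the config after 'reading' raw traces."""
    return config + [f"tweak-{len(traces)}"]

def meta_harness(tasks, generations=3):
    best_config, best_score = ["base"], float("-inf")
    traces = []  # raw execution traces, accumulated across generations
    for _ in range(generations):
        candidate = propose(best_config, traces)
        score, trace = run_harness(candidate, tasks)
        traces.extend(trace)  # the proposer greps these next round
        if score > best_score:
            best_config, best_score = candidate, score
    return best_config

print(meta_harness(["fizzbuzz", "palindrome"]))
```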

kingstnap•about 2 hours ago
10M tokens of raw execution traces to grep through is slop. The tasks are fizzbuzz, palindrome, list reversal, and sum-even. The palindrome challenge is literally this:

> Is the word "racecar" a palindrome? Answer with exactly one lowercase word: "yes" or "no". Print only the answer.

cyanydeez•about 2 hours ago
I find it fascinating: all these attempts are goldmining LLMs with a harness, and it's clear they're generating all the docs for AI to read and use; even the docs say "we made an MCP for this!", as if somehow within 2 years people no longer make choices and it's just AIs roaming the internet trying on harnesses, etc. Certainly that'd be a fascinating reality, but the verbosity really is an eye-glazing experience. Who do they expect to read all of that ad copy? It's not me.
vmg12•about 2 hours ago
This is not how I've seen the term "meta-harness" used. The common usage I've seen is a wrapper around an existing agent that gives that agent a new UI or new abilities.
visarga•about 1 hour ago
I did this too, ablating all the components in my coding agent harness. The insight from my meta-optimization loops was "have judge agents review the plan and implementation".

One of my own insights here is that you need to collect not just execution traces, but all the human-in-the-loop nudges and steering commands. They are one of the purest sources of feedback on coding agents when seen in context.

I agree with OP on the need to collect traces and compare them, not just scores. It is a much richer source of feedback.

If anyone is interested I have a slide deck about my approach: https://horiacristescu.github.io/claude-playbook-plugin/docs...
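[Editor's note: the point above, capturing human-in-the-loop nudges alongside execution traces so each nudge can be "seen in context", can be sketched as a unified event log. The schema and class names here are assumptions for illustration, not visarga's actual implementation.]

```python
import time

# Log agent trace events and human "nudge" events in one ordered stream,
# so each nudge can later be paired with the trace context that provoked it.

class SessionLog:
    def __init__(self):
        self.events = []

    def record(self, kind, payload):
        # kind: "trace" for agent steps, "nudge" for human steering input
        self.events.append({"t": time.time(), "kind": kind, "payload": payload})

    def nudges_in_context(self, window=2):
        """Return each human nudge with the trace events just before it."""
        out = []
        for i, ev in enumerate(self.events):
            if ev["kind"] == "nudge":
                out.append({"nudge": ev,
                            "context": self.events[max(0, i - window):i]})
        return out

log = SessionLog()
log.record("trace", "agent edited foo.py")
log.record("trace", "agent ran tests: 2 failed")
log.record("nudge", "human: stop, fix the import first")
print(len(log.nudges_in_context()))  # one nudge, with its preceding trace
```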

m3kw9•about 2 hours ago
This seems to be another over-optimization for AI that many are trying to get into. The LLMs improve, your setup is deprecated, and you've wasted time optimizing for a slight edge. TL;DR: you trade time for a slight edge.
zozo123-IB•31 minutes ago
i don't disagree, though harness engineering is a real discipline that even the best AI labs put their brightest minds on, and the loop itself doesn't deprecate when models improve.
cyanydeez•about 2 hours ago
serious question: I've already got an opencode harness running on a local model. It's easily installable via the insecure bash command. It's already tailored with a couple of plugins, and with a proper TODO.md and planning I can get it to loop fine, with proper attention to its pitfalls around vague/non-deterministic language. It's all running on an AMD 395+ with a Qwen3-Coder-Next model at ~256k context. opencode has a webui I can put behind a password-protected endpoint and keep it busy from anywhere I need to via a simple nginx proxy.

How does this go above and beyond this straightforward open-source, open-weights, and relatively cheap setup? Do you just get more tokens from SOTA models? Can anyone rationally say the products of all that token production are high-quality and secure?

pohl•about 2 hours ago
You know how OpenCode can be prompted to modify itself when you want to improve it in some way? This just automates that.