Since there have been a lot of reports of deliberate cheating on TerminalBench 2.0 lately (https://debugml.github.io/cheating-agents/), I would also like to clarify a few things:
1. Absolutely no {agents/skills}.md files were inserted at any point. No cheating mechanisms whatsoever
2. The CLI agent was run in a leaderboard-compliant way (no modification of resources or timeouts)
3. The full TerminalBench run was done using the fully open-source version of the agent; there is no difference between what is on GitHub and what was run.
I was originally going to wait for it to land on the leaderboard, but it has been 8 days and the maintainers unfortunately do not respond (there is a large backlog of pull requests on their HF), so I decided to post anyway.
HF PR: https://huggingface.co/datasets/harborframework/terminal-ben...
It is astounding how much the harness matters, based on this and other experiments I have done.

1. Uses an optimized version of Hash-Anchored edits for file editing (https://dirac.run/posts/hash-anchors-myers-diff-single-token); a sketch of the core idea follows this list
2. Utilizes the language's AST to decide what to fetch into context, entirely avoiding large code-file reads
3. Batches all operations, doing a large number of reads/edits simultaneously (you can see a video demo for deepseek-v4-flash here: https://www.reddit.com/r/LocalLLaMA/comments/1suhdki/tested_...)
4. Allows the model to execute code to analyze things on the fly, so it can simply write a bash/python/perl script to accomplish things where appropriate
5. A lot of context curation and opportunistic context updates, i.e. putting into context anything you are certain the model would ask for next
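To make (1) concrete, here is a minimal sketch of the hash-anchor idea, assuming simple per-line content hashes; it is an illustration only, not Dirac's actual implementation (the linked post describes a more optimized single-token variant):

```ts
// Minimal sketch of hash-anchored edits; names are illustrative.
import { createHash } from "node:crypto";

// Short content hash used as a line's address instead of its number.
const anchor = (line: string): string =>
  createHash("sha256").update(line).digest("hex").slice(0, 8);

// Render a file the way the model would see it: anchor-prefixed lines.
function withAnchors(source: string): string {
  return source
    .split("\n")
    .map((line) => `${anchor(line)}| ${line}`)
    .join("\n");
}

// Apply a replacement addressed by anchor. A missing anchor means the
// file drifted since it was read, so we refuse rather than mis-edit,
// which is the failure mode plain line-number edits are prone to.
function applyEdit(source: string, target: string, replacement: string): string {
  const lines = source.split("\n");
  const i = lines.findIndex((line) => anchor(line) === target);
  if (i === -1) throw new Error(`stale anchor ${target}: file changed`);
  lines[i] = replacement;
  return lines.join("\n");
}
```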
I would not be comfortable doing an on-the-fly "rewrite all subtrees that match this pattern" kind of edit.
It seems like a tool that's good for LLMs, though.
Another annoying thing about plain grep: LLMs often end up pulling in bundled packages when using it, where a single line is large enough to ruin the context window.
It's very effective in well-written and well-designed code bases, where concepts tend to be well formed enough not to be named the same as everything else, so grepping for symbols gives you good search results.
Projects where the god object or core concepts have generic names like "Tree", "Node", or other things that are used everywhere tend to be next to impossible to search with grep and friends.
My conclusion is that the efficiency Dirac sees comes mainly from showing a file skeleton by default.
I wasn't sure what this meant, so I looked at the source. It seems to be referring to tool APIs being designed around taking multiple targets as a list parameter, instead of hoping the model makes appropriately parallel tool calls. (This matches my experience btw, models are reluctant to make a large number of parallel calls simultaneously, and this seems more pronounced with weaker models.)
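For illustration, a hypothetical tool definition of that shape; the name read_files and its fields are made up for the example, not copied from Dirac's source:

```ts
// Hypothetical batched read tool: the model passes a list of targets
// in one call instead of emitting N parallel single-file calls.
const readFilesTool = {
  name: "read_files",
  description: "Read many files (optionally narrowed to a symbol) in one call.",
  parameters: {
    type: "object",
    properties: {
      targets: {
        type: "array",
        items: {
          type: "object",
          properties: {
            path: { type: "string" },
            symbol: { type: "string", description: "optional symbol to extract" },
          },
          required: ["path"],
        },
      },
    },
    required: ["targets"],
  },
} as const;
```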
Does that mean that it's only going to work with certain languages for which it has parsers available?
The agent would work even without a language parser; it's just that the AST-based functionality won't work.
Congratulations, great work.
Is there a leaderboard out there comparing harness results using the same models?
1. Context management - specifically pruning old tool-call responses, truncating tool output, and automatic compaction. Those have worked pretty well for me; the benefits of greatly reducing context seem to outweigh the gains from "remembering" everything. I always leave short summaries, though.
2. "Subagents" - my latest attempts revolve around not exposing any tools for the main agent at all, except for a run_agent tool where the subagent has access to the classic search/execute/fetch tools. My theory is that if subagents return concise summaries this would automatically keep the parent agent context clean for much longer. Still experimenting though, writing prompts for subagents may also be too far outside of the current training sets.
That said, context management seems to be solving today's model problems more than being a universal property, and it will probably be obsoleted a few model generations down the road, the way tool calling obsoleted RAG-style context injection from question embeddings.
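A rough sketch of that subagent shape, with every identifier hypothetical (this mirrors the pattern described above, not a real framework API):

```ts
// Sketch of the "parent agent only sees run_agent" pattern; all names
// here are illustrative stand-ins, not a real agent framework.
type SubagentTool = "search" | "execute" | "fetch";

// Stand-in for a full LLM loop that has the classic tools exposed and
// reduces its own transcript to a short summary before returning.
async function runSubagent(
  task: string,
  tools: SubagentTool[],
): Promise<string> {
  return `summary of: ${task} (tools: ${tools.join(", ")})`;
}

// The only tool the parent agent is given. Because each subagent
// returns a concise summary, raw tool output never enters the parent's
// context, which keeps it clean for much longer.
async function run_agent(task: string): Promise<string> {
  return runSubagent(task, ["search", "execute", "fetch"]);
}
```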
1. Telemetry to dirac.run/v1/event — Sends machine ID, token usage, model info, events, errors (first 500 chars), and platform info. Hardcoded API key. Defaults to opt-in (setting is "unset", not "disabled").
2. Feature flags from dirac.run/v1/event/decide — Polls every 60 minutes with your machine ID. Always enabled, independent of telemetry opt-out. No way to disable without code changes.
3. Web tools route through api.dirac.run — Web search and web fetch tools proxy through Dirac's own API server, sending your request content plus system headers (platform, version, machine ID).
4. Model list fetches — Calls OpenRouter, HuggingFace, Groq, etc. for model listings even when using the Anthropic provider.
This inspired me to start a "skill distillery" [0] where I take good agent workflow ideas and turn them into small, inspectable/installable skills.
The first one is dirac-workflow, based on Dirac's structural code workflow. It's not a Dirac clone, though: it has no runtime, persistent AST index, hash-anchor editing engine, or benchmark harness. Just a small AST helper and the workflow discipline as a portable skill.
I also dogfooded it on the Dirac repo itself and included a short report.
Would appreciate feedback from the original author on whether the prompts and tools [1] are representative.
[0] https://github.com/ouatu-ro/skill-distillery
[1] https://github.com/ouatu-ro/skill-distillery/blob/main/skill...
2. Until then, your landing page needs to mention that all the numbers are just from running on Gemini 3 Flash. Currently there's no mention of Gemini at all.
3. Assuming that cheaper also means faster in this case, since the model is equal? If so, then why not add time until completion of the tasks to the benchmarks, to highlight another advantage. If it's the opposite and it takes longer (which seems unlikely), then it would be transparent to note this.
4. Would be good to note whether or not it supports skills, (nested) AGENTS.md, MCP, and so on, for people considering migrating.
The problems I've experienced: it is less adept at picking the right bash commands to build and test the Go app, and it does not follow idiomatic Go or code-base patterns when making changes.
A skill hasn’t helped much.
Will need to try this and open code next.
How does this perform in day to day coding tasks, outside of benchmarks?
README has an eval of 8 tasks over 7 agents (including both pi and omp). Pi-mono costs second lowest across the 8 tasks (after Dirac) but occasionally produces incomplete changes.
Interestingly, the 2 tasks where pi missed some changes were both tasks that benefited from AST symbol understanding (e.g. find all instances of things that refer to this symbol and change those things). Since pi relies on bash-type tooling, it missed some occurrences.
Curious to know if this has been an issue with your AST approach on larger projects?
The hash-based line numbering is very interesting too (though I see far, far fewer editing errors on Opus 4.5+).
I've often thought that even if model progress stopped today, we'd still have _years_ of improvements through harness iteration.
For AST, it uses tree-sitter WASMs (shipped with the package) and maintains queries (https://github.com/dirac-run/dirac/tree/master/src/services/...)
To keep performance fast, it stores a symbols DB (using SQLite) in the workspace's directory and incrementally updates it based on timestamps. It then uses this DB to resolve symbol queries.
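A condensed sketch of that incremental-index shape, assuming better-sqlite3 for the DB and mtime comparison for staleness; the schema and helper names are guesses for illustration, not Dirac's actual code:

```ts
// Sketch of an mtime-keyed incremental symbol index stored in the
// workspace; schema and names are illustrative.
import Database from "better-sqlite3";
import { statSync } from "node:fs";

const db = new Database(".symbols.db"); // lives in the workspace directory
db.exec(`
  CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, mtime REAL);
  CREATE TABLE IF NOT EXISTS symbols (name TEXT, path TEXT, line INTEGER);
  CREATE INDEX IF NOT EXISTS symbols_name ON symbols (name);
`);

// A file is re-parsed (e.g. with a tree-sitter WASM grammar) only when
// its on-disk mtime is newer than the one recorded at last index time.
function isStale(path: string): boolean {
  const row = db
    .prepare("SELECT mtime FROM files WHERE path = ?")
    .get(path) as { mtime: number } | undefined;
  return row === undefined || statSync(path).mtimeMs > row.mtime;
}

// Symbol queries are then resolved from the DB without re-reading files.
function findSymbol(name: string): { path: string; line: number }[] {
  return db
    .prepare("SELECT path, line FROM symbols WHERE name = ?")
    .all(name) as { path: string; line: number }[];
}
```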
Like even "full" Visual Studio and Resharper have issues with this. Eg, you start editing file x, 'intellisense' runs, says there are loads of errors... because you haven't finished editing yet.
I created a real-world benchmark for mining, oil & gas, construction, etc. called FieldOps-bench, and it basically proves that vertical agents with specialized harnesses, tools, and systems still outperform SOTA models alone.
It is so refreshing to see real FOSS and not a grift. Simple openrouter api key, and I'm going.
This is what I'm using from now on. You are doing the best work in this space.
Any ideas?
In my tests, it worked using gpt-5.4 for me, and I assumed gpt-5.5 was not available to me because I am on the free plan.
Do you have the subscription that allows 5.5? If so, I can look into what changed in the API. Sorry, I rarely use OpenAI, so it is a bit of an untrodden path.
That was the issue; 5.4 works just fine.
Support for service: priority (GPT /fast mode) would also be cool!
- what tool do you need?
- what would the parameters for the tool be?
- what method do you want to read?
instead of sending a few kilobytes of build output and waiting for a response. Oh well... Good thing someone already did that!
Harness was https://www.npmjs.com/package/dirac-cli
Since Dirac is a heavily modified fork of Cline, it supports all the models Cline supported, including Qwen and all popular open/closed models.
As a matter of fact, I am trying to run TerminalBench 2.0 with some OSS models at the moment, but slow inference speeds are causing tasks to time out.