

Discussion (11 Comments) · Read Original on HackerNews

philipp-gayret · about 3 hours ago
Interesting project. I am working on a similar solution. Eventually you will run into the following with harnesses, so I wonder how these questions apply to your project:

1) Can you define a process other than build -> review -> ... etc.? And more importantly, can you define a process that is more complex? For example, for each review finding, do X. Or go from an end-to-end test back to build.

2) In your setup, how does a sub-agent prove undeniably that its work is complete? Does the "lead" agent just look at the output? If so, it would effectively make the lead an implicit reviewer for all agents, so I don't follow why you would need a review step.

3) Can you have steps in between these agentic processes that do not involve agents?
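The non-linear process described in question 1 can be sketched as a small state machine where each step names its successor. This is purely illustrative: the `Step`/`Process` classes and all step names here are my own invention, not from the project under discussion.

```python
# Hypothetical sketch of a non-linear agent process definition.
# All names (Step, Process, step names) are illustrative only.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[dict], str]  # returns the name of the next step, or "done"
    agentic: bool = True        # False for plain scripts (question 3)

@dataclass
class Process:
    steps: dict[str, Step] = field(default_factory=dict)

    def add(self, step: Step) -> None:
        self.steps[step.name] = step

    def execute(self, start: str, state: dict) -> dict:
        current = start
        while current != "done":
            current = self.steps[current].run(state)
        return state

# A process that loops from review findings back to a fix step,
# and from failed end-to-end tests back to build:
proc = Process()
proc.add(Step("build", lambda s: "review"))
proc.add(Step("review", lambda s: "fix" if s.get("findings") else "e2e"))
proc.add(Step("fix", lambda s: "review"))
proc.add(Step("e2e", lambda s: "done" if s.get("e2e_ok", True) else "build"))
proc.add(Step("lint", lambda s: "done", agentic=False))  # non-agent step

result = proc.execute("build", {"findings": False})
```

Because each step decides its successor from shared state, cycles (review -> fix -> review, e2e -> build) fall out naturally instead of requiring a fixed linear pipeline.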

Fiahil · about 3 hours ago
Not OP.

For 1), yes, there is an "observe" step in the process where, once the project is deployed, it observes and reconciles what happens vs. what should happen based on the specs.

I believe more variants are bound to emerge as harnesses become more prevalent. We've only scratched the surface, so don't generalize over the process yet.

elysianfields · about 1 hour ago
This looks really cool. Did you think about including automatic worktree creation + sandboxing?

I've built something similar (more focused on project setup and being able to work on multiple things at once with a single agent) that uses git worktrees to create a separate workspace (symlinking .env files) and bubblewrap to isolate the worktree for the agent.

eugeniecregan · 20 minutes ago
This is very cool.

We have been working on a communication layer that would, I believe, complement it by allowing the agents to actually talk to each other and to agents on other teams: https://github.com/awebai/aweb

mettamage · 8 minutes ago
I vibe-coded a super simple communication layer with my agents. I'm all for it, as certain things shouldn't be put in certain contexts, for one.

I have a lot more roles, though, and it's more flexible, but also a bit slower since it isn't in full YOLO mode.

arctide · about 2 hours ago
Hit this exact thing running a routines hub.

When an agent is told to do something by the scheduler, the next step in the process only believes it's done if the agent's status is marked as 'posted'. Statuses like 'ready_to_post' or 'draft_verified_awaiting_review' are effectively errors that the system needs to fix on the following attempt.

The trickiest part was handling runs that stop without anything actually breaking. You need ways to say "this happened, and it isn't what we wanted", for example 'blocked_quota', 'blocked_no_credentials', or 'skipped_anti_bunching'. Without those, the main program will endlessly retry and spend all your money.

The typed handoff in ahk is the right primitive, IMO. The discipline on top: agents never write half-states, and every run terminates in a documented terminal status, success or otherwise.
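The terminal-status discipline above can be sketched with an enum. The status names come straight from the comment; the split into terminal vs. half-states and the `should_retry` helper are my own framing, not any real scheduler's API.

```python
# Illustrative sketch of the terminal-status discipline from the comment above.
# Status names mirror the comment; the TERMINAL set and should_retry are assumptions.

from enum import Enum

class Status(Enum):
    # Terminal success.
    POSTED = "posted"
    # Terminal failures: documented reasons, so the scheduler stops instead of retrying blindly.
    BLOCKED_QUOTA = "blocked_quota"
    BLOCKED_NO_CREDENTIALS = "blocked_no_credentials"
    SKIPPED_ANTI_BUNCHING = "skipped_anti_bunching"
    # Half-states: treated as errors the system must repair on the next attempt.
    READY_TO_POST = "ready_to_post"
    DRAFT_VERIFIED_AWAITING_REVIEW = "draft_verified_awaiting_review"

TERMINAL = {
    Status.POSTED,
    Status.BLOCKED_QUOTA,
    Status.BLOCKED_NO_CREDENTIALS,
    Status.SKIPPED_ANTI_BUNCHING,
}

def should_retry(status: Status) -> bool:
    """Retry only half-states; a documented terminal status (success or failure) ends the run."""
    return status not in TERMINAL
```

The point of the documented-failure statuses is exactly the endless-retry problem: 'blocked_quota' is terminal even though it failed, so the main loop stops spending money on it.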

andreypk · 12 minutes ago
looks interesting, starred

yshamrei · about 2 hours ago
It looks very promising! Is there any plan to implement a ralph-loop inside?

lynellf · about 2 hours ago
Looks cool, but is it really provider agnostic? I only see Claude Code and OpenCode as advertised examples.

How does this differ from RooCode and similar agent orchestration tools?