yyzhong94 about 4 hours ago | 26 comments


Hi HN — we built Broccoli, an open-source harness for taking coding tasks from Linear, running them in isolated cloud sandboxes, and opening PRs for a human to review.

We’re a small team, and our main business is supplying voice data. But we kept running into the same problem with coding agents: we’d have a feature request, a refactor, a bug, and some internal tooling work all happening at once, and managing that through local agent sessions meant a lot of context switching, worktree juggling, and laptops left open just so tasks could keep running.

So we built Broccoli. Each task gets its own cloud sandbox and runs end to end independently: Broccoli checks out the repo, uses the context in the ticket, works through an implementation, runs tests and review loops, and opens a PR for someone on the team to inspect.
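The per-task flow above can be sketched as a single function. This is a minimal illustration, not Broccoli's actual API: the `Sandbox` class here is a stand-in for the real GCP/Blaxel containers, and all step names are hypothetical.

```python
# Hypothetical sketch of the per-task pipeline: one isolated sandbox per
# ticket, run end to end (checkout -> read ticket -> implement ->
# test/review loop -> open PR). Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Sandbox:
    log: list = field(default_factory=list)
    attempts: int = 0

    def run(self, step: str) -> None:
        self.log.append(step)

    def tests_pass(self) -> bool:
        # Stand-in for a real test run: pretend tests pass on attempt 2.
        self.attempts += 1
        return self.attempts >= 2

def handle_ticket(ticket_id: str, repo: str) -> list:
    """Run one Linear ticket end to end in a fresh sandbox."""
    sb = Sandbox()
    sb.run(f"clone {repo}")              # check out the repo
    sb.run(f"read ticket {ticket_id}")   # use the ticket as context
    sb.run("implement")                  # work through an implementation
    while not sb.tests_pass():           # test-and-review loop
        sb.run("review-and-fix")
    sb.run(f"open PR for {ticket_id}")   # PR for a human to inspect
    return sb.log

print(handle_ticket("BRO-42", "github.com/besimple-oss/broccoli"))
```

The key design point from the post is the isolation: because every ticket gets its own sandbox, tasks run concurrently without worktree juggling or a laptop left open.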

Over the last four weeks, 100% of the PRs from non-developers have shipped via Broccoli, which is a safer and more efficient route for them. For developers on the team, the share is around 60%: more complicated features require more back-and-forth design with Codex / Claude Code and get shipped manually, using the same set of skills locally.

Our implementation uses:

1. Webhook deployment: GCP
2. Sandbox: GCP or Blaxel
3. Project management: Linear
4. Code hosting & CI/CD: GitHub
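The trigger side of this stack can be sketched as a small filter on the incoming Linear webhook: only issues assigned to the bot user kick off a sandbox. The payload fields below loosely follow Linear's webhook JSON, but the `BOT_USER` constant and `should_trigger` helper are assumptions for illustration, not Broccoli's actual code.

```python
# Hypothetical webhook filter: decide whether a Linear webhook event
# should trigger a sandbox run. Only Issue events assigned to the bot
# user (the team assigns tickets to "broccoli") pass the check.
import json

BOT_USER = "broccoli"  # assumed bot account name

def should_trigger(payload: dict) -> bool:
    """Trigger a run only for issues assigned to the bot user."""
    if payload.get("type") != "Issue":
        return False
    assignee = (payload.get("data") or {}).get("assignee") or {}
    return assignee.get("name") == BOT_USER

event = json.loads(
    '{"type": "Issue", "data": {"identifier": "BRO-7", '
    '"assignee": {"name": "broccoli"}}}'
)
print(should_trigger(event))  # True: an Issue assigned to the bot
```

In a real deployment this check would sit behind the GCP webhook endpoint, after verifying the webhook signature.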

Repo: https://github.com/besimple-oss/broccoli

We believe you should invest in your own coding harness if coding is an essential part of your business. That’s why we decided to open-source it as an alternative to all the cloud coding agents out there. Would love to hear your feedback!


Discussion (26 Comments)

dennisy 10 minutes ago
Fair play for launching this, it looks like a neat project.

However, I feel it will be an uphill battle competing with OpenAI and Anthropic; I doubt your harness can be better, since they see so much traffic through theirs.

So this is for those who care about the harness running on their own infra? Not sure why anyone would since the LLM call means you are sending your code to the lab anyway.

Sorry I don’t want to sound negative, I am just trying to understand the market for this.

Good luck!

yzhong94 8 minutes ago
We are not trying to compete with OpenAI and Anthropic! We open source it because there's interest from other startups.

Teams would use Anthropic and OpenAI, but they shouldn't just use Anthropic or OpenAI. We see much better results from calling the models independently and doing adversarial review and response.

This doesn't replace your need for the models, but you certainly don't need to rely on any of the cloud agent solutions out there that call these models under the hood.
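The adversarial review-and-response idea described above can be sketched as a loop where one model proposes a patch and a second, independently-called model critiques it until it approves or a round cap is hit. The `call_model` function here is a stub; in practice these would be separate Codex and Claude Code invocations.

```python
# Sketch of an adversarial review loop: an "author" model proposes,
# a "critic" model reviews, and the loop repeats until approval.
# call_model is a stub standing in for independent model calls.
def call_model(name: str, prompt: str) -> str:
    # Stub behavior: the critic approves once the second revision appears.
    if name == "critic" and "rev2" in prompt:
        return "APPROVE"
    if name == "critic":
        return "request changes: add tests"
    # Author: produce rev2 when responding to review feedback.
    return "rev2" if "request changes" in prompt else "rev1"

def adversarial_loop(task: str, max_rounds: int = 3) -> str:
    patch = call_model("author", task)
    for _ in range(max_rounds):
        verdict = call_model("critic", patch)
        if verdict == "APPROVE":
            return patch
        # Feed the critique back to the author for a revision.
        patch = call_model("author", patch + "\n" + verdict)
    return patch

print(adversarial_loop("fix bug"))  # returns the approved revision, "rev2"
```

The design point is that the two roles are filled by independent model calls rather than one model self-reviewing, which is where the comment claims the quality gain comes from.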

Almured 24 minutes ago
It's interesting that you’re using Linear tickets as the primary context source. From my experience so far, one of the biggest issues with coding agents is context drift: the ticket says one thing, but the codebase has changed since it was written. How did you solve that? A fresh RAG pass, something like ctags to map the repo before it starts the implementation, or does it rely entirely on the context provided in the LLM's context window?
ppeetteerr 29 minutes ago
How does this compare to using Claude Web with connectors to build the same feature?

On a separate note, READMEs written by AI are unpleasant to read. It would be great if they were written by a human for humans.

yzhong94 23 minutes ago
The main difference is that you have full control over this!
ayjze 33 minutes ago
this is exactly what I was looking for! can't wait to try it out
yzhong94 32 minutes ago
let us know if you have any feedback!
throwaway7783 about 2 hours ago
Cool! We have a similar setup, connected to JIRA, but it stops at analysis and an approach to the solution. I'm taking inspiration from this now to take it to the next level!
yzhong94 about 2 hours ago
I'd pay special attention to the harness that goes from plan to execution. We spent a lot of time ensuring it can produce high-quality code that we feel good about in production instead of AI slop.

As for Jira, we'd love it if you contributed that integration! Someone asked for it in this thread :D

throwaway7783 8 minutes ago
Yeah. We also use GitLab instead of GitHub. I'll check this out later. We've also set it up to work with multiple repos to truly understand context (we have frontend, backend, some tooling, an MCP server, etc. all in different repos).
yzhong94 6 minutes ago
We also have a multi-repo setup; to trigger it, you can just tag two repos in the Linear label!
sinansaka about 2 hours ago
Nice work! I built a similar system at my previous company. It was built on top of GitHub: the agent was triggered by the created issue, ran in Actions, and saved state in the PR as hidden markdown.

It worked great, but time to first token was slow, and multi-repo PRs took very long to create (30+ mins).

Now I'm working on my own standalone implementation for cloud-native agents.
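The "save state in PR as hidden markdown" trick mentioned above can be sketched as embedding serialized agent state in an HTML comment in the PR body, which GitHub renders invisibly. The marker name and helper functions here are illustrative, not from any actual implementation.

```python
# Sketch of hiding agent state in a PR body: serialize state into an
# HTML comment (invisible when GitHub renders the markdown), then parse
# it back out on the next run. "agent-state" is a hypothetical marker.
import json
import re

MARK = "agent-state"

def embed_state(body: str, state: dict) -> str:
    """Append the state as a hidden HTML comment to the PR body."""
    return f"{body}\n<!-- {MARK}: {json.dumps(state)} -->"

def read_state(body: str) -> dict:
    """Recover the state from a PR body, or {} if none is present."""
    m = re.search(rf"<!-- {MARK}: (.*?) -->", body)
    return json.loads(m.group(1)) if m else {}

pr_body = embed_state("Fixes login bug.", {"step": "review", "attempt": 2})
print(read_state(pr_body))  # {'step': 'review', 'attempt': 2}
```

This keeps the agent stateless between runs: the PR itself is the durable store, at the cost of the re-parse on every trigger.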

yzhong94 about 2 hours ago
Why was the time to first token slow? Was it because of the spin up time for containers? That was an issue for us when we were running on Google's Cloud Run. We switched to Blaxel and it's much faster now. The hibernate feature has been great for comment iteration.
dbmikus about 3 hours ago
Like the detailed setup instructions in the readme!

Also agree that teams should invest in their own harness (or maybe, pedantically, build a system on top of harnesses like Claude Code, Codex, Pi, or OpenCode).

yzhong94 about 3 hours ago
Yes! Broccoli is triggering Codex CLI and Claude Code CLI.
deaux about 2 hours ago
Does that mean you're using API pricing rather than a subscription? Seems like it'd get expensive very quickly for a small team.
yzhong94 about 2 hours ago
It's a bit of a trade-off. When we spun up a new container every time (which we did while using Google Cloud Run), we had to pay API pricing. However, with Blaxel, we can set containers to hibernate, which also gives us the ability to use a subscription.
orliesaurus about 2 hours ago
I use the Codex integration in Linear, can you tell me more about the differences please?
yzhong94 about 2 hours ago
Tell me more about your workflow! For us, the workflow is: we assign the ticket to a bot user we create (broccoli in this case), and broccoli spins up a sandbox and does the execution. Do you trigger task execution from Codex by giving it a Linear ID? That was Broccoli v0, but it of course still requires you to set up Codex with all the right keys.
orliesaurus about 1 hour ago
They say it better than me: https://linear.app/integrations/codex
yzhong94 about 1 hour ago
Oh, got it! In that case, the main difference is that we go through a flow from design to implementation using our own prompts, and use both Codex and Claude Code so they can improve off each other.
Jayakumark about 3 hours ago
Thanks for making it open source. Jira support would be good.
yzhong94 about 3 hours ago
Good point! Adding that to our to-do list. We don't use Jira, but I guess it's still very popular!