As AI tools become more common in production, sadly, so will high-profile incidents like the one mentioned.
Fewshell is a terminal agent specifically designed to avoid this.
There is no setting to enable command auto-approval. This is by design, so the user never has to second-guess themselves or worry about accidentally having it enabled.
Originally my intention was to build an AI mobile terminal to make typing shell commands easy. But with so many mobile-enabled 'claw' agents already available, I decided to make Fewshell the opposite of an autonomous agent.
Please star the repo if you like it, and let me know what you think. Happy to answer questions.
About me: I'm an ex-Amazon Sr. SDE from Alexa AI, and I currently work in AI safety research on agentic RLVR. I use this tool to run and check on my lab experiments.

Discussion (3 Comments)
LLMs, like fire, are a powerful tool. Some people play with fire and achieve great things; some play with fire and get burned. And a number of them achieve great things and get burned. We need to understand that and learn from our mistakes.
Instead, wrap the agent so it cannot destroy stuff in the first place. And if you still want it to "be able to destroy databases in production", do so by copy-pasting stuff out of the isolated environment. I've run codex as root, as "dangerously as possible" with zero approvals, since the launch of the TUI, and never hit a snag, because the agent literally doesn't have access to snag things up.
Agents WILL make mistakes; it's up to you to set things up so you don't get utterly fucked when that eventually happens. Not piling on tens of MCP tools, not authenticating with every platform, service, and database, and not giving the agent access to every directory on your computer solves 99% of the issues people are having, and there are numerous simple ways to do this.
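To illustrate, one simple version of the "isolated environment" approach is a single Docker invocation. This is a sketch of the idea, not anything Fewshell or codex actually ships; `my-agent-image` and the `/workspace` path are placeholders, and it assumes Docker is installed:

```shell
#!/bin/sh
# Build a docker command that denies network access, keeps the root
# filesystem read-only, and mounts exactly one project directory
# read-only -- so a runaway agent can't reach credentials, other
# directories, or production services on the host.
build_sandbox_cmd() {
  printf '%s' "docker run --rm -it \
--network none \
--read-only \
--tmpfs /tmp \
-v $PWD/project:/workspace:ro \
-w /workspace \
my-agent-image"
}

# Print the command so you can review it before running it:
build_sandbox_cmd
echo
```

If the agent produces something you actually want to apply to a real system, you copy it out of the sandbox and run it yourself, which is exactly the approval step the comment above describes.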