Discussion (170 Comments)
Over a decade ago now, I had a conversation with Gerald Sussman which had enormous influence on me: https://dustycloud.org/blog/sussman-on-ai/
> At some point Sussman expressed how he thought AI was on the wrong track. He explained that he thought most AI directions were not interesting to him, because they were about building up a solid AI foundation, then the AI system runs as a sort of black box. "I'm not interested in that. I want software that's accountable." Accountable? "Yes, I want something that can express its symbolic reasoning. I want it to tell me why it did the thing it did, what it thought was going to happen, and then what happened instead." He then said something that took me a long time to process, and at first I mistook for being very science-fiction'y, along the lines of, "If an AI driven car drives off the side of the road, I want to know why it did that. I could take the software developer to court, but I would much rather take the AI to court."
Years later, I found out that Sussman's student Leilani Gilpin wrote a dissertation which explored exactly this topic. Her dissertation, "Anomaly Detection Through Explanations", explores a neural network talking to a propagator model to build a system that explains behavior. https://people.ucsc.edu/~lgilpin/publication/dissertation/
There has been followup work in this direction, but more important to me here than the particular direction of computation is that we recognize it is perfectly reasonable to hold AI corporations to account. After all, they are making many assertions about systems that otherwise cannot be held accountable, so the best thing we can do in their stead is hold them accountable.
But a much better path would be to not use systems which fail to have these properties, and expand work on systems which do.
I have shot myself in the foot using gparted in the past by wiping the wrong disk. gparted wasn't to blame. I was.
Letting LLMs work freely without supervision sounds great but it will lead to pain. I have to supervise their work. And that is also during execution. You can try to replace a human but we see where this leads. Sooner or later the LLM will do something stupid and then the only one to blame is the person who used the tool.
I worry about the use of humans as sacrificial accountability sinks. The "self-driving car" model already has this: a car which drives itself most of the time, but where a human user is required to be constantly alert so that the AI can transfer responsibility a few hundred milliseconds before the crash.
I point to the first USB port as the harbinger of things to come - try it one way, fail, turn it around, fail again, then turn it around one more time.
Just like AI, except there are unlimited axes upon which to turn it :-/
Still, I think a band saw has very little warning on it, and by its design there is very little anyone can do about me cutting off my finger if I am not careful.
LLM companies can do very little about the unpredictability of LLMs. So we have to choose how far we will let it go. In the end the LLM only produces text. We are in control of what tools we give it. The more tools, the more useful, and also the more dangerous.
And maybe it's all worth it. Maybe the LLM deletes the database only sometimes but between that we make a lot of money. I don't think my employer would enjoy that so I will be more conservative.
These can both be true, especially if/when it has bad defaults. This is why you have things like "type the name of the database you're dropping" safety features - but you also have to name your production database something like "THE REAL DaTabaSe - FIRE ME" so you have to type that and not fall into the trap of ending up with the same name in test/development.
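That kind of guard is trivial to implement. Here is a minimal sketch (the connection object and identifiers are illustrative, assuming a psycopg-style DB-API client with autocommit enabled):

```python
def drop_database(conn, db_name: str) -> None:
    """Drop a database only after the operator retypes its exact name."""
    typed = input(f"Type the database name to confirm dropping {db_name!r}: ")
    if typed != db_name:
        raise SystemExit("Name mismatch; nothing was dropped.")
    with conn.cursor() as cur:
        # Quote the identifier; assumes autocommit, since DROP DATABASE
        # typically cannot run inside a transaction.
        cur.execute(f'DROP DATABASE "{db_name}"')
```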
AI is particularly seductive because it sounds like a reasonable person has thought things out, but it's all just a giant confidence trick (that works most of the time, which makes it even more dangerous).
There were so many fundamental problems with the infrastructure even before the person gave a poor prompt to an agent.
If you're using the same API key for staging and prod--and just storing it somewhere randomly to forget about--you're setting yourself up for failure with or without AI.
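One way to make the "same key everywhere" failure loud instead of silent, as a hedged sketch (the environment variable names are made up):

```python
import os

# One key per environment, stored under distinct names; fail loudly if a
# non-production environment is ever configured with the production key.
ENV = os.environ["APP_ENV"]  # e.g. "staging" or "production"
API_KEY = os.environ[f"API_KEY_{ENV.upper()}"]

if ENV != "production" and API_KEY == os.environ.get("API_KEY_PRODUCTION"):
    raise RuntimeError("Non-production environment is using the production API key.")
```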
Much like how a poor workman always blames his tools, people using poor tools always blame themselves.
I mean, Donald A. Norman wrote The Psychology of Everyday Things in the 80s! (It later became "The Design of Everyday Things".)
And yet, today, we will still have a bunch of people defending Gnome's design decisions, or the latest design decisions from Apple, etc.
Except it is definitely not.
LLMs alone are highly non-deterministic, even at a high level, where they can even pursue goals contrary to the user's prompts. Then, when introduced into ReAct-type loops and granted capabilities such as the ability to call tools, they are able to modify anything and perform all sorts of unexpected actions.
To make matters worse, nowadays models not only have the ability to call tools but also to generate on the fly whatever ad-hoc script they want to run, which means that their capabilities are not limited to the software you have installed on your system.
This goes way beyond "regular tool" territory.
"LLMs are a tool [like every other tool]" to mean "LLMs have similar properties to other tools" — when I believe they meant "LLMs are a tool. other tools are also tools," where the operative implication of "tool" is not about scope of capabilities or how deterministic its output is (these aren't defining properties of the concept of "tool"), but the relationship between 'tool' and 'operator':
- a tool is activated with operator intent (at some point in the call-chain)
- the operator is accountable for the outcomes of activating the tool, intended or otherwise
The capability of a tool to call sub-tools is only relevant insofar as it expresses how much larger the scope of damage and the surface area of accountability are with a new generation of tools. This is not that different from past technological leaps.
When a US bomber dropped a nuke on Hiroshima, accountability went up the chain to the wartime president who gave the authorization to the military and air force to execute the mission; the scope of accountability of a single decision was far larger than supreme commanders had in prior wars. If the US government decides to deploy an LLM to decide who receives and who is denied healthcare coverage, social security payments, voting rights, or anything else, the head of whichever agency authorized the use of that tool should be held accountable, non-determinism of the tool be damned.
Giving up control is a decision. The consequences of this decision are mine to carry. I can do my best to keep autonomous LLMs contained and safe but if I am the one who deploys them, then I am the one who is to blame if it fails.
That's why I don't do that.
If you stay away from the corporate SaaS token vendors and run your own, you will find LLMs are deterministic, based purely on the exact input. As long as the context window's tokens are the same, you will get the same output.
The corporate vendors do tricks, swap models, and carry context over from other chats. It makes one-shot questions annoying because unrelated chats will creep into your context window.
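For what it's worth, that claim is easy to test against a self-hosted model. A sketch assuming a llama.cpp-style, OpenAI-compatible server on localhost (port and seed support vary by server, and some GPU kernels can still introduce nondeterminism):

```python
import requests

def complete(prompt: str) -> str:
    """Query a local OpenAI-compatible endpoint with a pinned seed and
    temperature 0, so identical prompts should yield identical tokens."""
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0,
            "seed": 42,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Same context window, same output.
assert complete("What is 2+2?") == complete("What is 2+2?")
```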
Also, most LLMs are not run in a simple "I write a prompt, I read the output" loop. Usually you have MCPs or other tools connected. These will change the input, and that will probably lead to different outputs. Otherwise it wouldn't be a problem at all.
It's not just AI. It's so much of modern software - often working together with modern financialization trends.
[1] Basically technology-focused sociology for my purposes, the field is quite broad.
Since machines don't yet have the ability to take accountability, it falls on the human to do that. And organizations must enable / enforce this so they too can learn and improve.
Without that, there's a lot of dependency being pushed on the machine to (cross fingers) not make the same mistake again.
Management has been doing a wonderful job of eschewing accountability for decades.
It's a lot of people's dream to be able to say: yeah, our product doesn't work, but it's not OUR fault, and the client just shrugs and grumbles "ai ai ai", and puts up with it because they know they can't get better service anywhere else.
It's not MY fault my website is down: it's Amazon's! It's not MY fault my app doesn't work: it's Claude Code's!
Currently, from a legal perspective, AI is considered a "tool" without legal persona. So you sue the developer, the owner, or the user of the AI. (Just kidding, any lawyer worth his/her salt will sue all three! But you get the point.)
Legally speaking, AI will probably be viewed that way for a long time. There are too many issues militating against viewing it any other way. Owners will not give up property rights. No will to overbear. On and on and on.
Everyone thinks they have the right to judge, and use the massive amounts of available information to do so, even if they haven’t been trained to judge.
It's not about judging. We are socializing the losses to the public and capitalizing the profits for the already wealthy.
How would that work? You have the AI explain its reasoning - and trust that this is accurate - and then you decide whether that is acceptable behavior. If not, you ban the AI from driving because it will deterministically or at least statistically repeat the same behavior in similar scenarios? Fine, I guess, that will at least prevent additional harm. But is this really all that you want? The AI - at least as we have them today - did not create itself and choose any of its behaviors, the developers did that. Would you not want to hold them responsible if they did not properly test the AI before releasing it, if they cut corners during development? In the same way you might hold parents responsible for the action of their children in certain circumstances?
Or maybe the accountability flows upward from the AI to the corp that created it? Sounds nice, but we know that accountability doesn't work that way in practice.
I think I'd rather have the corporation primarily accountable in the first place rather than have the AI take the bulk of the blame and then hope the consequences fall into place appropriately.
Imagine two parallel universes:
- in one, you take ten minutes to make a dashboard that shows management what they asked for. It passes code review before merge and the exec who asked for it says it's what they wanted.
- in the other, you take a day or two to make it. Again, it passes code review before merge and the exec who asked for it says it's what they wanted.
Which version of you is more likely to get positive versus negative feedback? Even if the quick-to-build version isn't actually correct? If you're too slow and aren't doing enough that looks correct, you'll be held accountable. But if you're fast and do things that look correct but aren't, you won't be held accountable. You'll only be held accountable for incorrect work if the incorrectness is observed, which is rarer and rarer with fewer and fewer people directly observing anything.
So oddly, with nobody doing it on purpose, people get held accountable specifically for building things the way you're advocating.
I imagine that orgs that do lots of incorrect work could be outcompeted but won't be, because observability is hard and the "not get in trouble" move is to just not look too hard at what you're doing and move to the next ticket.
Why is it possible for you to fat-finger your way to deleting a production database locally?
That is mildly concerning, and I will grant holding the AI accountable to some degree when it is actively being malicious like that, even though the user could have locked things down even more.
But it had write access to the prod DB without circumventing controls and dropped your tables? That is just a total fail.
Not actually about technology at all, but about organizational structure.
Oddly, despite LLMs being these huge networks with billions of parameters, we still probably understand them better than we do our own brains.
Human brains and cognition do not work like LLMs, but that aside, it's irrelevant. Existing machines can explain what they did; that's why we built them. As Dijkstra points out in his essay on 'the foolishness of natural language programming', the entire point of programming is: (https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...)
"The virtue of formal texts is that their manipulations, in order to be legitimate, need to satisfy only a few simple rules; they are, when you come to think of it, an amazingly effective tool for ruling out all sorts of nonsense that, when we use our native tongues, are almost impossible to avoid."
So to 'program' in English, when you have a comparatively error-free and unambiguous way to express yourself, is, in his words, like 'avoiding math for the sake of clarity'.
Doesn't symbolic AI have a lot of philosophical problems? Think back to Quine's two dogmas - you can't just say, "Let's understand the true meanings of these words and understand the proper mappings". There is no such thing as fixed meaning. I don't see how you get around that.
Deep learning is admittedly an ugly solution, but it works better than symbolic AI at least.
I think my friend Jonathan Rees put it best:
More on that: https://dustycloud.org/blog/identity-is-a-katamari/

This reverse engineering effort is important between you and me, in this exchange right here. It is a battle that can never be won, but the fight of it is how we make progress in most things.
This has very specific implications in symbolic AI specifically, where historically the goal was mapping out the 'correct' representation of the space, then running formal analysis over it. That's why it's not a black box: you can trace out all of the steps. The issue is that symbolic AI just doesn't work, at least to my knowledge, as compared to all the DL wins we have.
I think the win of transformers proves that symbolic AI isn't the way. At the very least, the complex interactions that arise from in-context learning clearly in no way imply some fixed universal meaning for words, which is a big problem for symbolic AI.
Meaning is more fixed than it is not.
Some key inherent differences from older engineering fields are that software can be more complex than physical devices, and that its functionality can be obfuscated, because it is written as text but distributed as binaries.
However, the main problem is that software has not been subjected to enough legal regulation. Ultimately, all law does is draw lines somewhere in the gray between black and white, but in the case of software there are few lines drawn at all, due to many political and economic reasons. Once we draw the lines, most issues will be resolved.
One thing that becomes very clear from this sort of work is just how bad LLMs are. It can be invisible when you're working with them day to day, because you tend to steer them to where they are helpful. Part of game theory though is being robust. That means finding where things are bad, too, not just exploring happy paths.
To get across just how bad the failure cases of LLMs are relative to humans, I'll give the example of tic tac toe. Toddlers can play this game perfectly. LLMs, though, don't merely do worse than toddlers. It is worse than that. They can lose to opponents that move randomly.
They can be just as bad as you move to more complex games. For example, they're horrible at poker. Much worse than humans. Yet when you read their output, on the surface it looks as if they are thinking about poker reasonably. So much so, in fact, that I've seen research efforts that were very misguided: people trying to use LLMs to understand things about bluffing and deception, despite the fact that the LLMs didn't have a good underlying model of these dynamics.
It is hard to talk about, because there are a lot of people who were stupid in the past. I remember people saying that LLMs wouldn't be able to be used for search use-cases years back and it was such a cringe take then and still is that I find myself hesitant to talk about the flaws. Yet they are there. The frontier is quite jagged. Especially if you are expecting it to be smooth, expecting something like anything close to actual competence, those jagged edges can be cutting and painful.
It's also only partially solvable through scale. Some domains have a property where, as you understand them better, the options are eliminated and constrained such that you can better think about them. Game theory, in order to reduce exploitability, explores the whole space. It defies minimization of scope. That is a problem, since we can prove that for many game-theoretic contexts the number of atoms is eclipsed by the number of unique decisions. Even if we made the model the size of our universe, there would still be problems it could, in theory, be bad at.
In short, there is a practical difference between intelligence and decision management, in much the same way there is a practical difference between making purchases and accounting. And the world in which decisions are treated as seriously as they could be exceeds our faculties so much that most people cannot even begin to comprehend the complexity.
If you tell Terraform the wrong thing it will remove your database and not be accountable either.
Tools cannot eschew accountability. But the users of the tools can and that is exactly what happened in the PocketOS fiasco.
Just as a company is responsible for the actions of its junior employees, so too are users responsible for their LLMs.
"It is a poor workman who blames his tools."
We're different.
People have fairly consistent faults. LLMs are nondeterministic even in terms of how they fail. A high value human resource can be counted on to deliver. That, imho, is in fact one of the primary roles of good management: putting the right person in the appropriate position.
Process engineering has worked to date because both the human and mechanical components of a system fail in predictable ways and we can try to remedy that. This is the golden bug of the current crop of "AI".
Non-deterministic systems that work probabilistically are just superior in function to that, even if it makes us all deeply uncomfortable.
If you give the AI agency to execute some task, you are still responsible. In the near term we should focus on tooling for auditing and sandboxing, and human in the loop confirmations.
We can't even do this. They are worth too much money already to ever be held really accountable.
The best we can ever hope for is they might occasionally be hit with relatively insignificant "cost of doing business" fines from time to time.
Why is there a group of people always obsessed with symbolic reasoning being the only way AI can function, who regularly fail to explain how humans (who are not strict symbolic reasoning machines at any level) work?
Tracebacks, debuggers, logging, etc. We put enormous resources into not only the bad case, but the potential that a bad case could occur. When something goes wrong, we want to know why, and we want to make sure that something bad like that doesn't happen again.
Also, court is unavailable in many cases now. Binding arbitration is very common now, but this would be illegal in many other places.
I am almost certain that even if you did get what you want, something that isn't what you want will run circles around you and eat your lunch
EDIT: I suspect this will be an unpopular take on Hacker News. And so I am soliciting upvotes for visibility from other biologists and sympathetic technologists. I think everyone should try to grapple with this possibility <3
Yes, exactly. Spoken like a true biologist. It's not really surprising that there's a massive backlash against AI, introducing an unnatural predator into the ecosystem of humans. People don't want to be lunch.
> even if you do get [cathedral], [bazaar] will run circles around you…
It's nested and recursive cathedrals and bazaars, all the way down. And perhaps the bazaar has finally arrived inside the favourite cathedral of most everyone here
Second, there is a legitimate reason to destroy a database in development and automation. The biggest problem I see is often treating your development data like pets not cattle. You absolutely need to have safeguards that this cannot be run in production, but if a human has access to the credentials to run in production, the agent has access.
So, then, what do we do? In a larger organization, we can depend on the dev/ops split to maintain this. For a solo developer, or a small team, it takes a lot more discipline. Even before AI, junior and even mid-level developers didn't have the knowledge to segment. And senior devs often got complacent because they thought they knew enough.
They likely need some combination of https://www.cloudbees.com/blog/separate-aws-production-and-d..., introduction to terraform, introduction to GitHub actions, and some sort of vm where production credentials live (and AI doesn't!)
But at that point you're past vibe coding. And from what I can tell, the successful vibe coders are quickly learning that they need to go past it pretty quickly with all these horror stories.
And in both cases, the humans don't need direct access to the raw CSP API. Use a local proxy that adds more safety checks. In dev, sure, delete away.
In prod, check a bunch of things first (like, has it been used recently?). Humans do not need direct access to delete production resources (you can have a break-glass setup for exceptional emergencies).
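A sketch of one such proxy-side check. The client object and its methods here are hypothetical stand-ins for a cloud provider SDK, not any real API:

```python
from datetime import datetime, timedelta, timezone

RECENTLY = timedelta(hours=1)

def guarded_delete_volume(csp, volume_id: str) -> None:
    """Refuse deletion if the volume saw I/O recently.

    `csp`, `last_io_at`, and `delete_volume` are made-up names standing
    in for whatever your provider's client actually exposes.
    """
    last_io = csp.last_io_at(volume_id)
    if last_io and datetime.now(timezone.utc) - last_io < RECENTLY:
        raise PermissionError(
            f"{volume_id} had I/O at {last_io.isoformat()}; refusing to delete."
        )
    csp.delete_volume(volume_id)
```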
The article proposes automation as the solution for such mistakes. But infrastructure automation tools like Terraform rely on the exact API that resulted in the database getting deleted.
IMO the biggest mistakes were:
1. Having an unrestricted API token accessible by AI. Apparently they were not aware that the token had that many permissions.
2. No deletion protection on the production database volume.
3. Deleting a volume immediately deletes all associated snapshots. Snapshot deletion should be delayed by default. I think AWS has the same unsafe default, but at least their support can restore the volume. https://alexeyondata.substack.com/p/how-i-dropped-our-produc...
AI wasn't the main issue (though it grabbing tokens from random locations is rather scary). But automation isn't the answer either, a Terraform misconfiguration could have just as easily deleted the database.
Their cloud provider needs to work on safe defaults (limited privileges and delayed snapshot deletion), and communicating more clearly (the user should notice they're creating an unrestricted token).
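On point 2 above: where the platform supports it, deletion protection is cheap to turn on. A minimal sketch using AWS RDS via boto3 (the instance identifier is made up; Railway's equivalent, if it exists, will differ):

```python
import boto3

# On AWS RDS, deletion protection is a one-line setting; subsequent
# delete_db_instance calls fail until it is explicitly switched off.
rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="prod-db",  # illustrative identifier
    DeletionProtection=True,
    ApplyImmediately=True,
)
```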
The same people who would blame AI for their failing to properly configure permissions would also blame interns for deleting production whatever.
Blame should go up, praise should go down. People always invert these.
I’d like to rephrase this as: this is why you don’t give interns permissions to delete your prod database.
This is a process failure, not an AI failure.
I honestly don’t understand why people blame AI here, when you literally gave AI permissions to do exactly this.
It’s like blaming AWS for exposing some database to the public. That’s just not AWS’ fault. Neither is this the fault of AI.
This sounds similar to what's described in the "Claude deleted my DB post", it decided "I need to do X", then searched for whatever would let it do X, regardless of intended purpose.
So, here at least some of the blame belongs to Railway - how they organized their security, how the volume deletion deletes backups as well.
They since fixed some of these issues, so a similar mistake from someone won't be as catastrophic.
Nowadays AI code assistants are designed to execute their tools in your personal terminals using your personal credentials with access to all your personal data. See how every single AI integration extension for any IDE works.
You cannot shift blame if by design it is using your credentials for everything it does.
Are you being hyperbolic here? Of course you understand why. Most people would much rather push blame somewhere else, anywhere else, than to accept fault for themselves. Whether that's because of fear of losing job or personal reputation, the reasoning doesn't really matter.
At many serious companies, even an insider attempt to access prod could light up a dashboard somewhere, and you might get a call from IT security.
> "Why did you delete it when you were told never to perform this action?" Then he tried to parse the answer to either learn from his mistake or warn us about the dangers of AI agents.
Rather, the AI was able to carry out the deletion by finding and exploiting an unintended weakness in the sandboxed staging environment, ultimately obtaining permissions that the sysadmins believed were inaccessible (my impression is that the author of the linked article didn't fully read the original post).¹
The dynamics are typical of an improperly configured sandbox environment. What is alarming, however, is the degree of autonomy and depth of exploration the AI displayed.
¹ "To execute the deletion, the agent went looking for an API token. It found one in a file completely unrelated to the task it was working on."
Claude Code made a change on March 26th to skip asking for most permissions. See this quote "Claude Code users approve 93% of permission prompts. We built classifiers to automate these decisions":
https://www.anthropic.com/engineering/claude-code-auto-mode
However, at least in the US, it is usual for companies to advise against use of their products in a way that may cause harm, and we certainly don't see that from the LLM vendors. We see them claim the tech to be near human level, capable of replacing human software developers (a job that requires extreme responsibility), and see them withholding models that they say are dangerous.
Where are the warnings that "product may fail to follow instructions, and may fail to follow all safety instructions"? Where is the warning not to give the LLM agency and let it control anything where there are financial/safety/etc consequences to failure to follow instructions?
To summarise them:
1. Do not anthropomorphise AI systems.
2. Do not blindly trust the output of AI systems.
3. Retain full human responsibility and accountability for any consequences arising from the use of AI systems.
I would like to see the language around AI become less anthropomorphic and more technical. I believe that precise language encourages clear thinking and good judgement. If we treat AI like another tool and use language that reflects that, it will become abundantly obvious that in many cases, the responsibility of any 'mistake' made by the tool falls on the user of the tool.
But alas, ideas like this do not travel very far when I express them on my small website. It would help if more prominent personalities articulated these principles, so they become more widely adopted.
This is maddeningly difficult IMX.
An AI system can't lie, and it can't deliberately ignore your directions. The current frontier class does not have a model of the world or of its actions; they live in a world of words. Scolding them or arguing with them has no point other than to scramble the context window.
I do think zoomorphizing them might be useful. These poor little buggers, living as ghosts in the machine, are pretty confused sometimes, but their motives are purely autoregressive.
So if the tool doesn't do what it's supposed to be doing we should blame the user instead of the company that made the tool?
Even in that quote, I do not say that the user must be responsible. The point is that responsibility and accountability should remain with some humans. Depending on the case, those humans may be the people who manufactured the tool, the people who deployed it or the people who took bad output from the tool and applied it to the real world.
Did you read the actual section at <https://susam.net/inverse-laws-of-robotics.html#non-abdicati...>? It has more nuance than what the summary alone can capture.
The actual "AI deleted my database" story is really more of a "Railways' database 'backup' strategy is insane and opaque and Railway promoting AI infrastructure orchestration without guardrails is dangerous."
If removing Trunk had irrevocably deleted it from a single centralized server and also deleted any backups of it, there would have been an "SVN and the CLI destroyed our company" article back then.
As a Railway user, I appreciated that information and have changed my strategy when using them.
Yes. However, if you choose to build on their platform you bear the responsibility to understand how it works. You could have chosen a different platform, or no platform. Instead you chose Railway. Given that, it's your responsibility to know how to use it safely.
They had a Railway token in an unrelated file (unclear if it was a local secret) for managing custom domains. It turns out that token has full admin access to Railway.
The AI deleted a single relevant volume by id. The author is rather vague about what exactly they asked it to do; he just says there was a “credentials mismatch” and Claude took the initiative to fix it by deleting the volume. But it’s likely that they are somewhat downplaying their culpability by being vague.
It turns out too that Railway stores backups in the same volume.
I think that OP is exaggerating with their references to “a public API that deletes your database”.
I’d say most of the blame lies with Railway here, regardless of AI, this could have happened easily due to human error or malicious intent too.
I really don’t get the value of all these VC-funded high-abstraction cloud services like Railway, Vercel, Supabase… It’s markup on top of markup. Just get a single physical server at Hetzner and it will all be so much cheaper, with a similar level of complexity and danger, and less dependence on infra built with a reckless growth-at-all-costs mentality.
I was just talking to my girlfriend, saying I've realised that I haven't written a single line of code, nor debugged anything myself, for at least the past 3 months.
Having said that, given what I've seen Claude do, I find it hard to believe that Claude would go from credential mismatch to delete the volume. I understand LLMs are probabilistic, but going from "credentials wrong" to "delete volume" is highly unlikely.
> Supabase
I don't know enough about the Railway/Vercel/Replit, but I can tell you Supabase adds a huge amount of value. The fact that I don't have to code half of things that I otherwise would is great to start something. If it's too expensive, I can implement things later once there is revenue to cover devs or time.
That said, Claude seems to have gotten a lot more careful about these kinds of things in the last couple months
That's probably not quite correct. I'd guess the snapshots are synchronized elsewhere (e.g. object storage). But the snapshots are logically owned by the volume resource, and deleting the volume deletes the associated snapshots as well. I think AWS EBS volumes behave like that as well.
But that won't take away the LLM's tendency to confuse what's in dev, what's in production, what's on localhost and what's remote. I've been working on getting a tools/skill for opencode that works with chrome/devtools via a linuxserver.io image. I can herd it to the right _arbitrary_ ports, but every compaction event steers it back to wanting to use the standard 9222 port and all that. I'm tempted to just revert it, but there's a security and, now, a security-through-LLM-obscurity value in not using defaults. Defaults are where the LLM ends up being weak. It will always want to use the defaults. It'll always forget it's supposed to be working on a remote system.
Using opencode, there's no way to force the LLM into a protocol that limits its damage to a remote system or a narrow scope of tools. Yes, you can change permissions on various tools, but that's not the weakness exposed by these types of events. The weakness is that the LLM is an averaged 'problem solver', so it will always tend towards a use case that's not novel, and will tend to do whatever it saw on Stack Overflow, even if what you wanted isn't the Stack Overflow answer.
If they wanted, they could be putting in similar efforts to be more cautious and stop at the right times to ask for help.
So yeah, of course we're ultimately responsible for how we use the tools. But I definitely think it's a two way street.
To attempt an analogy, it's like table saws and sawstops. The table saw is a dangerous tool that works really well most of the time but has some failure modes that can be catastrophic. So you should learn how to use it carefully. But there is tech out there that can stop the blade in an instant and turn a lost finger into barely a nick on the skin.
We could say "The table saw didn't cut off your finger, you did" and it'd be true. But that doesn't mean we shouldn't try to find ways to keep the saw from cutting off your finger!
LLMs stopping and asking more would make them less useful. I'd much rather let an agent run for 1 hour, than it wanting my input every 15 mins, even if results are somewhat worse.
The real solution for security is a proper sandbox.
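As one concrete (and deliberately blunt) version of that, a sketch that shells out to Docker; the image, paths, and task script are illustrative:

```python
import subprocess

# Coarse sandbox: run an agent's step inside a container with no network
# and a read-only root filesystem, so a confused model cannot reach cloud
# APIs or write outside a single scratch mount.
subprocess.run(
    [
        "docker", "run", "--rm",
        "--network=none",
        "--read-only",
        "-v", "/tmp/agent-scratch:/work",
        "python:3.12-slim",
        "python", "/work/task.py",
    ],
    check=True,
)
```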
Decades ago we embraced POLA. What happened to basic hygiene? Sure the agent "screwed up", but it never should have had this access in the first place.
I will always remember how he told me "Don't worry, it happens fairly often".
Yet the narrative was mostly not about accountability for him. If I was a dumbass and deleted prod and wrote a post about it, nobody would care. Put an AI in there and all of a sudden it's newsworthy. Ridiculous.
They're hired to be responsible for some part of the product.
Introducing AI doesn't remove that responsibility.
Folks tend to focus on the code and the tools they're using (maybe I'm cynical from years in the industry). I don't think your boss wants to do your job, even if they could use AI to do it. I think your boss wants to have a headcount, and he wants the headcount to be responsible for the product.
Why can you delete a network load balancer that is still getting traffic?
Why can you delete a VM that is getting non-trivial network traffic?
Why can you delete a database that has sessions / requests in the last hour?
Why can you drop a table that has queries in the last hour?
It just goes to show, if you're a jerk, expect to be treated like one (even by an AI model)! Be polite, people.
The core issue is that the LLM had access to perform that action. Because it's by definition non deterministic, and you never know what it can decide to do, you need to have strict guardrails to ensure they can never do something it shouldn't. At the very least, strict access controls, ideally something more detailed that can evaluate access requests, provide just in time properly scoped access credentials, and potentially human escalation.
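The cheapest version of such a guardrail is an allowlist sitting between the model and its tools. A minimal sketch (the tool names and stub functions are hypothetical):

```python
# Hypothetical tool registry for an agent loop; only reads run unattended.
TOOLS = {
    "get_logs": lambda service: f"(logs for {service})",
    "delete_volume": lambda volume_id: f"(deleted {volume_id})",
}
READ_ONLY = {"get_logs"}

def dispatch(tool: str, args: dict, human_approved: bool = False):
    """Run read-only tools freely; escalate anything destructive."""
    if tool not in TOOLS:
        raise KeyError(f"unknown tool {tool!r}")
    if tool not in READ_ONLY and not human_approved:
        raise PermissionError(f"{tool} is destructive; human sign-off required.")
    return TOOLS[tool](**args)
```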
Sometimes it does that. And sometimes it lets you fuck things up at scale.
I'm happy the guy got his data back.
Now: the CEO gets paid the big bucks and has the least direct accountability, very much because it's their job to take responsibility for people more powerful than them, and likewise the CTO with major commercial software contracts like a Claude subscription. That's why this guy was so hard to take seriously: okay fine, you got burned by Anthropic, stop being a baby about it. Take responsibility for not listening to the critics.
But - to be a little more neutral about my personal distaste - I do think vibe coders are making a very similar mistake to C developers throughout the 90s, where problems with the tooling were not merely dismissed, but actively valorized.
Real Devs use buffers freely and don't make overflow errors.
Real Devs use hands-free agentic development and don't delete production databases.
"And it confessed in writing" - no, it created probabilistically token after token based on the context without any other access to what happened.
LLMs can't explain themselves in the manner relevant here, much less confess.
In D&D 3.5 edition, there was a rule about how you could "take 20" on a d20 roll to get a guaranteed 20 by taking 20 times as long in-game to perform the action, but only if it was a check that didn't have consequences for failure, since it was essentially a shortcut to skip the RNG of rolling until you rolled a 20. Maybe framing it like this might make sense to people a bit more, but if not, I'll at least have more fun making my case.
Not picking on you specifically, but in general the comments here have me wondering if AI has stolen our basic reading comprehension, or if we were always this bad.
Anyway, take “LLM user had delete permission” off your list and add “deleting the production db also deletes all the backups” to the list.
The issue isn't with the amount of guardrails in place to perform an action. Yes, it is obvious that there should be some in place before doing any critical operation, such as deleting a database.
The issue is that the "agent" completely disregarded instructions, which in the age of "skills" and "superpowers" seems like an important issue that should be addressed.
Considering that these tools are given access to increasingly sensitive infrastructure, allowed to make decisions autonomously, and are able to find all sorts of loopholes in order to make "progress", this disaster could happen even with more guardrails in place. Shifting the blame on the human for this incident is sweeping the real issue under the rug, and is itself irresponsible.
There are far scarier scenarios that should concern us all than losing some data.
There is currently no way to prevent this apart from not giving the LLM full control. It will not delete what it can not delete.
Use an LLM to write an ansible playbook or some terraform code if you want, but review it, test it, apply it. Keep backups (3-2-1 rule at minimum).
Letting an LLM have access to everything is just a bad idea and will lead to bad outcomes. You can not replace a person with a mind and experience with an LLM. You can try. But you will probably fail.
But deleting something is just one action you might not want it to take.
The recent "agentic" craze is fueled by the narrative pushed by companies and influencers alike that the more access given to an LLM, the more useful it becomes. I think this is ludicrous for the same reasons as you, but it is evident that most people agree with this.
We can blame users for misusing the tools, and suggest that sandboxing is the way to go, but at the end of the day most people will favor convenience over anything else a reasonable person might find important.
So at what point should we start blaming the tools, and forcing "AI" companies to fix them? I certainly hope this is done before something truly catastrophic happens.
Still, if I cut off my finger with a bandsaw, that is usually my fault. I didn't use the tool in a safe way. People have to learn how to use their tools safely. You wouldn't give an intern that much power on day one.
Plausible text sometimes is right, sometimes not.
Humans have a world model, a model of what happens. LLMs have a model of what humans would plausibly say.
The only good guardrail seems human-in-the-loop.
I'm getting so tired of this.
AI is being sold as a developer, just as it is being sold as the do-everything alternative to traditional processes and methods. It is not being sold as an intern or a junior, but as a real developer.
Turning the tables and gaslighting devops professionals into believing the issue isn't an emerging technology with overwhelmingly heavy-handed marketing and a profitless operating strategy that's been shoehorned into seemingly everything and promises anything, but somehow their own oversight, will destroy whatever "vibe code" market you think you have at the cusp of a global recession.
Had this AI been a real programmer, chances are great they would have (intelligently) foreseen the possibility of damaging a production environment and asked for help.
To play devil's advocate: you could hire a junior dev for a fourth of whatever the AI token spend is, and have likely avoided this issue entirely. Sure, a greybeard is going to need to pull themselves away from some fierce sorting-algorithm challenge for a second to give a wizened nod, but you would have saved yourself an inordinate amount of headache and profit loss in the longer run.
If someone left a loaded gun in a room and then let a toddler run around in it, we would be questioning why the guy 1) left the gun in the room 2) left the toddler in the room unsupervised. We wouldn't be saying, well no one should have toddlers in rooms.
> if you're going to use AI extensively, build a process where competent developers use it as a tool to augment their work, not a way to avoid accountability
I'd say yes and no. The LLM reacted to the input that was given but it is not possible for a human (especially without access to the weights) to even guess what will happen after that.
Regardless of that, I agree that it's completely the fault of the user to use a tool where you can't predict the outcome, give it such broad permissions, and not have a solid backup strategy.
Either don't use non-deterministic tools or protect yourself from the potential fallout.