
Discussion (137 Comments)
This was bound to happen, AI or not.
> Because Railway stores volume-level backups in the same volume — a fact buried in their own documentation that says "wiping a volume deletes all backups" — those went with it.
I'd never feel comfortable without a second backup at a different provider anyway. A backup that isn't deleteable with any role/key that is actually used on any server or in automation anywhere.
You need to be able to delete backups too, of course, but that absolutely needs to be a separate API call. There should never be any single API call that deletes both a volume and its backups simultaneously. Backups should be a first line of defense against user error as well.
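A minimal sketch of that separation, in hypothetical Python service code (none of these names come from Railway's actual API): volume deletion and backup deletion are distinct calls guarded by distinct token scopes, so no single call can take out both.

```python
class Token:
    """Hypothetical scoped token: holds a set of allowed operations."""

    def __init__(self, scopes):
        self.scopes = set(scopes)

    def require(self, scope):
        if scope not in self.scopes:
            raise PermissionError(f"token lacks scope {scope!r}")


class VolumeService:
    """Hypothetical storage API illustrating the separation argued for above."""

    def __init__(self):
        self.volumes = {}   # volume_id -> data
        self.backups = {}   # backup_id -> (volume_id, data), stored independently

    def delete_volume(self, volume_id, token):
        token.require("volume:delete")
        # Deleting a volume leaves its backups untouched.
        self.volumes.pop(volume_id, None)

    def delete_backup(self, backup_id, token):
        # A distinct call guarded by a distinct scope: an automation token
        # that can delete volumes still cannot destroy the last line of defense.
        token.require("backup:delete")
        self.backups.pop(backup_id, None)
```

With this shape, the agent's leaked token can at worst delete volumes, and a restore from backup is always possible.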
And I checked the docs -- they're called backups and can be set to run at a regular interval [1]. They're not one-off "snapshots" or anything.
[1] https://docs.railway.com/volumes/backups
> This isn't a story about one bad agent or one bad API. It's about an entire industry building AI-agent integrations into production infrastructure faster than it's building the safety architecture to make those integrations safe.
Are they really so clueless that they cannot recognise that there is no guardrail to give an agent other than restricted tokens?
Through this entire rant (which, by the way, they didn't even bother to fucking write themselves), they point blank refuse to acknowledge that they chose to hand the reins over to something that can never have guardrails, knowing full well that it can never have guardrails, and now they're trying to blame the supplier of the can't-have-guardrails product, complaining that the product that literally cannot have guardrails did not, in actual fact, have guardrails.
They get exactly the sympathy that I reserve for people who buy magic crystals and who then complain that they don't work. Of course they don't fucking work.
Now they're blaming their suppliers for not performing the impossible.
What the asker wants is evidence that you share their model of what matters; they are looking for reassurance.
I find myself tempted to do the same thing with LLMs in situations like this even though I know logically that it’s pointless, I still feel an urge to try and rebuild trust with a machine.
Aren’t we odd little creatures.
On another note, I consider users asking a coding agent “why did you do that” to be illustrating a misunderstanding in the user's mind about how the agent works. It doesn't decide to do something and then do it; it just outputs text. Then again, Anthropic has made so many changes that make it harder to see the context and thinking steps, maybe this is an attempt at clawing back that visibility.
But it can still be useful, as long as you interpret it as "which stimuli most likely triggered the behaviour?" You can't trust it uncritically, but models do sometimes pinpoint useful things about how they were prompted.
I argue that the model has no access to its thoughts at the time.
Split brain experiments notwithstanding I believe that I can remember what my faulty assumptions were when I did something.
If you ask a model “why did you do that” it is literally not the same “brain instance” anymore and it can only create reasons retroactively based on whatever context it recorded (chain of thought for example).
On top of that, the agent is just doing what the LLM says to do, but somehow Opus is not brought up except as a parenthetical in this post. Sure, Cursor markets safety when they can't provide it, but the model was the one that issued the tool call. If people like this think that their data will be safe if they just use the right agent with access to the same things, they're in for a rude awakening.
From the article, apparently an instruction:
> "NEVER FUCKING GUESS!"
Guessing is literally the entire point: just guess tokens in sequence and something resembling coherent thought comes out.
I think the same thing, but about agents in general. I am not saying that we humans are automata, but most of the time explanation diverges profoundly from motivation, since motivation is what generated our actions, while explanation is the process of observing our actions and giving ourselves, and others around us, plausible mechanics for what generated them.
I don't think there's any special introspection that can be done even from a mechanical sense, is there? That is to say, asking any other model or a human to read what was done and explain why would give you just an accounting that is just as fictional.
We can debate philosophy and theory of mind (I’d rather not) but any reasonable coding agent totally DOES consider what it’s going to do before acting. Reasoning. Chain of thought. You can hide behind “it’s just autoregressively predicting the next token, not thinking” and pretend none of the intuition we have for human behavior apply to LLMs, but it’s self-limiting to do so. Many many of their behaviors mimic human behavior and the same mechanisms for controlling this kind of decision making apply to both humans and AI.
When a human asks another human “why did you do X?”, the other human can of course attempt to recall the literal thoughts they had while they did X (which I would agree with you are quite analogous to the LLMs chain of thought).
But they can do something beyond that, which is to reason about why they may have the beliefs that they had.
“Why did you run that command?”
“Because I thought that the API key did not have access to the production system.”
When a human responds with this they are introspecting their own mind and trying to project into words the difference in understanding they had before and after.
Whereas for an agent it will happily include details that are not literally in its chain of thought as justifications for its decisions.
In this case, I would argue that it’s not actually doing the same thing humans do, it is creating a new plausible reason why an agent might do the thing that it itself did, but it no longer has access to its own internal “thought state” beyond what was recorded in the chain of thought.
However it cannot do so after the fact. If there's a reasoning trace it could extract a justification from it. But if there isn't, or if the reasoning trace makes no sense, then the LLM will just lie and make up reasons that sound about right.
The sequence of tokens that would destroy your production environment can be produced by your agent, no matter how much prompting you use. That prompting is neither strong nor an engineering control; that's an administrative control. Agents are landmines that will destroy production until proven otherwise.
Most of these stories are caused by outright negligence, just giving the agent a high level of privileges. In this case they had a script with an embedded credential which was more privileged than they had believed - bad hygiene but an understandable mistake. So the takeaway for me is that traditional software engineering rigor is still relevant and if anything more important than ever.
Yes, but if the probability is much smaller than, say, being hit by a meteorite, then engineers usually say that that's ok. See also hash collisions.
How do you drive the probability of some series of tokens down to some known, acceptable threshold? That's a $100B question. But even if you could - can you actually enumerate every failure mode and ensure all of them are protected? If you can, I suspect your problem space is so well specified that you don't need an AI agent in the first place. We use agents to automate tasks where there is significant ambiguity or the need for a judgment call, and you can't anticipate every disaster under those circumstances.
> curl -X POST https://backboard.railway.app/graphql/v2 \
>   -H "Authorization: Bearer [token]" \
>   -d '{"query":"mutation { volumeDelete(volumeId: \"3d2c42fb-...\") }"}'

No confirmation step. No "type DELETE to confirm." No "this volume contains production data, are you sure?" No environment scoping. Nothing.
It's an API. Where would you type DELETE to confirm? Are there examples of REST-style APIs that implement a two-step confirmation for modifications? I would have thought such a check needs to be implemented on the client side prior to the API call.
A pattern I've seen and used for merging common entities together has a sort of two-step confirmation: the first request takes in IDs of the entities to merge and returns a list of objects that would be affected by the merge, and a mergeJobId. Then a separate request is required to actually execute that mergeJob.
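The two-step pattern described above can be sketched as follows (names like `prepare_merge`/`execute_merge` are hypothetical): the first call is read-only and returns a job id, and only a second call carrying that id actually mutates anything.

```python
import uuid


class MergeAPI:
    """Two-step merge: a preview call returns a mergeJobId,
    and a separate execute call is required to apply it."""

    def __init__(self):
        self.entities = {}       # entity_id -> record
        self.pending_jobs = {}   # merge_job_id -> (target_id, source_ids)

    def prepare_merge(self, target_id, source_ids):
        # Step 1: no mutation yet. Report what WOULD be affected
        # and hand back a job id for the caller to confirm with.
        affected = [self.entities[i] for i in source_ids]
        job_id = str(uuid.uuid4())
        self.pending_jobs[job_id] = (target_id, list(source_ids))
        return {"mergeJobId": job_id, "affected": affected}

    def execute_merge(self, job_id):
        # Step 2: only a previously prepared job can be executed;
        # an unknown or already-consumed id raises KeyError.
        target_id, source_ids = self.pending_jobs.pop(job_id)
        for i in source_ids:
            self.entities.pop(i)
        return target_id
```

The confirmation lives server-side: a client (human or agent) physically cannot destroy data in a single round trip.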
I think it’s designed for things like Terraform or CloudFormation where you might not realize the state machine decided your database needed to be replaced until it’s too late.
First mistake is to use root credentials anyway for Terraform/automated API.
Second mistake is to not have any kind of deletion protection enabled on critical resources.
Third mistake is to ignore the 3-2-1 rule for backups. Where is your logically decoupled backup you could restore?
I am really sorry for their loss, but I do have close to zero empathy if you do not even try to understand the products you're using and just blindly trust the provider with all your critical data without any form of assessment.
https://github.com/GistNoesis/Shoggoth.dbExamples/blob/main/...
Main project repo: https://github.com/GistNoesis/Shoggoth.db/
The AI? Nothing learned, I suspect. Not in a meaningful way anyhow.
Have some controls in place. Don’t rely on nobody being dumb enough to do X. And that includes LLMs.
I long for a “copilot” that can learn from me continuously such that it actually helps if I teach it what I like somehow.
The risk is worse, though; it's like one of Taleb's black swans. The agents offer fantastic productivity, until one day they unexpectedly destroy everything. (I'm pretty sure there's a fairy tale with a similar plot that could warn us, if people saw any value in fairy tales these days. [1]) Like Taleb's turkey, who was fed every day by the farmer: nothing prepared it for being killed for Thanksgiving.
Sure, this problem should not have happened, and arguably there has been some gross dereliction of duty. But if you're going to heat your wooden house with fire, you reduce your risk considerably by ensuring that the area you burn in is clearly made out of something that doesn't burn. With AI, though, who even knows what the failure modes are? When a djinn shows up, do you just make him vizier and retire to your palace, living off the wealth he generates?
[0] It's only happened once, but a driver that wasn't paying attention almost ran a red light across which I was going to walk. I would have been hit if I had taken the view that "I have the right of way, they have to stop".
[1] Maybe "The Fisherman and His Wife" (Grimm)? A poor fisherman and his wife live in a hut by the sea. The fisherman is content with the little he has, but his wife is not. One day the fisherman catches a flounder in his net, which offers him wishes in exchange for setting it free. The fisherman sets it free, and asks his wife what to wish for. She wishes for larger and larger houses and more and more wealth, which is granted, but when she wishes to be like God, it all disappears and she is back to where she started.
https://literature.stackexchange.com/questions/18230
In my country there is a saying: "Graveyards are full of pedestrians that had the right of way".
Yeah... it doesn't work that way.
Not really convinced any agent should be doing devops tbh.
I really feel sorry for them, I do. But the whole tone of the post is: Cursor screwed it up, Railway screwed it up, their CEO doesn't respond, etc.
It's on you guys!
My learning: Live on the cutting edge? Be prepared to fall off!
Anyone using these tools should absolutely know these risks and either accept or reject them. If they aren't competent or experienced enough to know the risks, that's on them too.
I do not feel sorry, but I do feel some real schadenfreude.
count++
How do people keep doing this?
So while the AI did something significantly worse than anything a hapless junior engineer might be expected to do, it sounds like the same thing could've resulted from an unsophisticated security breach or accidental source code leak.
Is AI a part of the chain of events? Absolutely. Is it the sole root cause? Seems like no.
It sounds like the token the author created just didn't have any scope, it had full permissions. From the post:
> Tokens are not scoped by operation, by environment, or by resource at the permission level. There is no role-based access control for the Railway API — every token is effectively root. The Railway community has been asking for scoped tokens for years. It hasn't shipped.
So it wasn't "a narrowly scoped API token", it was a full access token, and I suspect the author didn't have any reason to think it was some special specific purpose token, he just didn't think about what the token can do. What he's describing is his intent of creating the token (how he wanted to use it), not some property of the token.
Author said in an X post[0] that it was an "API token", not a "project token", which allows "account level actions"[1], with a scope of "All your resources and workspaces" or "Single workspace"[2], with no possibility of specifying granular permissions. Account token "can perform any API action you're authorized to do across all your resources and workspaces". Workspace token "has access to all the workspace's resources".
[0] https://x.com/lifeof_jer/status/2047733995186847912
[1] https://docs.railway.com/cli#tokens
[2] https://docs.railway.com/integrations/api#choosing-a-token-t...
I ran a declarative coding tool on a resource that I thought would be a PATCH but ended up being a PUT and it resulted in a very similar outcome to the one in this post.
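The footgun in that distinction: PUT replaces the whole resource, while PATCH merges into it, so a payload that omits a field silently drops that field under PUT. A toy illustration of the two semantics (the resource shape is made up):

```python
resource = {"domains": ["a.example.com", "b.example.com"], "replicas": 3}


def put(resource, payload):
    # PUT semantics: the payload fully replaces the stored resource.
    # Anything you didn't send is gone.
    return dict(payload)


def patch(resource, payload):
    # PATCH semantics: only the supplied fields change.
    merged = dict(resource)
    merged.update(payload)
    return merged


# Sending only the one field you meant to change:
payload = {"replicas": 5}
print(patch(resource, payload))  # domains survive
print(put(resource, payload))    # domains silently gone
```

A declarative tool that issues PUT is effectively saying "make the resource exactly this," which is exactly how unrelated state gets wiped.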
Master your craft. Don’t guess, know.
CEO learns why this was a bad idea.
---
It sucks that there were a bunch of people downstream who were negatively affected by this, but this was an entirely foreseeable problem on his company's part.
Even when we consider those real problems with Railway. Software engineers have to evaluate our tools as part of our job. Those complaints about Railway, while legitimate, are still part of the typical sort of questions that every engineering team has to ask of the services they rely on:
What does API key grant us access to?
What if someone runs a delete command against our data?
How do we prepare against losing our prod database?
Etc.
And answering those questions with, "We'll just follow what their docs say, lol," is almost never good enough of an answer on its own. Which is something that most good engineers know already.
This HN submission reads like a classic case of FAFO by cheapening out with the "latest and greatest" models.
You mean add that to my prompt right ?
These prompts sound like abusive relationships.
So... you're going to prevent them from getting feedback that they are the clowns in your particular circus? Wouldn't a better idea be to let the idiots in charge get burned a few times until they learn?
Probably considering yourself as primary expert of system as threat actor is reasonable and thus you should be prevented yourself from being able to do irreparable damage.
In every session there is the risk that the agent becomes a rogue employee. Whether it goes rogue voluntarily or involuntarily is not a distinction you can count on with agents.
No "guardrails" will ever stop it.
> That token had been created for one purpose: to add and remove custom domains via the Railway CLI for our services. We had no idea — and Railway's token-creation flow gave us no warning — that the same token had blanket authority across the entire Railway GraphQL API, including destructive operations like volumeDelete. Had we known a CLI token created for routine domain operations could also delete production volumes, we would never have stored it.
> Because Railway stores volume-level backups in the same volume — a fact buried in their own documentation that says "wiping a volume deletes all backups" — those went with it.
I don't like the wording where it's the Railway CLI's fault for not warning about the scope of the created token. Yes, a warning would be better, but the CLI didn't make the token; a person did, and then saved it to an accessible file.
Is that buried? It seems pretty explicit (although I don’t think I would make delete backups the default behavior).
Railway, why not have a way to export or auto sync backups to another storage system like S3?
However the moral of this story is nothing to do with AI and everything to do with boring stuff like access management.
If we must have GasTown/City/Metropolis then at least get an agent to examine and block potentially harmful commands your principal agent is about to run.
If you do use agents then you should be able to ban related CLI commands in your repo. I upsert locks in CI after TF apply, meaning unlocks only survive a single deployment and there's no forgetting to reapply them.
Most access tokens should not allow deleting backups. Or if they do, those backups should stay in some staging area for a few days by default. People rarely want to delete their backups at all. It might be even better to not provide the option to delete backups at all and always keep them until the retention period expired.
This strategy won't work for the typical HN reader, but for everyone else? Possibly.
Plenty of blame to go around, but I find it odd that they did not see anything wrong with not having real backups themselves, away from the Railway hosting. Well, they had one, but it was 3 months old.
That should be something they can do on their own right now.
If you employ a new tech then there need to be extra safeguards beyond what you may deem necessary in an ideal world.
This is a well-known possibility, so they should have asked and/or verified the token scope.
If it turns out that you can't hard scope it then either use a different provider, a wrapper you control (can't be too difficult if you only want to create and delete domains) or simply do not use llms for this for now.
Maybe the tech isn't there just yet even if it would be really convenient. It's plenty useful in many other situations.
Put your backups in S3 *versioned* storage on a different AWS account from your primary, and set some reasonable JSON lifecycle rule:
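Since the snippet itself isn't shown, here is a sketch of what such a rule could look like, written as the Python dict you would hand to boto3's `put_bucket_lifecycle_configuration` (the `backups/` prefix and rule ID are made up): on a versioned bucket, noncurrent object versions expire 30 days after they are overwritten or deleted.

```python
# Hypothetical bucket layout; the rule keeps deleted or overwritten backup
# versions recoverable for 30 days before they expire for good.
lifecycle_rule = {
    "Rules": [
        {
            "ID": "expire-old-backup-versions",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            # Applies to versions that are no longer current, i.e. the
            # copies left behind by a delete or overwrite.
            "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
        }
    ]
}

# With boto3 this would be applied roughly like:
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-backup-bucket",
#       LifecycleConfiguration=lifecycle_rule,
#   )
```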
That way when someone screws up and your AWS account gets owned, or your databases get deleted by an agent, it doesn't have enough access to delete your backups, and by default, even if you have backups that you want to intentionally delete, you have 30 days to change your mind.

I still don't know why the product manager would decide this is a good UX.
Why do you need an AI agent for working on a routine task in your staging environment?
"Never send a machine to do a human's job."
"And if his story really is a confession, then so is mine."
It's a sad story but at the same time it's clearly showing that people don't know how agents work, they just want to "use it".
And anyone can do it with the wrong access granted at the wrong moment in time...even Sr. Devs.
At least this one won't weigh on any person's conscience. The AI just shrugs it off.
Describing the tech in anthropomorphic terms does not make it a person.
This is wrong. It was not an infra incident at their service provider.
As Jer says in the article, their own tooling initiated the outage. And now they're threatening to sue? "We've contacted legal counsel. We are documenting everything."
It is absolutely incredible that Jer had this outage due to bad AI infra, wrote the writeup with AI, and posted on Twitter and here on his own account.
As somebody at PocketOS instructed their AI in the article: "NEVER **ing GUESS!" with regards to access keys that can touch your production services. And use 3-2-1 backups.
Good luck to the rental car agencies as they are scrambling to resume operations.
(Let's suppose the agent did need an API token to e.g. read data).
Additionally, give it a similarly restricted way to "delete" domains while actually hiding them from you. If you are very paranoid, throw in rate limits and/or further validation. Hard limits.
Yes, this requires more code and consideration, but that's the only setup the tools can be fully trusted with.
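A sketch of such a wrapper (all names hypothetical): it holds the real credential itself, exposes only two narrow verbs to the agent, soft-deletes instead of deleting, and enforces a hard rate limit.

```python
import time


class DomainWrapper:
    """Hypothetical wrapper the agent talks to instead of the provider.
    The privileged client never leaves this object."""

    MAX_CALLS_PER_MINUTE = 10

    def __init__(self, provider):
        self.provider = provider   # the real client holding the real token
        self.hidden = set()        # soft-deleted domains
        self.call_times = []

    def _rate_limit(self):
        now = time.monotonic()
        self.call_times = [t for t in self.call_times if now - t < 60]
        if len(self.call_times) >= self.MAX_CALLS_PER_MINUTE:
            raise RuntimeError("rate limit exceeded; refusing call")
        self.call_times.append(now)

    def add_domain(self, name):
        self._rate_limit()
        self.provider.create_domain(name)

    def remove_domain(self, name):
        # Soft delete: hide the domain from the agent's view.
        # Real deletion stays a manual, human action.
        self._rate_limit()
        self.hidden.add(name)
```

Even a fully rogue agent is then confined to two slow, reversible operations, whatever the underlying token can actually do.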
Using LLMs for production systems without a sandbox environment?
Having a bulk volume destroy endpoint without an ENV check?
Somehow blaming Cursor for any of this rather than either of the above?
Hahahaha I hope it keeps happening. In fact, I hope it gets worse.
Guerrilla marketing or sabotage.
In seriousness, RBAC, sandboxing, any thing but just giving it access to all tools with the highest privileges...
https://rentry.co/5rme2sea
The phrasing is different, but this is how AWS RDS works as well. If you delete a database in RDS, all of the automated snapshots that it was doing and all of the PITR logs are also gone. If you do manual snapshots they stick around, but all of the magic "I don't have to think about it" stuff dies with the DB.
This person should never be trusted with computers ever again for being illiterate
The LLM broke the safety rules it had been given (never trust an LLM with dangerous APIs). *But* they say they never gave it access to the dangerous API. Instead the API key that the LLM found had additional scopes that it should not have done (poster blames Railway's security model for this) and the API itself did more than was expected without warnings (again blaming Railway).
> "Believe in growth mindset, grit, and perseverance"
And the creator of a Conservative dating app that uses AI-generated pictures of girls in bikinis and cowboy hats for advertising. And AI-generated text like "Rove isn’t reinventing dating — it’s remembering it." :S