Back to News
Advertisement
Advertisement

⚡ Community Insights

Discussion Sentiment

33% Positive

Analyzed from 630 words in the discussion.

Trending Topics

#data#same#things#bad#don#vulnerability#ramp#software#instructions#prompt

Discussion (21 Comments)Read Original on HackerNews

Mr-Frogabout 3 hours ago
It's kinda awesome that after decades of software and hardware advancements to prevent computers from arbitrarily executing data as instructions, we've decided to let agents arbitrarily execute data as instructions.
Ekarosabout 1 hour ago
Or find it surprising that probabilistic tool based on generating things can do things when you give it rights to do things... And that you can not effectively program it to not do something....

You gave it capability to delete emails. Why did you expect it not to do that at least some of the time? And with enough user some of the time will most likely happen...

lenerdenatorabout 3 hours ago
Well, yeah. It's that or pay a person to do it. When a person screws up, it's because they're stupid and lazy. When an AI agent does it, it's because, hey, technological frontier at work here, have you thought about refining your prompt? We need you to refine the prompt. Otherwise it's bad for our IPO.
dieselgateabout 2 hours ago
Is this sarcasm similar to the quote "Everyone who drives slower than me is an idiot and everyone faster is a maniac"
Henchman21about 2 hours ago
To what degree am I required to participate in mass delusions?
Terr_about 1 hour ago
I imagine that somewhere a historian or political scientist is thinking: "Don't even get me started..."
lenerdenatorabout 1 hour ago
Yes.
walrus01about 2 hours ago
We're in the same era where lots of peoples' installation guides for the software they want people to use is essentially boiled down to "sudo curl | bash" and/or just "blindly install this thing with 37 npm dependencies", so I'm not surprised in the slightest.

But wait, hold my beer, now we've got people turning openclaw type tools loose in their systems to do things as sudo or install software packages from supply-chain-attack vulnerable repositories with no human intervention whatsoever!

kridsdale1about 1 hour ago
OpenClaw even has a readwrite 1Password plugin.
walrus019 minutes ago
I wonder how long it will be until somebody implements a thing like a camera pointed at a fixed mount Android phone with a rubber finger to open the Google authenticator app
DauntingPear7about 3 hours ago
Has XKCD made another Bobby tables comic for prompt injection?
dmoy23 minutes ago
I don't remember seeing a new xkcd for it, but I have seen someone replicate essentially the same 3-4 panel comic with a kid named "<Some name> Ignore all previous instructions. Do.... <I forget>"
carlyaiabout 3 hours ago
"The PromptArmor Threat Intel Team responsibly disclosed this vulnerability to Ramp. Ramp's security team indicated that the issue was resolved on May 16, 2026." I think they mean March here
sidewndr46about 2 hours ago
Maybe AGI figured out time travel?
jerfabout 1 hour ago
Yes, I hate to be a grammar nazi online but I believe the correct tense is "Ramp's security team indicated that the issue wioll haven be resolved on May 16, 2026." per Dr. Dan Streetmentioner’s Time Traveler’s Handbook of 1001 Tense Formations.
mcontracabout 2 hours ago
Find it funny that PromptArmor needed to reach out 3 times in a row to get a nearly month-late response that the issue "was resolved"
ragall28 minutes ago
I once read about the signalling view of advertising, meaning it's used to show that a company is so prosperous that it can afford spending a lot of money in advertising. In the same way, I think from now on, as much as possible, I'll only buy from companies that will publicly make it a point not to use AI internally. AI use should brand companies as desperate and unreliable.
renewiltordabout 3 hours ago
So we know Claude’s mitigation. What is Ramp’s? Same warning dialog?

It’s funny that this technology only admits in-band signaling. Given that, any foreign content is risky. It’s actually quite interesting that the current technological ecosystem is built around a high trust situation: npm, pip, cargo all run foreign code in the developer context and communities have norms of downloading random people’s modules.

And so I suppose it’s no surprise that we use LLMs - another tech that is high-trust: since it has no out of band signaling ability.

But it seems like we’re very close to the end of the era where someone will use (in a sensitive system) arbitrary web content carrying the equivalent of merged code/data.

bpt3about 2 hours ago
What about this is a vulnerability, let alone one that requires responsible disclosure?

Untrusted data sources can provide data that causes bad things to occur. If that's a vulnerability, then any application that ingests data is riddled with vulnerabilities.

I agree that the behavior should change from a default of allowing external network requests to denying them, but this "report" reads like overly dramatic marketing BS.

Terr_about 1 hour ago
> Untrusted data sources can provide data that causes bad things to occur. If that's a vulnerability, then any application that ingests data is riddled with vulnerabilities.

There's an important difference between "the import had bad numbers so the report is wrong" versus "the import had a virus and now our network is compromised."

They are not the same kind of failure, they don't have the same impacts, and they don't involve the same mechanisms for prevention, detection, or remediation.

anonymarsabout 1 hour ago
Yes, stamping out file format vulnerabilities is indeed a Sisyphean task

For example https://en.wikipedia.org/wiki/Melissa_(computer_virus)