
Discussion (66 Comments)
So sensitive doesn’t mean encrypted. It means the UI doesn’t show the dev what value’s stored there after they’ve updated it. Not sensitive means it’s still visible. And again, I presume this is only a UI thing, and both kinds are stored encrypted in the backend.
I don’t work for Vercel, but I’ve used them a bit. I’m sure there are valid reasons to dislike them, but this specific bit looks like a strawman.
Yeah, I'm very confused. It's not possible to keep env vars encrypted from the program that needs them; even if they're encrypted at rest, they have to be decrypted before the program starts. Env vars are injected as plain text. That's just how env vars work, nothing to do with Vercel.
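To make that concrete, here's a minimal sketch in Node/TypeScript (DATABASE_URL is a hypothetical variable name for illustration, not something from the incident): whatever the dashboard calls the variable, the running process receives an ordinary string.

```ts
// Whatever "sensitive" means in a provider's dashboard, the running
// process sees a plaintext string in its environment.
// DATABASE_URL is a hypothetical name used only for illustration.
const secret = process.env.DATABASE_URL;

// Any code in the process, the app itself, a dependency, or injected
// attacker code, can read and exfiltrate it the same way.
console.log(typeof secret, secret ? secret.slice(0, 8) + "..." : "(unset)");
```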
This situation could someday improve with fully homomorphic encryption (the server would operate on encrypted data without ever decrypting it), but that would impose very high overhead on the entire program. It's not realistic (yet).
PoC or GTFO.
I think you'll find it's a bit harder to do than you expect.
One for which the Context.ai employee needs to have their arse booted up and down the car park for.
You can blame individuals, but security is a property of the system.
If you spin up an EC2 instance with an ftp server and check the "Encrypt my EBS volume" checkbox, all those files are 'encrypted at rest', but if your ftp password is 'admin/admin', your files will be exposed in plaintext quite quickly.
Vercel's backend is of course able to decrypt them too (or else it couldn't run your app for you), and so the attacker was able to view them, and presumably some other control on the backend made it so the sensitive ones can end up in your app, but can't be seen in whatever employee-only interface the attacker was viewing.
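That presumed model looks something like the sketch below (illustrative TypeScript; the EnvVar shape, field names, and helper functions are made up for this comment, not Vercel's actual code):

```ts
// Hypothetical model of a "sensitive" env var: encrypted at rest either
// way, decryptable by the platform (it must inject plaintext at deploy
// time), but redacted whenever it is read back through a UI.
interface EnvVar {
  key: string;
  ciphertext: string;   // encrypted at rest in both cases
  sensitive: boolean;   // only controls read-back visibility
}

function redactForUi(v: EnvVar, decrypt: (c: string) => string): string {
  // Non-sensitive vars are echoed back decrypted; sensitive ones are
  // write-only from the dashboard's point of view.
  return v.sensitive ? "••••••••" : decrypt(v.ciphertext);
}

function injectIntoDeployment(v: EnvVar, decrypt: (c: string) => string): string {
  // At deploy time both kinds must be decrypted, which is why a
  // compromised backend or internal tool can still reach the plaintext.
  return decrypt(v.ciphertext);
}
```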
And I thought it was bad when my son got compromised by a Roblox cheat, but they only grabbed his Gamepass cookies and bought 4 Minecraft licenses, which MS quickly refunded...
The thing that concerns me is that even at a site like HN, where a lot of people are very familiar with LLMs, it seems to be passing.
I hate to think this will become the norm but it's not the first HN linked post that's gotten a lot of earnest engagement despite being AI generated (or partly AI generated).
I'm very comfortable with AI generated code, if the humans involved are doing due diligence, but I really dislike the idea of LLM generated prose taking over more and more of the front page.
So I believe the author has exposure to the issue and interest in understanding it, that’s more than AI alone has got.
Failed to verify your browser Code 11 Vercel Security Checkpoint, arn1::1776759703-rtDgRAtRyXvjD4IoU4RbqvkGmvQQCP7H
Gah.
If I don't see asterisks, I'm not hitting save on the field with a secret in it. Maybe they were setting them programmatically? They should definitely still be looking to pass some kind of a secret flag, though. This is a weird problem for a company like Vercel to have.
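For the programmatic path, Vercel's REST API takes a type field when creating an env var; a hedged sketch follows, with the endpoint version, path, and field names recalled from Vercel's docs rather than verified here, so check the current API reference before relying on it:

```ts
// Hedged sketch: creating an env var marked sensitive via Vercel's REST
// API. Endpoint path, version, and body shape are from memory of the
// docs and may have changed; verify against the current API reference.
const res = await fetch(
  `https://api.vercel.com/v10/projects/${process.env.PROJECT_ID}/env`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.VERCEL_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      key: "DATABASE_URL",     // hypothetical key
      value: "postgres://...", // the secret itself
      type: "sensitive",       // write-only: dashboard won't echo it back
      target: ["production"],
    }),
  },
);
if (!res.ok) throw new Error(`env create failed: ${res.status}`);
```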
Tools that sit in the middle (like Context.ai) end up becoming a pretty large attack surface without feeling like one.
(Of course there are tons of other red flags the article doesn't look at, e.g. how does an employee's machine get access to production systems, and from there to customers connected via OAuth, and how does the attacker get to env vars from a Google Workspace account?)
Initially, we identified a limited subset of customers whose Vercel credentials were compromised. We reached out to that subset and recommended that they rotate their credentials immediately.
At this time, we do not have reason to believe that your Vercel credentials or personal data have been compromised.
We'll keep dangerous devices like the SuperBox in our homes, if it helps us get access to free movies and TV.
We'll use single-use plastics, even if we know they're bad for the environment, because they're just so damn easy.
We'll let AI run that thing for us, because it's just too easy.
A whole generation has grown up without knowing what it was like to infect your computer with AIDS trying to download an MP3, and it shows. That caution will come back, just at a terrible cost.
More generically, our species' Achilles heel is our inability to factor in the long-term cost of negative externalities when evaluating processes that yield short-term positive results.
Vercel April 2026 security incident
https://news.ycombinator.com/item?id=47824463