Vercel April 2026 security incident - https://news.ycombinator.com/item?id=47824463 - April 2026 (485 comments)
A Roblox cheat and one AI tool brought down Vercel's platform - https://news.ycombinator.com/item?id=47844431 - April 2026 (145 comments)

Discussion (61 Comments)
The only way to defend against these types of issues is to encrypt your environment with your own keys, with secrets possibly baked into source as there are no other facilities to separate them. An attacker would need to not only read the environments but also download the compiled functions and find the decryption keys.
It is not ideal but it could work as a workaround.
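A minimal sketch of the workaround described above: the platform's env vars hold only ciphertext, and the function decrypts them at startup with a key baked into the deployed bundle. This uses the `cryptography` package's Fernet purely as an illustration (an assumption; any AEAD scheme works), and the key is generated inline here only for the demo. An attacker who can only read the environment gets ciphertext; they would also need the compiled artifact to recover the key.

```python
import os
from cryptography.fernet import Fernet

DECRYPTION_KEY = Fernet.generate_key()   # in practice: embedded in the build, not generated here

def seal(value: str) -> str:
    """Run once, offline, before setting the env var on the platform."""
    return Fernet(DECRYPTION_KEY).encrypt(value.encode()).decode()

def unseal(name: str) -> str:
    """Called at startup to pull the plaintext into process memory only."""
    return Fernet(DECRYPTION_KEY).decrypt(os.environ[name].encode()).decode()

os.environ["DB_PASSWORD"] = seal("hunter2")    # what the platform stores (and could leak)
assert os.environ["DB_PASSWORD"] != "hunter2"  # the env alone reveals only ciphertext
print(unseal("DB_PASSWORD"))
```

As the parent says, this is a workaround, not a fix: the key still lives somewhere the attacker might reach, it just isn't in the same place as the ciphertext.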
Please don't suggest this. The right way is to have the creds fetched from a vault, which is programmed to release the creds auth-free to your VM (with machine-level identity managed by the parent platform).
This is how Google Secret Manager or AWS Secrets Manager work.
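A hedged sketch of the pattern this comment describes: the parent platform vouches for machine identity, and the vault releases a credential only to a caller presenting a valid identity token. The class names (`PlatformCA`, `Vault`) are illustrative, not any real product's API, and the HMAC signing stands in for what real systems (Secret Manager with instance identity, HashiCorp Vault's cloud auth methods) do with asymmetric signatures and short-lived tokens.

```python
import hmac, hashlib, json, time

class PlatformCA:
    """Stands in for the parent platform that vouches for machine identity."""
    def __init__(self, signing_key: bytes):
        self._key = signing_key

    def issue_identity(self, machine_id: str, role: str) -> dict:
        claims = {"machine": machine_id, "role": role, "iat": time.time()}
        payload = json.dumps(claims, sort_keys=True).encode()
        sig = hmac.new(self._key, payload, hashlib.sha256).hexdigest()
        return {"claims": claims, "sig": sig}

class Vault:
    """Releases a secret only for a verifiable identity with the right role."""
    def __init__(self, signing_key: bytes, secrets_by_role: dict):
        self._key = signing_key
        self._secrets = secrets_by_role

    def fetch(self, token: dict) -> str:
        payload = json.dumps(token["claims"], sort_keys=True).encode()
        expected = hmac.new(self._key, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, token["sig"]):
            raise PermissionError("identity token signature invalid")
        return self._secrets[token["claims"]["role"]]

key = b"platform-signing-key"          # held by the platform and vault, never the app
vault = Vault(key, {"web": "db-password-123"})
token = PlatformCA(key).issue_identity("vm-42", "web")
print(vault.fetch(token))              # the app never holds a long-lived credential
```

The point of the design: the application ships with no secret at all, and a tampered identity token (say, a swapped role claim) fails signature verification.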
Attributed without evidence from what I could tell. So it doesn't reveal much at all.
Designing for provider-side compromise is very hard because that's the whole point of trust...
Do any marketplaces have a good approach here? I know Cloudflare, after their similar Salesloft issue, has proposed proxying all 3rd party OAuth and API traffic through them. But that feels a little bit like trading one threat vector for another.
Other than standard good practices like narrow scopes, shorter expirations, maybe OAuth Client secret rotation, etc, I'm not sure what else can be done. Maybe allowlisting IP addresses that the requests associated with a given client can come from?
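The IP-allowlisting idea above can be sketched in a few lines: the authorization server keeps a per-client set of permitted networks and rejects token requests from anywhere else. The client ID and ranges here are made up for illustration.

```python
import ipaddress

# Hypothetical per-client allowlist; 203.0.113.0/24 is a documentation range.
CLIENT_ALLOWLIST = {
    "context-ai-client": [ipaddress.ip_network("203.0.113.0/24")],
}

def request_allowed(client_id: str, remote_ip: str) -> bool:
    """Accept a token request only from the client's registered networks."""
    networks = CLIENT_ALLOWLIST.get(client_id, [])
    addr = ipaddress.ip_address(remote_ip)
    return any(addr in net for net in networks)

print(request_allowed("context-ai-client", "203.0.113.9"))   # inside the range
print(request_allowed("context-ai-client", "198.51.100.7"))  # outside the range
```

The obvious operational cost is keeping the allowlist current as the client's egress IPs change, which is why sender-constrained tokens are the more robust version of the same idea.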
OAuth 2.1[0] (a draft spec that has been around longer than I've been at my employer) recommends some protections around refresh tokens: either making them sender-constrained (tied to the client application by public/private key cryptography) or one-time use, with revocation if a token is used multiple times.
This is recommended for public clients, but I think makes sense for all clients.
The first option is more difficult to implement, but is similar to the IP address solution you suggest. More robust though.
The second option would have made this attack more difficult because the refresh token held by the legit client, context.ai, would have stopped working, presumably triggering someone to look into why and wonder if the tokens had been stolen.
0: https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1
> The CEO publicly attributed the attacker's unusual velocity to AI
> questions about detection-to-disclosure latency in platform breaches
Typical! The main failures in my mind are:
1. A user account with far too many privileges (and possibly many others like it)
2. No or limited 2FA, nor any form of zero-trust architecture
3. Bad cybersecurity hygiene
Or is it the "sensitive" flag they ask about in the CLI? That would be crazy. It would mean that if you decide not to mark them as sensitive, they aren't stored encrypted???
By far the biggest issue is being able to access the production environment of millions of customers from a Google Workspace. Only a handful of Vercel employees should be able to do that with 2FA if not 3FA.
Also worth checking your Google Workspace OAuth authorizations. Admin Console > Security > API Controls > Third-party app access. Guarantee there are apps in there you authorized for a demo two years ago that are still sitting with full email/drive access.
I get it, it's a big story ... but that doesn't mean it needs N different articles describing the same thing (where N > 1).
Would guess that a double-digit percentage of readers have some level of skin in the game with Vercel
"Why do people use Vercel?"
"Because it's cheap* and easy."
*expensive
in fact, the sparse details had Barbara warming up her vocal cords
Unusual velocity? Didn't the attacker have the oauth keys for months?
I don't see how it's necessarily relevant to this attack, though. These guys were storing creds in the clear and assuming actors within their network were "safe", weren't they?
My point is sensitive secrets should literally never be exported into the process environment; they should be pulled directly into application memory from a file or secrets manager.
It would still be a bad compromise either way, but you have a fighting chance of limiting the blast radius if you aren't serving secrets to attackers on an env platter, which could be the first three characters they type after establishing access.
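A small sketch of that point: read secrets from a tightly-permissioned file (or a secrets-manager API) straight into process memory, refusing to load anything world- or group-readable, and never export them into `os.environ` where any child process or `env` dump can see them. The file path and key name here are illustrative; a real platform would provision the file.

```python
import json, os, stat, tempfile

def load_secrets(path: str) -> dict:
    """Load secrets into process memory only, rejecting loose file permissions."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:
        raise PermissionError(f"{path} is readable by group/other ({oct(mode)})")
    with open(path) as f:
        return json.load(f)   # held only in this process's memory

# Demo with a throwaway file; in practice the platform provisions it at 0600.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"db_password": "hunter2"}, f)
    path = f.name
os.chmod(path, 0o600)

creds = load_secrets(path)
assert "db_password" not in os.environ   # nothing leaked into the environment
print(creds["db_password"])
os.unlink(path)
```

Child processes inherit the environment wholesale, so keeping secrets out of it also keeps them away from every subprocess, crash reporter, and debug endpoint that dumps `env`.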