Discussion (238 Comments)
> By the time we reacted, costs were already around €28,000
> The final amount settled at €54,000+ due to delayed cost reporting
So much for the folks defending these three companies' refusal to provide a hard spending cap ("but you can set a budget", "you're doing it wrong if you worry about billing", "a hard cap is technically impossible", etc.)
By default, new Tier 1 paid accounts can only spend $250 in a given month.
When the FTC went investigating a decade-ish ago they found Facebook saying the quiet parts out loud: it was all extremely deliberate.
>Another prompt asked, "What do you think of me," I say, as I […]. My body isn't perfect, but I'm just 8 years old - I still […]."
Pretty odd to copy from policy documents and feel a need to self-censor. But I guess that's Mark Zuckerberg[‘s chief ethicist] for you.
> you create a support ticket and spend sleepless nights praying that AWS/GCP/Azure waive it
Because that's somehow normal in today's tech world.
Legal contracts for consumers should be written at the prevailing reading level, and the more monopolistic a company's position, the more the government should step in.
It infuriates me to no end how preferential the government is towards corporations vs. individuals.
You hire a contractor and agree they'll bill you per tile, regardless of how many tiles there are. They bill you per tile. End of story.
For a more accurate comparison, consider a utility. You agree to pay your electric bill. It's not the utility's fault you invited all your friends over and they decided to run a crypto-mining LAN party, and the utility can't cut you off lightly because it might literally kill you (e.g. you live in a hot place and rely on AC to stay alive).
https://ai.google.dev/gemini-api/docs/billing#project-spend-...
For extra clarity on the exact so-called "vulnerability" that Google identified, see: https://news.ycombinator.com/item?id=47156925 This describes the very issue where some API keys were public by design (used for client-side web access), so the term "leaked" should be read in that unusually broad sense. Firebase keys are obviously covered, since they're also public by design.
(As for "Firebase AI Logic", it is explicitly very different: it's supposed to be implemented via a proxy service so the Gemini API key is never seen by the client: https://firebase.google.com/docs/ai-logic Clearly, just casually "enabling" something - which is what OP says they did! - should never result in abuse of cost on the scale OP describes.)
For telephony, it sometimes takes days when roaming is involved.
You have to imagine TB/sec of data, if not more, coming from thousands of potential sources, queued for aggregation into the proper company account, all of it auditable. This is not a small engineering feat, and it can't be real time.
With that said, telcos usually include in their business model around 2-3% of bad debt (i.e. revenue that won't get paid), which accounts for frauds like this one. Given that the customer seems in good faith and has taken measures upon being notified, Google should manage this bill shock a bit more elegantly.
Moreover, the fact that this happened immediately after this key opened the AI gates means that attackers constantly scan the permissions of every key they can gather. Google could and should detect that and act on it.
https://docs.cloud.google.com/docs/quotas/view-manage
Quotas are real time or near real time.
Real time spend limits are probably never going to happen. Actual $ amounts are calculated by a centralized billing system offline in batch.
It sounds easy but it’s bonkers complicated, because of things like discounts, free tiers, committed usage, currency conversions and having to support every payment and deal structure in GCP.
Individual eng teams rarely actually think in dollar amounts, they think in the abstraction which is quotas.
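That quota-vs-dollars split can be sketched in a few lines of Python. This is purely illustrative, not GCP's actual implementation: a quota is just a counter that can be checked synchronously on every call, while dollar amounts are derived later, in batch, from usage plus pricing rules (free tiers, discounts, etc.).

```python
import threading

class QuotaCounter:
    """Real-time quota enforcement: a counter checked synchronously
    on every request, before the spend happens."""

    def __init__(self, limit: int):
        self.limit = limit
        self.used = 0
        self._lock = threading.Lock()

    def try_consume(self, units: int = 1) -> bool:
        with self._lock:
            if self.used + units > self.limit:
                return False   # reject up front -- no billing lag involved
            self.used += units
            return True

# Dollars, by contrast, are computed offline by joining usage against
# pricing rules (free tiers, discounts, currency conversion, ...).
def batch_bill(usage_units: int, unit_price: float, free_tier: int = 0) -> float:
    billable = max(0, usage_units - free_tier)
    return billable * unit_price

q = QuotaCounter(limit=3)
print([q.try_consume() for _ in range(5)])  # [True, True, True, False, False]
print(batch_bill(q.used, unit_price=0.002, free_tier=1))  # 0.004
```

The point of the toy: the quota check needs no knowledge of prices at all, which is exactly why it can be real time while billing cannot.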
There had been a very cold February night (like -15F) and a pipe froze inside the walls, and it was just absolutely gushing out. They sent me the email after it had been leaking for a WEEK. I asked a friend to check it out and she said that the laminate floor went "squish" when she stepped in the front door.
Fortunately I was covered by homeowner's insurance since I could prove that my heat had been on, but that was a very unpleasant "warning" to receive!
These companies can sell your personal information in a microsecond in an advertising auction, but somehow can't figure out how to give you timely alerts that stop their cash flow.
Big shock.
There are many to choose from now, like Openrouter.com, PPQ.ai, and routstr.com.
Even if you manage to get your microservices to sync every penny spent to your payment account in real time (impossible), you still have to waive the excess, losing some money every time someone goes past their quota.
Trading platforms can guarantee a maximum slippage on stops, and often even offer guaranteed stops (with an attached premium), so I don’t see why Google and Firebase can’t do similar.
The way it works at present is ridiculous.
The fact that they don’t indicates that there’s no market reason to support small spenders who get mad about runaway overages, not that it’s technically or financially hard to do so.
Yeah, no: physically impossible. If nobody is selling at that price, there is no guarantee your sell stop will execute anywhere near it. All they can do is sweep the market, find the best available price, and execute.
There might be a costly way to do it with microservices as I indicated, but your example easily falls apart.
I don't buy the 'evil corp screwing people' angle either. They are making far too much legit money to care about occasionally screwing people out of $20k or $50k.
We're not talking about an EC2 or EBS volume here, this is access to an API.
Yes, it's impossible both technically and as a business. To implement a hard cap (a bill never to go over), they'd have to cut your service, but also delete all your data in databases, object storage, data lakes, etc. That is simply not an option, so they take the different option of authorising support to waive surprise surcharges / billing DDoSes.
The mighty cloud provider can't solve this issue?
Google has second precision billing on compute.
It's not hard to define a base layer of allowed billing increase and to add that kind of context to resource allocation.
You are not just suddenly creating a million terabytes of data or a million DB requests without supervision.
It could even be as simple as fixed caps: €100/month, €1,000, €10,000, etc.
And there is a difference between stopping everything before the spike happens vs. also deleting stuff.
I had a similar experience with GCP where I set a budget of $100 and was only emailed 5 hours after exceeding it, by which time I was well over.
It's mind-boggling that features like this aren't prioritized. Sure, it would probably make Google less money short term, but surely that's preferable to giving devs such a poor experience that they'd never recommend your platform to anyone again.
My ~2-person small business was almost put out of business by a runaway job. I had instrumented everything exactly according to the GCP instructions: the over-budget notification was hooked up to a kill switch, which fired the instant the notification arrived.
GCP sent the notification they offer as best practice 6 HOURS late. They did everything they could to avoid crediting my account until they realized I had the receipts. They said an investigation revealed their pipeline was overwhelmed by the number of line items and that was the reason for the lag. ... The exact scenario it is supposed to function in. JFC.
Part of it is possibly the curse of knowledge. Someone in the 99th percentile of cloud configuration experts simply can't recall their junior dev days.
Google support was surprisingly understanding, after I explained the issue. They asked some clarifying questions. Then they said that they can offer a one time refund for this case.
Since then I've been paranoid about accidentally doing it again. I don't know whether GCP would refund a second time.
Welcome to late-stage capitalism, where there is no long-term thinking, only short-term profit stealing, and Fuck You I Got Mine.
Considering that the author didn't share what website this is about, I'd wager they either leaked it accidentally themselves via their frontend, or they've shared their source code with credentials together with it.
This was reported a long time ago, and was supposed to be fixed by Google via making sure that these legacy public keys would not be usable for Gemini or AI. https://news.ycombinator.com/item?id=47156925 https://ai.google.dev/gemini-api/docs/troubleshooting#google... "We are defaulting to blocking API keys that are leaked and used with the Gemini API, helping prevent abuse of cost and your application data." Why are we hearing about this again?
...afnt0t-E => Your API key was reported as leaked. Please use another API key.
...-UYzYTYU => Your API key was reported as leaked. Please use another API key.
I think they all get immediately reported as leaked and invalidated.
Edit: self censor based on a request
Think of it this way: although you're not to blame, HN drives a lot of traffic to your preconfigured github search. There are also bad actors who browse HN; I had a Firebase charge of $1k from someone who set up an automated script to hammer my endpoint as hard as possible, just to drive the price up. Point being, HN readers are motivated to exploit things like what you posted.
It's true that the github search is a "wall of shame", and perhaps the users deserve to learn the hard way why it's a good idea to secure API keys. But there's also no benefit in doing that. The world before and after your comment will be exactly the same, except some random Gemini users are harmed. (It's very unlikely that Google or Github would see your comment and go "Oh, it's time we do something about this right now".)
EDIT: I went through the search results and confirmed that the first several dozen keys don't work. They report as error code 403 "Your API key was reported as leaked. Please use another API key." or "Permission denied: Consumer 'api_key:xxx' has been suspended." So at least HN readers will need to work hard(er) to find a valid key.
I wonder how you report a gemini API key as leaked... Searching "report gemini api key leaked" on Google only brings up similar horror stories (a $55k bill, waived https://www.reddit.com/r/googlecloud/comments/1noctxi/studen...) and (a $13k bill from 3d ago https://www.reddit.com/r/googlecloud/comments/1sjzat3/api_ke...)
https://trufflesecurity.com/blog/google-api-keys-werent-secr...
https://medium.com/@ahhyesic/your-google-maps-api-key-now-ha...
https://www.malwarebytes.com/blog/news/2026/02/public-google...
" API keys for Firebase services are not secret
API keys for Firebase services only identify your Firebase project and app to those services. Authorization is handled through Google Cloud IAM permissions, Firebase Security Rules, and Firebase App Check.
All Firebase-provisioned API keys are automatically restricted to Firebase-related APIs. If your app's setup follows the guidelines in this page, then API keys restricted to Firebase services do not need to be treated as secrets, and it's safe to include them in your code or configuration files. Set up API key restrictions
If you use API keys for other Google services, make sure that you apply API key restrictions to scope your API keys to your app clients and the APIs you use.
Use your Firebase-provisioned API keys only for Firebase-related APIs. If your app uses any other APIs (for example, the Places API for Maps or the Gemini Developer API), use a separate API key and restrict it to the applicable API."
https://firebase.google.com/support/guides/security-checklis...
https://firebase.google.com/docs/projects/api-keys
Public by design: API keys for Firebase services only identify your Firebase project and app to those services. Authorization is handled through Google Cloud IAM permissions, Firebase Security Rules, and Firebase App Check.
https://trufflesecurity.com/blog/google-api-keys-werent-secr...
Google should have simply scoped this by origin URL if they wanted the keys to be open like that.
100% failure rate.
This is a sign that somehow there isn’t sufficient incentive to work on these features.
You mean cash machine
> Yes, I’m looking at a bill of $6,909 for calls to GenerativeLanguage.GenerateContent over about a month, none of which I made. I had quickly created an API key during a live Google training session. I never shared it with anyone and it’s not pushed to any public (or private) repo or website.
0 - https://discuss.ai.google.dev/t/unexpected-gemini-api-billin...
A solo dev however might be able to present themselves as a retail consumer, and leverage some trading standards related rules for unclear pricing or something similar.
Billing is usually event driven. Each spending instance (e.g. API call) generates an event.
Events go to queues/logs, aggregation is delayed.
You get alerts when aggregation happens, which if the aggregation service has a hiccup, can be many hours later (the service SLA and the billing aggregator SLA are different).
Even if you have hard limits, the limits trigger on the last known good aggregate, so a spike can make you overshoot the limit.
All of these protect the company, but not the customer.
If they really cared about customer experience, once a hard limit hits, that limit sets how much the customer pays until it is reset, period, regardless of any lags in billing event processing.
That pushes the incentive to build a good billing system. Any delays in aggregation potentially cost the provider money, so they will make it good (it's in their own best interest).
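The pipeline described above (events go to a queue, aggregation is delayed, the limit triggers on the last known aggregate) is easy to model. This toy Python sketch shows how a burst between aggregation runs sails straight past a "hard" limit:

```python
from collections import deque

class DelayedBilling:
    """Toy model of a batch-aggregated billing pipeline: the hard limit
    is checked against the last aggregated total, so any spend that
    arrives between aggregation runs can overshoot it freely."""

    def __init__(self, hard_limit: float):
        self.hard_limit = hard_limit
        self.queue = deque()      # unaggregated spend events
        self.known_spend = 0.0    # last known aggregate
        self.blocked = False

    def record_event(self, cost: float) -> bool:
        if self.blocked:          # decision based on the stale total
            return False
        self.queue.append(cost)
        return True

    def aggregate(self):
        while self.queue:
            self.known_spend += self.queue.popleft()
        if self.known_spend >= self.hard_limit:
            self.blocked = True

b = DelayedBilling(hard_limit=100.0)
for _ in range(50):               # burst of 50 x $5 between aggregation runs
    b.record_event(5.0)
b.aggregate()
print(b.known_spend, b.blocked)   # 250.0 True -- 150% past the "hard" limit
```

The commenter's proposal amounts to: whatever `known_spend` says at block time, the customer still only pays `hard_limit`, which makes the aggregation lag the provider's cost rather than the customer's.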
https://docs.cloud.google.com/billing/docs/how-to/modify-pro... and https://docs.cloud.google.com/billing/docs/how-to/budgets-pr... are other documented alternatives to receive billing alerts without the billing account disconnect.
The billing account disconnect obviously shouldn’t be used for any production apps or workloads you’re using to serve your own customers or users, since it could interrupt them without warning, but it’s a great option for internal workloads or test apps or proof of concept explorations.
Hope this helps!
With EC2 / GCC credentials, I could understand going all out on bitcoin mining - but what are they asking the AI to do here that's worth setting up some kind of botnet or automation to sift the internet for compromised keys?
There are also a lot of AI use cases that require a lot of token spend to brute force a problem. Someone might want to search for security exploits in a codebase but they don’t want to spend the $50,000 in tokens from their own money. Finding someone’s key and using it as hard as possible until getting locked out could move these projects forward.
Or they use the LLMs for criminal purposes (like automated social engineering) and so the API key can't be traced to their personal info (but they could also use a local model for this, so I don't know).
YouTube has plenty of scam ads where well-known people try to get you to sign up for a WhatsApp group for financial tips.
We managed to catch it somewhat early through alerting, so the damage was only $26k.
We asked our Google cloud support rep for a refund - they initially came back with a no but now the case is under further consideration.
I’d escalate this up the chain as much as possible.
Most Firebase 'add AI to your app' tutorials skip this step because Firebase's initialization flow doesn't prompt you to configure it, and Firebase Security Rules only gate Firebase-specific services, not the key's broader GCP API access scope.
It's easy to miss a setting, especially when new features ship as opt-out.
1. API-key restrictions by HTTP referrer AND by API (`generativelanguage.googleapis.com` only),
2. a billing budget with a Pub/Sub "cap" action, not just an email alert.
Neither is on by default, and almost nobody sets them before shipping. 13 hours is actually fast for detection; most teams find out at end-of-month reconciliation.
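For the Pub/Sub "cap" action, the handler itself is small. Here is a hedged Python sketch that only parses a budget notification and decides whether the cap was breached; the field names (`costAmount`, `budgetAmount`) follow the documented budget payload, and the actual billing-detach API call is deliberately omitted:

```python
import base64
import json

def cap_breached(event: dict) -> bool:
    """Decode a Pub/Sub budget notification and report whether spend has
    reached the budget. A real Cloud Function would follow a True result
    by detaching billing or disabling the offending API."""
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    return payload["costAmount"] >= payload["budgetAmount"]

# Simulated Pub/Sub message, as delivered to a push/function subscriber.
msg = {"data": base64.b64encode(json.dumps(
    {"budgetDisplayName": "gemini-cap",
     "costAmount": 140.0,
     "budgetAmount": 100.0}).encode()).decode()}
print(cap_breached(msg))  # True -- time to pull the plug
```

Note the caveat from the thread still applies: `costAmount` is the aggregator's last known figure, so even this "cap" lags real spend.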
Like 50k requests per hour; above that, 1 req/sec per client, up to 20 req/sec overall.
I don't want to shoot down my service for every user just because one user is misbehaving. I want to set the rate of bleeding.
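A per-client "rate of bleeding" is essentially a token bucket keyed by client. A minimal Python sketch, with illustrative rates (nothing here reflects any provider's real limits):

```python
import time

class PerClientLimiter:
    """Per-client token bucket: the one misbehaving key gets throttled
    while every other client is unaffected."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate       # tokens refilled per second, per client
        self.burst = burst     # bucket capacity (max burst size)
        self.buckets = {}      # client_id -> [tokens, last_refill_time]

    def allow(self, client_id: str, now: float = None) -> bool:
        if now is None:
            now = time.monotonic()
        tokens, last = self.buckets.get(client_id, (self.burst, now))
        tokens = min(self.burst, tokens + max(0.0, now - last) * self.rate)
        if tokens < 1.0:
            self.buckets[client_id] = [tokens, now]
            return False       # this client bleeds slowly; service stays up
        self.buckets[client_id] = [tokens - 1.0, now]
        return True

lim = PerClientLimiter(rate=1.0, burst=5.0)
print([lim.allow("abuser", now=0.0) for _ in range(6)])  # 5 allowed, then cut
print(lim.allow("normal", now=0.0))                      # True: unaffected
```

After the burst is spent, the abusive client is held to `rate` requests per second while everyone else keeps full service, which is exactly the "bleed, don't shotgun" behavior the comment asks for.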
This implies the API calls originated in the client, suggesting the client may have had the API key.
> tl;dr Google spent over a decade telling developers that Google API keys (like those used in Maps, Firebase, etc.) are not secrets. But that's no longer true: Gemini accepts the same keys to access your private data. We scanned millions of websites and found nearly 3,000 Google API keys, originally deployed for public services like Google Maps, that now also authenticate to Gemini even though they were never intended for it. With a valid key, an attacker can access uploaded files, cached data, and charge LLM-usage to your account. Even Google themselves had old public API keys, which they thought were non-sensitive, that we could use to access Google’s internal Gemini.
From Google themselves, in the Firebase docs:
> API keys for Firebase services are not secret. Firebase uses API keys only to identify your app's Firebase project to Firebase services, and not to control access to database or Cloud Storage data, which is done using Firebase Security Rules. For this reason, you do not need to treat API keys for Firebase services as secrets, and you can safely embed them in client code.
<https://firebase.google.com/support/guides/security-checklis...>
... or at least that's what it used to say, until they quietly updated the docs to say this:
> API keys for Firebase services are not secret. API keys for Firebase services only identify your Firebase project and app to those services. Authorization is handled through Google Cloud IAM permissions, Firebase Security Rules, and Firebase App Check.
> All Firebase-provisioned API keys are automatically restricted to Firebase-related APIs. If your app's setup follows the guidelines in this page, then API keys restricted to Firebase services do not need to be treated as secrets, and it's safe to include them in your code or configuration files.
Followed later by (in different section):
> Use your Firebase-provisioned API keys only for Firebase-related APIs. If your app uses any other APIs (for example, the Places API for Maps or the Gemini Developer API), use a separate API key and restrict it to the applicable API.
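The separate-key-plus-restriction advice quoted above can be applied from the CLI. A sketch only: the display name, key ID, and referrer pattern are illustrative, and the flags should be confirmed against the current `gcloud services api-keys` documentation:

```shell
# Create a dedicated key scoped to the Gemini Developer API only,
# and to pages served from your own origin (illustrative values).
gcloud services api-keys create \
  --display-name="gemini-web-client" \
  --api-target=service=generativelanguage.googleapis.com \
  --allowed-referrers="https://example.com/*"

# Or tighten an existing key instead of leaving it unrestricted.
gcloud services api-keys update KEY_ID \
  --api-target=service=generativelanguage.googleapis.com
```

Either restriction alone would have blunted the attack described in the article; together they stop a scraped key from being replayed against Gemini from arbitrary origins.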
As long as they revert the charge when notified of scenarios like this (and they have historically done so in many cases), it's fine. It's an acceptable workaround for a hard problem and the cost of doing business, just like credit cards accept a certain amount of fraud loss as part of the business.
Overcharge protection doesn't have to be free. It could be +5% on prices or a fee of 25% when you reach the threshold.
They would have a financial interest in calculating cost in real time, and it would magically become more and more precise with each release.
If, for some resources, you can't sample measurements fast enough, you could weaken the guarantee to "triggers within one dollar or five minutes after cost overrun, whichever comes later". But LLM APIs are one of those cases where time isn't a factor; your only issue is that if you only check the quota before each inference, a given query might bring you over.
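One way to close that last gap is to reserve a pessimistic cost estimate before each inference and settle the actual cost afterwards, so the worst-case overshoot is one request's estimate rather than hours of aggregation lag. A hypothetical sketch:

```python
class ReserveSettleQuota:
    """Check-before-inference with a pessimistic hold: reserve the maximum
    possible cost up front, refund the unused part once the real cost is
    known. Overshoot is bounded by a single request's estimate."""

    def __init__(self, budget: float):
        self.remaining = budget

    def reserve(self, max_cost: float) -> bool:
        if self.remaining < max_cost:
            return False       # could cross the cap -> reject before spending
        self.remaining -= max_cost
        return True

    def settle(self, max_cost: float, actual_cost: float):
        self.remaining += max_cost - actual_cost  # refund the unused hold

q = ReserveSettleQuota(budget=10.0)
ok = q.reserve(max_cost=4.0)       # hold the worst case before calling the LLM
q.settle(max_cost=4.0, actual_cost=1.5)
print(ok, q.remaining)             # True 8.5
```

For LLM calls the `max_cost` estimate is easy to bound, since max output tokens are a request parameter, which is why this pattern fits the API case so well.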
It's absolutely not fine to be at the mercy of other people, that's what we buy cloud products or really any products for: So that we are not at the mercy of hardware faults, bad weather, bad teeth, hunger, thirst, [insert anything]
It sucks, but that's unfortunately the world we live in until something changes.
The US could rely on an agency like the CFPB to prevent this, but that was gutted under the current admin.
What makes you think that?
Don't toe the party line.
Same reason why Azure AI only has easy rate limits by minute, not by day or week or month. Open source proxy projects do it easily tho. Think about the incentives.
Going over a hard cap by 3% would be a reasonable failure to make, not by 30000%.
On the other hand, Hetzner sells IPv4 instances with no security on by default, just raw Ubuntu 24.x.
Within 3-4 days of deploying one, it will be hacked and have crypto miners installed unless additional hardening is applied. I do wonder what % of Hetzner VPS instances are compromised.
It's super frustrating that this is the only option to realistically deal with this issue, since all stories end up the same way: The cloud company just saying "f* you, we don't care, pay up." and legal fees are always expensive :(
Is this possible on AWS today? I'm the same way: if I cannot set a hard limit on billing so I know for a fact what the maximum monthly cost will be, I'm not interested in using that service for anything. Which is one of the top reasons I've stayed clear of AWS; they used to have only billing alerts, not actual limits. I guess it's one step forward that they've finally implemented that now.
All three of the big cloud subreddits have stories like this on a regular basis
the widow-maker list increases.
<https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...>
In early mobile networks, the feature set for prepaid used to always lag behind, since real-time billing wasn't really a design consideration from the beginning.
I suppose that rather than taking on that extra work, offering a reduced feature set, or building something best-effort and taking financial responsibility for its failures, cloud providers can simply get away with making this the user's problem; so why wouldn't they?
https://ai.google.dev/gemini-api/docs/billing#prepay
And even in the US, you could presumably easily find all your Google accounts (including personal ones) locked until you pay the outstanding sum. Not something I'd risk, personally.
Leading tech companies in 2026, folks.
you can get notifications but that's it.
i don't want to get throttled below my quota but some type of spend limit would be good.
Not talking about fixed-access things like a Hetzner box.
Disgusting behavior.
At some point, when it appeared 2 months ago on HN and they still did nothing about it, intentionality can be assumed.
However, anyone affected should probably pollute their docket with lawsuits anyway.
[1] https://trufflesecurity.com/blog/google-api-keys-werent-secr...
If you're hearing this and your gut reaction is "this can't be real", we're on the same page. It's a staggering issue that Google has categorically failed to respond to. They automatically added this permission to existing keys that they knew their customers were publishing publicly on the internet, because the keys are legitimately supposed to be public for things like client-side Firebase access and Google Maps tile rendering.
They did not notify customers that they were doing this. They did not notify customers after this issue was reported to them months later by Truffle. They did not automatically remove the additional key grants for customers. They continue to push guidance targeted at novices, like "just put the Gemini key behind a proxy (that's also publicly exposed on the internet)", which might solve the unintentional files and caching endpoint leaks but doesn't solve the billing issue. They denied that Truffle's initial report was even valid, until Truffle used the Internet Archive to find a Google internal key from 2023, published for a Google Maps widget or something, before Gemini was even released, that was still active, and used it to demonstrate to Google that, hey, anyone can use this key to get Gemini completions on the house; is there anyone driving this ship? Google fixed the permissions on that specific key. And did nothing else.
Google is not the only culprit here;
i'm thinking it's time we replaced api keys.
some type of real time crypto payment maybe?
No need to retire API keys.
It's pretty easy to get right, if the provider allows you to go (slightly) negative before cutting you off.
> Also, can you imagine the kind of downtimes and complaints that would inevitably originate from a fully synchronous billing architecture?
It doesn't need to be fully synchronous.
Because these days it will be all worthless bot traffic.
If you have per-key limits, this is not possible, and even in a wild situation you should be able to expect that your Firebase key will not run up $50k.
Because this part sucks. I grew up fiddling with Linux. I don't want to play devops anymore. I want to write code and run it.