Discussion (67 Comments)
This article also doesn't make a convincing case for this being a huge mistake. Companies like Uber change their architectural decisions while they scale all the time. Provided it didn't kill the company stuff like this becomes part of the story of how they got to where they are.
Related: the classic line commonly attributed to IBM's original CEO, Thomas John Watson Sr.:
“Recently, I was asked if I was going to fire an employee who made a mistake that cost the company $600,000. No, I replied, I just spent $600,000 training him. Why would I want somebody to hire his experience?”
https://blog.4psa.com/quote-day-thomas-john-watson-sr-ibm/
I have been in situations where I was told “don’t worry about cost, just get it done”. Then a few years later the business constraints shift and now we need to “worry about the cost”. That framing ignores that decisions made under a different set of constraints were correct, or at least reasonable, at the time; things change.
One of my pet peeves is when people say “do it right the first time”, but the definition of “right” often changes over time. If the only major flaw of this design was that it was expensive, then I am much more skeptical that it was wrong given the original set of conditions that they were operating under.
Here's how a big tech reporting chain sees this situation when everything is smooth sailing: "We're growing 3x year-over-year? After 2 years, the cost will be an order of magnitude higher no matter what solution we pick. The constant factor doesn't matter that much. But we have such an incredible roadmap that we will book more than an order of magnitude of revenue, backed by this new ledger project. The cost will always be a nonissue because of growth."
And then 2 years go by, and this incredible product growth adds a bunch of ledger entries that weren't there 2 years ago, someone nudges your reporting chain with the question, "this is pretty expensive.. what gives?" and then someone with a good combination of social and technical skills points out that a migration to your existing storage solution would be a cost effective way to continue growing.
At every step of the way, everyone is generally happy with what's going on.
Easy to say, but there's a real human cost to relying on people to figure out what you mean rather than explaining what you mean. Not enough time is spent on cultivating effective communication and training. Everyone wants everything done yesterday and doesn't feel like investing in their own people.
Birmingham spent almost £150m on a system that didn't work at all:
https://www.theregister.com/2026/01/29/birmingham_oracle_lat...
While I was an undergraduate, my university also spent £9m on an accounting system that didn't work, also with Oracle: http://news.bbc.co.uk/1/hi/education/1634558.stm
If you've designed an in-house accounting system that works, makes neither financial nor software errors, is accepted by its users, and costs a relatively small fraction of your turnover? That's a big win.
One thing I did think about was how this could have been architected without sufficient reference to costs; addressing that might be a process or structure improvement.
Add "expected budget, double-checked by at least one other principal engineer" to the project checklist.
Have the person most responsible for the $8m "mistake" be the person to drive that cultural change, since they now have the most credibility for why it's a useful step!
I have worked with all levels of engineers who come into a project glassy-eyed about some technology, sure, but if you are part of the team approving a project and you can't produce a realistic budget, then your management is bogus as hell.
I have worked on a ton of these vanity projects, and when I voice my concerns it's clear nobody is out to learn anything; they are there to look good and avoid looking bad, that's about it.
Get some articles published, go to some conferences, get a new job with a new title somewhere else, laugh on your way out.
Just the framing of this question makes it seem like you simply don't like people in management / decision-makers, and you want something bad to happen to them. Maybe that's wrong, hopefully it is, but the rest of the comment doesn't do much to dissuade me of that impression either.
I have worked with many hardworking and caring managers, and they are generally eclipsed by said social climbers, presenting at conferences every other week on know-nothing topics, jumping from place to place, and leaving bankrupt companies and massive layoffs in their wake.
I see them posting on LI right now :)
Interns wouldn’t even be allowed to use $100K VNAs (vector network analyzers) without a lot of supervision, because so many things can go wrong. Damaging one of those small precision connectors is easy to do and can be a costly repair that brings delays to the lab, and that’s before you even start making measurements.
I wonder if part of the offense was that the intern was breaking protocol by moving the equipment. Alternatively, maybe they failed to explain the rules and expectations to the intern. Or maybe some lazy engineer tried to pawn off their work onto an intern without thinking about the consequences.
Blame-free post-mortems are for me and mine, everyone else can get fucked.
I mean, if we're considering factors that could get a developer fired, suggesting, pushing, and eventually failing to implement bad designs and architectures probably ranks among the more reasonable reasons. It doesn't seem to have been "Oops, we used MariaDB when we should have used MySQL" but more like "We made a bad design decision, let's cover it up with another bad design decision", and repeat, at least judging by this part:
> So let me get this straight: DynamoDB was a bad choice because it was expensive, which is something you could have figured out in advance. You then decided to move everything to an internal data store that had been built for something else, that was available when you decided to build on top of DynamoDB. And that internal data store wasn’t good on its own, so you had to build a streaming framework to complete the migration.
But on the other hand, I'd probably fire the manager/executive responsible for that move, rather than the individual developer who probably suggested it.
And you've just taught all your workers to be cautious to the point of freezing: never be proactive, keep the status quo as much as they can, avoid being noticed, and never take a step without being forced, or without someone else taking 100% of the blame (with a paper trail) if things go south.
She told me she kept it there because her job was to make decisions and get fired or leave if she was wrong. She was right about so many of her choices, I would have followed her into anything. Then one day I came in and her desk was empty -- she had an apparently epic argument with the C suite and disagreed with their path so she left (never found out if that was a quit or fired). The team got a new VP, but I requested to be moved to a different team as I wasn't aligned with the new vision.
When you get to a certain level, part of your job becomes owning the decisions and getting fired.
I once worked in a manufacturing environment where mistakes could be quite expensive. We had our annual org survey and one of the questions asked was "Risk taking is encouraged." Our team scored low on that metric, and upper management was concerned. They held a meeting to ask about it, and most of the team was confused why there was a meeting. They said they viewed it as a positive that they don't take risks.
Firing people for making bad choices, people tend to appreciate that. Firing people for making good choices? Yeah, I'd understand how that would freeze people and make them avoid making proactive choices; try not to do that, obviously.
> Somebody Should Have Been Fired For This
This person is not a good resource. Uber was a very fast growing company, both in terms of their product and staff. Turnover in architecture happens. Calling this a catastrophe and click baiting about firing engineers over a rounding error in Uber’s overall finances is gross.
I understand this person is trying to grow their Substack with these inflammatory claims but I hope HN readers aren’t falling for it. This person’s takes are bad and they’re doing it to try to get you to become a subscriber. This is hindsight engineering from someone who wasn’t there.
The financial criticism ('napkin math') appears to estimate DynamoDB costs of USD $8 million for 2017 to 2020. Uber revenue for the same period is roughly USD $42.5 billion, thus this cost weighs in at about 0.02%, or 1/50th of one percent. This is a rounding error for a high growth company, and not something that warrants a witch-hunt and firing. It's easy to blow more than $2 million per year on software engineers in pursuit of an alternative high-scalability solution.
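Spelled out, the share works out like this (a quick sketch using only the comment's own figures):

```python
# Napkin math from the comment above: estimated DynamoDB spend
# versus Uber's rough combined 2017-2020 revenue.
dynamodb_cost = 8_000_000         # USD, the commenter's estimate
uber_revenue = 42_500_000_000     # USD, roughly 2017-2020 combined

share = dynamodb_cost / uber_revenue
print(f"{share:.4%}")  # ~0.0188%, i.e. about 1/50th of one percent
```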
I'm also not on board with the 'resume driven development' criticism as the explanation for solution churn. Perhaps that is actually what happened. I wasn't there and don't know, but if that is being asserted I expect to see evidence presented to support it.
People forget how quickly Uber scaled, and the user impact of not being able to track your trips could be catastrophic to retention. There's a class of tech-influencer who think they can dissect past decisions in a blog post without being in the room when the technical constraints were being laid out. This is Monday morning quarterbacking at its most grotesque.
The costs they are laying out are not that prohibitively expensive. I’ve known corporations where people spin up test clusters that cost $5K a month and forget about them. A business-critical service can definitely ignore costs in the short term if it brings in customers. The standard practice is to just ship something quickly and optimize the cost later if it helps bring in revenue/customers.
Besides, the napkin math isn’t always true. If you’re an enterprise customer for AWS, you get massive discounts, especially in the time frame they’re talking about. And when it comes to partnerships, I remember back in the day AWS used to let you do pretty much anything for free if it meant they could parade your project to other customers.
I wiped out VAT on all orders and for the next month the paper invoices were sent without VAT. So the invoice is $100, VAT is $20, the invoice should be $120, but they were sent as $100.
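A minimal sketch of the failure mode described, with hypothetical names and a flat 20% rate matching the example above:

```python
VAT_RATE = 0.20  # flat rate matching the $100 -> $120 example; illustrative only

def invoice_total(net_amount: float, vat_exempt: bool = False) -> float:
    """Return the gross invoice amount, adding VAT unless the order is exempt."""
    vat = 0.0 if vat_exempt else net_amount * VAT_RATE
    return net_amount + vat

# The bug: every order was effectively treated as VAT-exempt, so a
# $100 invoice went out as $100 instead of the correct $120.
print(invoice_total(100.0, vat_exempt=True))   # 100.0 (what was sent)
print(invoice_total(100.0, vat_exempt=False))  # 120.0 (what should have been sent)
```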
100s of invoices every day would be my guess.
Nobody noticed.
For a month.
Millions of dollars of revenue and IIRC millions of dollars of VAT.
Until a customer complained to the CEO.
We had a firefight to fix it, not just technical but legal and managerial. We can send a new invoice just for the tax. We can redo the invoice. We can send a debit memo. What is the right decision? But what if customers do not pay? What about returns? How will we track returns? Of course we were doing the technical solutions and the client company was front-ending how to handle it business-wise.
And the managerial firefight - who did it, what are the safeguards in future? We had a company exec visit the client site to manage the issue.
I was in the hot seat but I was protected by my managers from any fallout. Just do the work. Do not screw up again. (Test every row and every column, even if you did not change it.)
A month later the sales director at the client company got fired.
The grapevine is that this was just the tipping point, but you never know. BTW these were paper invoices printed onsite and mailed out, but I do not know if someone had the job to scrutinize them.
PS: True story, going by old memory, although such legends remain fresh in your mind forever. Not sure it belongs here, but the mention of firing for a multi-million-dollar mistake pulled this into cache memory.
At least when I worked at Uber, that wasn't really how it worked. The eng org was so big that it was nearly impossible to track all the projects people worked on, and you'd get micro-ecosystems of tools because of it.
Some grew large, others stayed quite "local".
Hindsight is 20/20. Not saying they did the right thing, but they may have had specific performance reasons for originally going with DynamoDB.
If you don’t have price controls, it’s easy to run up a bill.
If no single person had the responsibility to check the cost, then no one actually failed at their assigned job. So you either fix the system or fire everyone involved in the decision.
What you’re doing now is looking for a scapegoat to beat up. You’re angry and you’re going to make someone pay for pissing you off.
Oh, it was $8m over several years? That's a couple of projects that didn't pan out, or a small team that wasn't firing on all cylinders for a stretch.
Nobody got fired because there was nothing unusual to fire anyone for.
> At Uber’s scale, DynamoDB became expensive. Hence, we started keeping only 12 weeks of data (i.e., hot data) in DynamoDB and started using Uber’s blobstore, TerraBlob, for older data (i.e., cold data). TerraBlob is similar to AWS S3. For a long-term solution, we wanted to use LSG.
Honest question. Why do people go for this kind of complicated solution? Wouldn't Postgres work? Let's say each trip creates 10 ledger entries, and call it roughly 15 million trips a day. Let's say those are 10 transactions. So 150 million transactions in a day. That's like 2000 TPS. Postgres can handle that, can't it?
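Checking that napkin math (a sketch; the trips-per-day and entries-per-trip figures are the assumptions above, not published numbers):

```python
# Rough throughput estimate for the ledger workload described above.
trips_per_day = 15_000_000        # assumed daily trip volume
entries_per_trip = 10             # assumed ledger entries (transactions) per trip
seconds_per_day = 24 * 60 * 60    # 86,400

writes_per_day = trips_per_day * entries_per_trip   # 150 million
tps = writes_per_day / seconds_per_day
print(f"~{tps:,.0f} TPS")  # ~1,736 TPS, i.e. on the order of 2000 writes/sec
```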
If regional replication or global availability is the problem, I have to ask: why does it matter? For something as critical as a ledger, does it hurt to make the user wait a few hundred milliseconds if that means you can have a simple and robust ledger service?
I honestly want to know what others think about this.
It’s usually because executive management bakes hyper-growth into the assumptions, because they really want the biz to grow, and then it becomes marching orders down the chain.
“We need to design this for 1b DAUs”
Then 1) that growth never happens, and 2) you end up with a super-complicated solution.
Instead, someone needs to say, “Hey [boss], are you sure we need to build for 1b DAUs? Why don’t we build for 50m first, then make sure it’s extensible enough to keep improving with growth”
Most people don't get meaningful raises at existing jobs, so if they want raises, they must job-hop or switch jobs internally.
Companies will lay people off at the drop of a hat, so you have to make sure your skill set is up to date so you can get the next job.
So everyone is launching big splashy projects that they can put on their resume, to protect themselves in case of layoffs or to turn into a promotion.
Here, the tell is that you’re not gonna get a multibillion-dollar company on hockey-stick growth to switch storage just because you want to get promoted.
> But nobody was optimizing for cost. They were optimizing for their next promotion. Each rewrite was a new proposal, a new design doc, a new system to put on a resume. The incentive was never to pick the boring, correct choice — it was to pick the complex, impressive one.
...I guess it could be possible nobody thought about cost at all, and this was all misaligned incentives and resume-driven development, but I find that kind of hard to believe? As someone who has made cost mistakes in the cloud, this claim seems a bit silly.
Not to detract from his experience, but I didn't actually see much payments experience at all on his resume, so I'm curious why he's branding himself as a payments guru. Kind of tech content creation fluff, I guess.
I mean, given how quickly things can change, I think the language and sentiment here aren't quite right; it's just how businesses change, and we can't necessarily control that.
Outside of that, it sounds like the system worked perfectly. They launched, they paid the DB costs (the $8M was not a ledger mistake), and then they rebuilt once they wanted more cost savings. Also, a bunch of folks got promoted.
The $8M came from VCs lighting money on fire. Honestly, this seems to me like the system working as planned, not a case study in how not to do things.