Discussion (64 Comments)
This article also doesn't make a convincing case for this being a huge mistake. Companies like Uber change their architectural decisions while they scale all the time. Provided it didn't kill the company, stuff like this becomes part of the story of how they got to where they are.
Related: the classic line commonly attributed to original IBM CEO Thomas John Watson Sr:
“Recently, I was asked if I was going to fire an employee who made a mistake that cost the company $600,000. No, I replied, I just spent $600,000 training him. Why would I want somebody else to hire his experience?”
https://blog.4psa.com/quote-day-thomas-john-watson-sr-ibm/
One thing I did think about was how this got architected without sufficient reference to costs, which suggests a possible process or structural improvement.
Add "expected budget, double-checked by at least one other principal engineer" to the project checklist.
Have the person most responsible for the $8m "mistake" be the person to drive that cultural change, since they now have the most credibility for why it's a useful step!
I mean, if we're considering factors that could justify firing a developer, then suggesting, pushing, and ultimately failing to implement bad designs and architectures probably ranks among the more reasonable ones. It doesn't seem to have been "Oops, we used MariaDB when we should have used MySQL" but more like "We made a bad design decision, let's cover it up with another bad design decision", and repeat, at least judging by this part:
> So let me get this straight: DynamoDB was a bad choice because it was expensive, which is something you could have figured out in advance. You then decided to move everything to an internal data store that had been built for something else, that was available when you decided to build on top of DynamoDB. And that internal data store wasn’t good on its own, so you had to build a streaming framework to complete the migration.
But on the other hand, I'd probably fire the manager/executive responsible for that move, rather than the individual developer who probably suggested it.
I have worked with all levels of engineers who come into a project glassy-eyed about some technology, sure, but if you are part of the team approving a project and you can't produce a realistic budget, then your management is bogus as hell.
I have worked on a ton of these vanity projects, and when I voice my concerns it's clear nobody is out to learn anything; they are there to look good and avoid looking bad, that's about it.
Get some articles published, go to some conferences, get a new job with a new title somewhere else, laugh on your way out.
Just the framing of this question makes it seem like you simply don't like people in management / decision-makers, and you want something bad to happen to them. Maybe that's wrong, hopefully it is, but the rest of the comment doesn't do much to disabuse me of that impression either.
> Somebody Should Have Been Fired For This
This person is not a good resource. Uber was a very fast growing company, both in terms of their product and staff. Turnover in architecture happens. Calling this a catastrophe and click baiting about firing engineers over a rounding error in Uber’s overall finances is gross.
I understand this person is trying to grow their Substack with these inflammatory claims but I hope HN readers aren’t falling for it. This person’s takes are bad and they’re doing it to try to get you to become a subscriber. This is hindsight engineering from someone who wasn’t there.
The financial criticism ('napkin math') appears to estimate DynamoDB costs of USD $8 million for 2017 to 2020. Uber revenue for the same period is roughly USD $42.5 billion, thus this cost weighs in at about 0.02%, or 1/50th of one percent. This is a rounding error for a high growth company, and not something that warrants a witch-hunt and firing. It's easy to blow more than $2 million per year on software engineers in pursuit of an alternative high-scalability solution.
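The share-of-revenue arithmetic above checks out; a minimal sketch, noting that both the $8 million cost and the $42.5 billion revenue figure are the commenter's estimates rather than audited numbers:

```python
# Napkin check of the cost-as-share-of-revenue claim above.
# Both inputs are the commenter's estimates, not audited figures.
dynamodb_cost = 8_000_000       # estimated DynamoDB spend, 2017-2020 (USD)
uber_revenue = 42_500_000_000   # estimated revenue over the same period (USD)

share = dynamodb_cost / uber_revenue
print(f"DynamoDB spend as a share of revenue: {share:.4%}")
# ~0.0188%, i.e. roughly 1/50th of one percent
```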
I'm also not on board with the 'resume driven development' criticism as the explanation for solution churn. Perhaps that is actually what happened. I wasn't there and don't know, but if that is being asserted I expect to see evidence presented to support it.
People forget how quickly Uber scaled, and the user impact of not being able to track your trips could be catastrophic to retention. There's a class of tech influencer who thinks they can dissect past decisions in a blog post without being in the room when the technical constraints were being laid out. This is Monday morning quarterbacking at its most grotesque.
I wiped out VAT on all orders, and for the next month the paper invoices went out without VAT. So if the invoice is $100 and VAT is $20, it should have been sent as $120, but it went out as $100.
100s of invoices every day would be my guess.
Nobody noticed.
For a month.
Millions of dollars of revenue and IIRC millions of dollars of VAT.
Until a customer complained to the CEO.
We had a firefight to fix it, not just technical but legal and managerial. We could send a new invoice just for the tax. We could redo the invoice. We could send a debit memo. What was the right decision? But what if customers don't pay? What about returns? How would we track them? Of course we were doing the technical solutions while the client company front-ended how to handle it business-wise.
And the managerial firefight - who did it, what are the safeguards in future? We had a company exec visit the client site to manage the issue.
I was in the hot seat but I was protected by my managers from any fallout. Just do the work. Do not screw up again. (Test every row and every column, even if you did not change it.)
A month later the sales director at the client company got fired.
The grapevine is that this was just the tipping point, but you never know. BTW these were paper invoices printed onsite and mailed out, but I do not know if someone had the job to scrutinize them.
PS: True story, going by old memory, although such legends remain fresh in your mind, forever. Not sure it belongs here, but the mention of firing for a multi-million dollar mistake pulled this into cache memory.
At least when I worked at Uber, that wasn't really how it worked. The eng org was so big that it was nearly impossible to track all the projects people worked on, and you'd get micro-ecosystems of tools because of it.
Some grew large, others stayed quite "local".
Hindsight is 20/20. Not saying they did the right thing, but they may have had specific performance reasons for originally going with DynamoDB.
If you don’t have price controls, it’s easy to run up a bill.
If no single person had the responsibility to check the cost, then no one actually failed at their assigned job. So you either fix the system or fire everyone involved in the decision.
What you’re doing now is looking for a scapegoat to beat up. You’re angry and you’re going to make someone pay for pissing you off.
Oh, it was $8m over several years? That's a couple of projects that didn't pan out, or a small team that wasn't firing on all cylinders for a stretch.
Nobody got fired because there was nothing unusual to fire anyone for.
> At Uber’s scale, DynamoDB became expensive. Hence, we started keeping only 12 weeks of data (i.e., hot data) in DynamoDB and started using Uber’s blobstore, TerraBlob, for older data (i.e., cold data). TerraBlob is similar to AWS S3. For a long-term solution, we wanted to use LSG.
Honest question. Why do people go for this kind of complicated solution? Wouldn't Postgres work? Let's say each trip creates 10 ledger entries. Let's say those are 10 transactions. So 150 million transactions in a day. That's like 2000 TPS. Postgres can handle that, can't it?
If regional replication or global availability is the problem, I have to ask: why does it matter? For something as critical as a ledger, does it hurt to make the user wait a few hundred milliseconds if that means you can have a simple and robust ledger service?
I honestly want to know what others think about this.
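The TPS napkin math in the question above works out as a sketch like this, where the trip volume and entries-per-trip are the commenter's assumptions, not Uber-published numbers:

```python
# Napkin math behind the "wouldn't Postgres work?" question above.
# Inputs are the commenter's assumptions, not Uber-published numbers.
trips_per_day = 15_000_000           # implied by "150 million transactions"
ledger_txns_per_trip = 10            # 10 ledger entries -> 10 transactions

txns_per_day = trips_per_day * ledger_txns_per_trip  # 150,000,000
seconds_per_day = 24 * 60 * 60                       # 86,400

avg_tps = txns_per_day / seconds_per_day
print(f"average load: {avg_tps:,.0f} TPS")           # ~1,736 TPS

# Traffic is never uniform; with an assumed 3x peak-to-average ratio,
# the peak is still within range of a well-tuned Postgres instance.
print(f"assumed 3x peak: {avg_tps * 3:,.0f} TPS")    # ~5,208 TPS
```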
It’s usually because executive management bakes hyper growth into the assumptions because they really want the biz to grow, then it becomes marching orders down the chain.
“We need to design this for 1b DAUs”
Then 1) that growth never happens and 2) you end up with a super complicated solution
Instead, someone needs to say, “Hey [boss], are you sure we need to build for 1b DAUs? Why don’t we build for 50m first, then make sure it’s extensible enough to keep improving with growth”
> But nobody was optimizing for cost. They were optimizing for their next promotion. Each rewrite was a new proposal, a new design doc, a new system to put on a resume. The incentive was never to pick the boring, correct choice — it was to pick the complex, impressive one.
...I guess it could be possible nobody thought about cost at all, and this was all misaligned incentives and resume-driven development, but I find that kind of hard to believe? As someone who has made cost mistakes in the cloud, this claim seems a bit silly.
Not to detract from his experience, but I didn't actually see much payments experience on his resume, so I'm curious why he's branding himself as a payments guru. Kind of tech-content-creation fluff, I guess.
I mean, given how quickly things can change, I think the language and sentiment here aren't quite right. It's just how businesses change, and we can't necessarily control that.
Outside of that, it sounds like the system worked perfectly. They launched, they paid DB costs (the 8M was not a ledger mistake) and then they rebuilt after they wanted more cost savings. Also a bunch of folks got promoted.
The 8M came from VCs lighting money on fire. Honestly this seems like the system worked as planned to me, not a case study in how not to do things.