Discussion (392 Comments)
I genuinely challenge someone spending $5-$10k a month to demonstrate how that turns into $50-$100k in value. At a corporate level, I'd much rather hire a junior engineer who spends $100-$200/month and becomes productive than try to rationalize $100k/year in token spend.
From my experience, this happens essentially by three means:
- Level 0 (beginner users) long-lived conversations: If you don't get in the habit of compacting, or otherwise manually forcing the model to summarize/checkpoint its work, you will often find people perpetually reusing the same conversation. This is especially true for _beginners_, who haven't spent time curating their _base_ agent knowledge. They end up with a single meta-conversation with a huge context where they feel the agent is "educated", and feel like any new conversation with the agent is a waste of time because they have to re-educate it.
- Level 1 (intermediate users) heavy explicit use of subagents: Once you discover the prompt pattern of "spawn 5 subagents to analyze your solution, each analyzing a different angle, summarize their findings", it can become addictive. It's not a bad habit per se, but if you're not careful it can drastically overspend your credits.
- Level 3 (expert users) extreme multitasking: Just genuinely having 10 worktrees perpetually in parallel and cycling between them in between agent responses. Again, not necessarily bad in itself, but it can exponentially consume credits.
I'm pretty sure that growth is linear.
It is a giant Goodhart's law lesson
Bonus level "I have a hammer, all I see is nails": using Claude Code for random non-coding work, like dataset cleaning. It's really convenient to have a script spawning Haikus via `claude` CLI and feeding them prompts and JSON files. Money burn potential: practically unbounded, but also it's real work that the product people wanted done, so of course it has a cost associated with it. I'd be bewildered if anyone complained.
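That kind of `claude` CLI batch job can be driven by a small wrapper. A minimal sketch, assuming `claude -p` runs headless and prints a single response and `--model haiku` selects a cheaper model (both exist in Claude Code, but verify against your CLI version; the prompt and file layout here are made up):

```python
import subprocess
from pathlib import Path

# Hypothetical cleaning prompt; the record arrives on stdin.
PROMPT = "Clean this record: normalize dates, strip HTML, return valid JSON."

def build_cmd(prompt: str, model: str = "haiku") -> list[str]:
    # `claude -p` = headless one-shot mode; `--model` picks a cheap model.
    return ["claude", "-p", prompt, "--model", model]

def clean_dataset(data_dir: str) -> None:
    for path in Path(data_dir).glob("*.json"):
        record = path.read_text()
        # Feeding the record on stdin keeps the prompt constant (and cacheable).
        result = subprocess.run(build_cmd(PROMPT), input=record,
                                capture_output=True, text=True)
        path.with_suffix(".clean.json").write_text(result.stdout)
```

Each record is one cheap model call, which is exactly why the money-burn potential is unbounded: cost scales linearly with the dataset.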
I guess I fall under level 3 (2?): I typically have 3-6 agents working simultaneously on the same feature, they each make worktrees, code, run tests and put up PR’s. I also have Github actions which scan for regressions and security issues on each PR.
It makes my development cycle extremely fast: I request a feature and just look at Github and look for changes to my human readable outputs, settle on a PR, merge, repeat.
The issue is that I am now the bottleneck in my system. I find myself working basically non-stop, because there is always more to do. (Yes I know I can automate the acceptance criteria but that turns to slop real fast)
yeah, it is bad. The human brain is not able to properly assess this volume of changes. Understanding even a small change takes real capacity; understanding thousands of lines is impossible.
This is pure slop pouring into prod, and we can see more and more consequences of it in every big corporation's products - things are breaking faster and faster.
Really, does it matter if a company produces something that breaks constantly, or gets worse or slower? (See GitHub.) Megacorps have a wide moat and have forced out all competition, or they just buy competitors with low-interest loans.
The quality of products keeps getting worse and we can do nothing but live with it. So if that's the state of the world, why wouldn't you just push as many "features" as fast as possible. More is rewarded. Less is punished. Quality does not matter.
People have already mentioned the size/complexity of the codebase. I'm new to my team and the codebase isn't huge, but it's large enough that there are plenty of parts I have little understanding about. When I'm given a task, then yes, I definitely go to Claude and ask it to find the relevant parts of code so I can understand the existing workflow before even attempting to change it.
The downside is that I don't build expertise. But the reality is that with Claude, I can get the work done in 1 day that would take me 5 days of struggling, and if everyone is doing it, I can't be left behind. So I take the middle route - I get it done in 2-3 days instead of 1 so I can at least spend some time with the code.
Especially with AI, the rate at which code changes in our codebase is insane. So I built a tool that takes a pull request, and tells the LLM to go deep and explain to me what that pull request does. (Note: I'm not the reviewer, I just want to keep tabs on the work that is going on in the team).
And this is just the beginning. I haven't actually spent time to come up with more ways to use the LLM to help me.
My usage is similar to yours, but if I were fairly experienced with the code base, I'd do a lot more. I haven't asked, but I suspect there are people in my team who go over $1K/month.
As always, the bottleneck is proper testing and reviews.
Edit: I'll also add that for not-so-important code used within the company, I suspect most people are going full-AI with it. For my personal (non-work) code, I just let the AI code it all - the risk is usually very low (and problems are caught quickly). If someone is using the "superpowers" skill, then even for basic features you can burn lots of tokens. I usually start with 20-40K tokens and end up with 80-90K tokens when it's finished - which means many of the requests prior to completion were sending in close to 80K tokens. Multiply that by the number of queries, and so on.
Wasteful, but if someone else is paying ...
I see this repeated by others, including coworkers. It completely ignores caching. Caching itself is complicated, but the "longer context window = more expensive" is not 100% true and you are hampering yourself if you're not taking full advantage of large context windows.
The default Claude cache expires in 5 minutes. If you take a short break to review the code, talk to someone, or do anything other than continuously interact with the session it's going to get evicted and start over.
You can opt in to a 1-hour cache at a higher rate https://platform.claude.com/docs/en/build-with-claude/prompt...
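As a sketch of what that opt-in looks like at the API level (field names follow Anthropic's prompt-caching docs linked above; the model id is a placeholder, and the exact TTL/beta details should be verified against the current docs):

```python
def build_request(system_prompt: str, user_msg: str) -> dict:
    """Messages API body with an extended-TTL cache breakpoint on the system prompt."""
    return {
        "model": "claude-sonnet-4-5",   # placeholder model id
        "max_tokens": 1024,
        "system": [{
            "type": "text",
            "text": system_prompt,
            # "ephemeral" defaults to a ~5-minute TTL; "1h" opts in to the
            # longer-lived cache at a higher cache-write rate.
            "cache_control": {"type": "ephemeral", "ttl": "1h"},
        }],
        "messages": [{"role": "user", "content": user_msg}],
    }
```

The longer TTL is what saves you when your interaction pattern includes breaks - reviews, conversations, lunch - that would otherwise evict the default cache.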
Also anecdotally, caching has just been broken at times for me. I've had active conversations where turns less than 5 minutes apart were consuming so much quota that I doubt anything was being billed at the cache rate.
Here is a blog post that shows some data - https://blog.exe.dev/expensively-quadratic. And I can confirm this is true for Claude Code - I set up a MITM capture for all Claude Code requests and graphed it.
So increasing Request Count that reuses the same prefix (which is what higher compaction thresholds do) really does lead to (substantially) higher API costs.
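The quadratic blow-up is easy to reproduce on paper: if each turn appends roughly the same number of tokens and every request resends the whole prefix, cumulative input tokens grow with the square of the turn count. A toy model (the per-turn token count is an arbitrary assumption; cache hits discount the prefix, but read costs still scale with its length):

```python
def cumulative_input_tokens(turns: int, tokens_per_turn: int = 2_000) -> int:
    # Request i resends the entire prefix so far: i * tokens_per_turn tokens.
    # Total = tokens_per_turn * (1 + 2 + ... + turns) = O(turns^2).
    return sum(i * tokens_per_turn for i in range(1, turns + 1))
```

In this toy model a 100-turn session sends ~92x the input tokens of a 10-turn session, not 10x - which is why compaction thresholds move the bill so much.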
Is it really a 5x ROI? Where are all the apps, games, platforms, SaaS products, and features that have been backlogged for 5 years that are all of a sudden getting done? Because I see a modest ROI, and an _awful lot_ of shovelware.
When you're new to the codebase, things that take an experienced colleague one day to do can take a newbie 3-5 days to do.
What is wasteful? If you are costing the organization $x/hr, and spend an hour saving the company $(x*0.5), you didn't save money, you wasted it.
To the company, are you spending more time being token efficient to save less money than they're paying you for the time? That's not even getting into opportunity costs.
There is some extreme wasteful spending of AI tokens out there. But trying to get below $3k/month in token costs is often of questionable value.
One example - was giving several agents different sub problems to solve in a complex ML / forecasting problem. Each agent would write + run + read a jupyter notebook. This worked ok, the notebooks would be verbose but it was fine... until one of them wrote out hundreds of thousands of rows to a cell output, creating a 500MB ipynb file. Claude tried several times to read it and it used my entire context limit.
The solution was to prescribe a better structure for doing the work (via CLI analysis scripts + folders to save research results to). But this required some planning, thought, and design work by me, the operator.
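One cheap guardrail against the blown-up-notebook failure mode is to never hand the agent a file wholesale. A hypothetical read helper (the size cap and head/tail lengths are arbitrary choices, not anything Claude Code prescribes):

```python
from pathlib import Path

MAX_BYTES = 200_000  # rough context-budget cap; tune to taste

def safe_read(path: str, head: int = 4_000, tail: int = 4_000) -> str:
    """Return the whole file if small, else head + tail with a truncation note."""
    data = Path(path).read_text(errors="replace")
    if len(data) <= MAX_BYTES:
        return data
    omitted = len(data) - head - tail
    return (data[:head]
            + f"\n...[{omitted} characters omitted; work from head/tail only]...\n"
            + data[-tail:])
```

A 500MB `.ipynb` becomes an ~8KB excerpt instead of repeated full-context read attempts.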
When I see people spending $10k a month in tokens, I can only assume they are taking lazy hands off approaches to solving problems with the expensive hammer that is claude code. EX: have claude read all your emails every day... the lazy solution is to simply do that, but a smarter solution is to first filter the email body HTML to remove the noise.
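The "filter the HTML first" step is a few lines with the standard library. A minimal sketch using `html.parser` to keep visible text and drop script/style noise:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping script/style/head contents."""
    SKIP = {"script", "style", "head"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def strip_html(body: str) -> str:
    p = TextExtractor()
    p.feed(body)
    return "\n".join(p.parts)
```

Marketing emails are mostly markup by weight, so preprocessing like this can cut the tokens per message by a large factor before the model ever sees them.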
To be fair, I do that. 2-3 times a day, in fact. Not all of my emails (the archive has ballooned to several hundred thousand messages total), but the most recent ones certainly.
My standard prompt is along the lines of "go through the last N days of my emails, identify all threads that I need to know about, act on, or follow up with". N is usually a number between 2 and 5. I've specified a standing set of rules so it knows what's likely a source of noise and skips the bot spam.
The company is charged API pricing through an enterprise contract, and I remain persistently curious how much I burn. My daily admin-related token expenses appear to fluctuate between $1 and $5. For something that saves me up to 2h of time a day I consider that a rather tolerable deal. (When I dive in to code to do refactors or deep investigations, I can spend as much as $25 a day.)
The example I was thinking of would be a vibe coder having it "read my emails every hour" only for claude to read the same 1000 emails over and over...
But that is exactly what it is sold to people to do as a panacea: consume all the data, produce insights.
Nobody is being instructed to be judicious. Everyone is being instructed to use it as much as possible for all problem areas.
The difference here is just one word in the prompt, but it serves as an example of how a little bit of deliberate thought in one's prompt can yield massive efficiency in outcome.
What's wild to see, both online and at work: non-engineers given vibe-coding tools quickly reveal their ignorance of the importance of deliberate design and of the need for specific instructions - things one learns through coding. The "missing semicolon" meme is an example of the intuition we all developed early in our coding careers.
Many people are hoping AI can build and design for them, when in reality the deliberate design choices up front are as important if not more so than before AI.
Do you think this is because the LLM owners have such a massive ROI gap to cover that they're actively encouraging teams not to be judicious - and then you get into this vicious cycle where both the LLM vendors and their customer companies are burning through cash like crazy?
If it’s very large, especially if the tool needs to refer to documentation for a lot of custom frameworks and APIs, you often end up needing very large context windows that burn through tokens faster.
If it’s smaller or sticks with common frameworks that the model was trained on, it’s able to do a lot more with smaller context windows and token usage is way lower.
I don't use LLMs to write code (other than simple refactors and throwaway stuff) but I do use them heavily to crawl through big codebases and identify which files and functions I need to understand.
Some of the codebases I explore will burn through tokens at a rapid rate because there is so much complex code to get through. If I use the $20 Claude plan and Opus I can go through my entire 5-hour allocation in a single prompt exploring the codebase some times, and it's justified.
Other times I'm working on simple topics, even in a large codebase, and it will sip tokens because it only needs to walk a couple files to get to what it needs to answer my questions.
The monolithic codebases are easier to crawl for any problem that can't be conveniently isolated to a single microservice.
Maybe you're right but I'm aghast at how much of engineering over the last 15 years has been breaking up working monoliths to fit better within the budget of an external provider (first it was AWS). Those prices can change.
There are good reasons to use microservices but so often they're used for the wrong reasons.
A place like Google has to be so much better off just training library concepts in, given how many of the things the LLM will "instinctively" reach for are unlikely to be available. Not unlike the acclimation period when someone moves into or out of a company like that, and suddenly every library and infra tool they were used to just isn't available. We need a lot more searching when that happens to us, and the LLM suffers from the same context issue. The human has all of that trained in after six months, but the LLM doesn't.
They did that - there was a special version of Gemini fine-tuned on internal code. But the main model moves so fast that it is hard to keep such fine-tunes up to date.
I've had to get multiple codex accounts, but there was a brief period of time where I tried API usage to see how expensive it would be. In about an hour I spent $650 of credits. I had codex estimate how much I would be spending if I was doing pure API usage and it estimated around $10k/week.
For context, Postgres is 1M lines of C code. It's looking like pgrust will come out as fewer lines of code than Postgres, and at peak I was adding over 100k lines of code in a day. I would estimate it would take a team of 5 software engineers at least 3 years to get to where I got in a month with a couple of Codex subscriptions.
[0] https://github.com/malisper/pgrust
Same, but in regards to quotas. I'm on the 200 EUR ChatGPT plan, so I presumably have the highest quota, using the "most expensive" models, on highest reasoning, in fast mode (1.5x quota usage), and after a full day of almost exclusively doing programming with agents, I still get nowhere close to hitting my quota.
In fact, since I started using agents for coding, the only time I even got close, was when I was doing cross-platform development with the same as above, but on three computers at the same time, then I almost hit my weekly quota. But normally, I get down to ~20% of the quota but almost never below that. I don't see how I could either, I'm already doing lots of prompts and queries "for fun" basically.
Yeah, obviously - not sure why anyone would be using APIs at this point. It seems bananas to spend more than 10 EUR per day when these "almost-endless" subscriptions exist.
> My completely unfounded conjecture is that OpenAI is trying to grab developers back from Claude by burning $$$$.
Unlikely - since the codex TUI launched, OpenAI has pretty much had every developer's pocket already, as the agent is miles and leagues ahead of Claude Code, pretty much from inception. No other provider comes close to ChatGPT's Pro Mode either. I don't even think it's a quota/pricing thing: have the best models and people will flock by themselves.
Edit: Just checked with ccusage and I've been doing about $450/day for the last week. A bit more than usual, but I still haven't come close to weekly limits and never hit the 5hr rate limit.
I have both of those, yet seemingly I guess I'm not setting my goal in such a way that it supports "endless inference" like that. My goals have eventually ends, and that's when I move on. Optimization sure sounds like something you can throw away a good amount of tokens/quotas on, so yeah.
The API rates and monthly plan rates are not the same.
If you're using enough to justify the 200EUR plan (instead of the 100EUR plan), your use might actually be as high as some of the API bills discussed above.
My current job basically involves trying to improve processes that themselves make heavy use of LLMs. Once you have multiple agents in parallel running multiple experiments on improving the performance of primarily LLM driven tools it's not that hard to get your token usage pretty high.
You mean deep brute-force mode of search results parsing themselves…
I don't get it.
That is exactly what they are doing, yes
Also one engineer is treating the code as assembly. I've asked some pointed questions about code in his PR and the response was "yeah, I don't know that's what the agent did".
Edit:
To everyone freaking out about the second guy: yeah, I think being unable to answer questions about the code you're PRing is ill-advised. But requirement gathering, codebase untangling, and acceptance testing are all nontrivial tasks that surround code gen. I'm a bit surprised that having random change sets slurped up into someone else's rubber-stamped PR isn't the thing that puts people off.
It’s not like AI is the first time this happened. CI/CD and extensive preflight and integration and canary testing is also a way of saving engineer time and improving throughput at the cost of latency and compute resources. This is just moving up the semantic stack.
Obviously as engineers we say “awesome more features and products!” but management says “awesome fewer engineers!” either way pasting the ticket in and letting a machine do the work for a fraction of the cost was the right choice. There’s no John Henry award.
OTOH, I try hard to provide all possibly relevant context, manually copy/paste logs to reduce context overhead, always ask to produce an implementation plan and review it before making any code changes. Yet I often feel like a dinosaur here, all coworkers who tout "LLM productivity" just type a few words in and let the agent spin for hours without any guidance.
"Their ticket" = that was AI generated. After which they will wait their AI generated PR be checked by an automated AI QA that will validate against the AI generated spec.
It feels like an important metric of "corporate AI adoption" should be how effective the human is at steering the AI.
IF THE HUMAN ISN'T EFFECTIVE, THE HUMAN NEEDS TO GO.
If it manages to solve the ticket - then it's great! Why would you waste your time on it?
If it fails - then it's also great! You find your value by solving the ticket, which can be a great example of where a human can still prevail over the AI. (Joke: AI companies might be interested in buying such examples.)
(All assuming that your time costs more than the token spend. Totally different story if your wage is less than the token cost.)
After that we use AI to translate the tasks to a more technical view.
After that we use AI to implement the tasks.
After that we use AI to review the tasks.
After that a human QA tests the tasks.
If all is good, the code is merged and lands in production.
And yes, we burn a lot of tokens but the process is very fast. It takes months instead of years.
There's also the pattern of creating an army of agents to solve problems. Human write a plan. One agent elaborates on it. Another reviews it and makes changes. Another splits it up into tasks and delegates out to multiple agents who make changes. Yet another agent reviews the changes, and on and on. All working around the clock.
- Agents that spawn other agents
- Telling agents to go look at the entire codebase or at a lot of documents constantly
- MCP/API use with a lot of noise
- Loops where the agent is running unattended.
I do think it's not really responsible use and a loop where the agent is trying to fix CI for one hour for something that would take you five minutes (for example) is absurd. But people do that.
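A cheap mitigation for the "agent churns on CI for an hour" loop is a hard budget on unattended retries. A sketch with stubbed agent/CI callables (the attempt and time limits are arbitrary; the real calls would shell out to your CI and your coding agent):

```python
import time

def bounded_fix_loop(run_ci, ask_agent_to_fix,
                     max_attempts: int = 3,
                     budget_seconds: float = 600.0) -> bool:
    """Let the agent retry fixes until CI passes, or attempts/time run out."""
    deadline = time.monotonic() + budget_seconds
    for _ in range(max_attempts):
        if time.monotonic() >= deadline:
            return False          # out of time: escalate to a human
        ask_agent_to_fix()        # one agent iteration on the failure
        if run_ci():
            return True           # green: stop burning tokens
    return False                  # out of attempts: escalate to a human
```

Three attempts is usually enough to know whether the agent is converging or thrashing; past that, a human look is cheaper than more tokens.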
I don't know about $10,000, but I can see hitting $1,000 pretty easily if you aren't looking at the costs.
It will try and try and try, though.
So yeah, probably the same thing people do anyway - just now it's not compile time, it's generating time.
It's not that the best performers are magical prompt engineers providing detailed instructions: they ask better questions that the LLM knows how to try to answer, and provide the specific information that the LLM would take a while finding. It's as if some people just have no "theory of mind" of the LLM - what it can know - and others just do. It's not a living thing or anything like that, but it's still so useful to predict it, to put yourself in its shoes, so to speak. Just like you'd do with a new hire, or a random junior.
There’s your problem. You’re trying to be responsible instead of trying to burn tokens so you can have your name on top of some leaderboard for most wasteful AI users.
Whereas a good prompt will give solid leads to all the specifics needed to complete the task.
These spend rates are in part due to operating on a larger code base. Operating on a larger code base means more time searching and understanding the code, tests, test output. They are also due to going all-in on agentic coding.
It can feel painfully slow to go back to coding by hand when for a dollar you can build the same functionality in a minute. Now do this with multiple sessions and you can see where the cost goes.
> I genuinely challenge someone spending $5-$10k a month to demonstrate how that turns into $50-$100k in value.
$10k a month on tokens is just not that much when you're already making $2M per engineer. If their productivity has increased even 10% then the spend was well worth it.
Case in point, Meta made 33% more revenue this earnings report. Now you can nitpick and ask for attribution down to the dollar, but macro trends speak for themselves.
At a lot of businesses, $5-10k/mo of AI spend doesn't even translate into $5-10k/mo of value. Churning out code was rarely the business-value bottleneck. It was convenient for everybody else to blame developers not writing code fast enough for their failures. Now they have no excuse, but I doubt they will own up.
I typically consume about $200/month doing this. Most of our engineers are in the $200-400 range, with a few people around $1,000.
But then there's one guy who's not only hitting $8,000, but supposedly has nearly 300,000 lines of code accepted (Note: This means he's accepted the lines of code from Claude, not that he's committed it). I can't figure out how.
2. Multiple simultaneous projects
3. Orchestration that includes handling of CI workflow
4. Active work to further improve or refine tooling
5. Experimentation producing muscle memory as experience versus code output
Even before this AI wave, it was common for me to see spinning dev environments for like $3k/month that hadn't been used in months on AWS.
I always have a few agents (2-5) doing research and working on plans in parallel. A plan is a thorough and unambiguous document describing the process to implement some feature. It contains goals, non-goals, data models, access patterns, explicit semantics, migrations, phasing, requirements, acceptance criteria, phased and final. Plans often require speculative work to formulate. Plans take hours to days to a couple of weeks to write. Humans may review the plans or derived RFCs. Chiefly AI reviews the code (multiple agents with differing prompts until a fixed point is reached between them). Tests and formal methods are meant to do heavy lifting.
In my highest volume weeks, I ship low hundreds of thousands of lines of software not counting changes to deps.
> At a corporate level, I'd much rather hire a junior engineer
Any formulation of a problem sufficient for a truly junior engineer to execute is better given to an agent. The solution is cheaper, faster, and likely better. If the latter doesn't hold, 10 independent solutions are still cheaper and faster than a junior engineer.
There is no longer any likely path to teaching a junior engineer the trade.
I'm suspicious that you actually get Claude to output that much usable code in a week, but maybe you do.
But I’m 100% positive that you’re not shipping even a small fraction of the amount of value that someone reading this 2 years ago would have expected from hundreds of thousands of lines of code.
I usually succeed, BTW. I spend a lot of time planning, but usually each PR is a few hundred lines, and fairly easily reviewable.
I mostly work with Python backends, though these days it might be any language (Ruby, Go, TS).
It isn't worth it.
But what do they actually do?
I keep seeing people wax poetic about the mountains and mountains of code that LLMs are dumping out, but I've yet to see anywhere near a proportionate amount of actually useful new apps or features. And if anything, the useful ones I do find are just more shovels for more AI. When do we get to the part where we start seeing the 10x gains from the billions of lines of code that have probably been generated at this point?
But 10x faster also gets you to market sooner. Which has value.
Most people agree big orgs regularly have dysfunctional incentives. We've seen it happen a thousand times.
Your suggestion requires we also assume a 10x faster delivery time from people spending $1,000 vs $200 - something I've yet to witness or hear a credible account of.
So while that might be true in a small number of cases, in general its foolish to go with the "10x delivery speed" hypothesis.
this is your “problem” - you are missing the “nightly” part. on my box LLMs run 24/7 :)
Well, if your bonus depends on spending it, you'll find a way.
But if you are like me - you aggressively document and brainstorm before planning; you review that documentation with subagents and make modifications; you aggressively plan and verify that plan with subagents, making modifications; you have a large number of phases, planning again for each phase; you write tests for 100% coverage, implement each phase, do intermediate and final code reviews with subagents, apply fixes, write final documentation; and you do all of this in parallel, with multiple terminal tabs each running Claude Code for 10-12 hours a day - then $5000 per day is not much.
If you use an Anthropic or OpenAI subscription and you spend $1000 per month, you are not using AI much.
I'm building my own saas. I spent 6 months writing the code by hand before using Claude, and that was fine, but its much faster to give the exact specs to Claude and have 3-4 sessions working in parallel with me. When you validate changes with exact test specs there's much less correction you need to do. I always hit my weekly limit and it's far cheaper for me to use this than to hire someone and spend time onboarding them.
I've said it before: if you allow people to see how much others spent, they will try to climb up the "leaderboard".
It takes just ONE little praise for using tokens or one perk gained, and the GAME IS ON among the developers!
Agents can iterate on a problem for hours if they can see their results and be given a higher level goal to evaluate their progress toward.
When you have an agent working for minutes or hours, never wait on it. Use that time to spin up another agent.
You can also spin up several agents in parallel to attempt the same item of work and compare their results to choose which to work off for next steps, instead of rolling the dice on a single option at a time and gambling that it's better to refine that first attempt instead of retrying from the start several more times.
And if you are doing QA manually, you're missing out on having e.g. Codex's "Computer Use" or "Browser Use" automate your manual verification steps and collect a report for you to review more quickly. Codex can control multiple virtual cursors simultaneously in the background, without stealing focus, to parallelize this.
If you want to use up more tokens to get more done (though more outside of your control and ability to review of course), that's how.
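The "several parallel attempts, pick the best" pattern described above is a straightforward fan-out. A sketch with stubbed agent and scoring functions (in practice `run_agent` would invoke a coding agent in its own worktree, and `score` would be whatever signal you trust - test pass rate, lint results, diff size):

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str, seed: int) -> str:
    # Stub: a real version would launch an agent run and return its branch/diff.
    return f"attempt-{seed}: {task}"

def score(result: str) -> float:
    # Stub: a real version would run the test suite against the attempt.
    return float(len(result))

def best_of_n(task: str, n: int = 4) -> str:
    """Run n independent attempts in parallel and keep the highest-scoring one."""
    with ThreadPoolExecutor(max_workers=n) as pool:
        attempts = list(pool.map(lambda s: run_agent(task, s), range(n)))
    return max(attempts, key=score)
```

Token spend is n times a single attempt, but you trade that for not gambling everything on refining one mediocre first draft.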
My programming endurance is much greater now (2-3x focused hours per day), my productivity per hour is multiples higher, and I code seven days a week now because it's really exciting.
All told, I would pay for these tools as much as I would pay for full-time human programmer(s).
I'd much rather hire a junior engineer at $1.20/hour too! Can you hook me up with your contract services provider?
Obviously I know you're talking about AI costs only. But the idea of doing that analysis without looking at the salary of the person running the tool seems to be completely missing the point.
Now, sure, there are legitimate arguments to be made about efficacy and efficiency and sustainability and best practices. But, no, $100k/year absolutely doesn't need to be "justified" if it works. That's cheaper than the alternative, and markedly so.
If you're trying to say that 100k is less than 200k, you're right.
I don't see how any of that won't need to be justified. You can spend a lot of money and not get enough of a return...
You agree with me, basically.
The core point is that these very large AI bills are not actually large in context, as the pre-existing scale of expenses for software engineering is larger still, and this at least promises to reduce those markedly.
To wit: argue about whether AI works[1] for software development, don't try to claim it's too expensive, it's clearly not.
[1] "Is justified" in the vernacular.
When people have no ability to understand what they are doing, they will just rerun it endlessly hoping they get something passable. When that doesn't happen they burn money.
“I’ve got 2 dozen agents churning through the backlog to build this feature that would take one agent an hour to implement.”
First, I interview people, and juniors' manual coding skills dropped sharply this year. These are people who started their schooling coding manually and switched mid-course. In two years there will be no such people.
Well, that will never happen again in this world unless we go back to caves, especially for juniors. A junior who writes good code is already a dying unicorn.
The outcome will be ... you will hire a junior ... who will burn more tokens, and the chances of mistakes with a less expensive model and fewer tokens are even higher.
I mean, even the normal people we get in interviews have no clue - like 80% are just ignorant.
I stopped an interview after 5 minutes: when I asked what `ls -ahl` does, he started telling me how he vibe/AI codes stuff and that's his workflow. Okay, if you don't know the basics, guess what? Anyone can replace you, or at least I'm not hiring you. (I only told him that's not what we are looking for and thanked him.)
we are doomed :D
The bubble is an echo chamber.
> which means figuring out if the company can afford this level of productivity at scale.
If it was actually productive, then the revenue would increase and affordability wouldn't be a question.
They are extremely productive if you use them right. To the point it worries me how clever these pseudo-AI models can get in the next year.
Revenue has increased. Have you seen Meta's latest earnings? +33% revenue - in this economy.
Affordability is not a question. There is a reason companies like Meta have no issue with their engineers spending $1k/day on tokens. It's just not that much compared to how much they make per employee.
>$8 billion of net income was the result of a tax benefit the company realized in the first quarter of the year.
So exactly how much of their revenue is because of any code LLMs wrote vs. just structural tail winds?
But if all of your peers are saying LLMs are more productive, if you're building things faster than ever before, the macro picture speaks for itself.
I really don't understand their economics.
It's not like they used AI to crank out some new revenue generating piece of software, or massively reduce operating costs. In fact their operating costs rose by 35%.
Well, that’s to be expected when using AI tools becomes relevant in your performance evaluation.
Management in the age of AI is falling for the doorman fallacy wrt engineering. If lines of code were the most valuable aspect of software engineering, my front end JavaScript intern would’ve been the most valuable person in the company. https://www.jaakkoj.com/concepts/doorman-fallacy
That means nothing to them: they jump ship and find another job just like devs do. The whole industry has been musical chairs for a while.
I suspect the other tokenboard leaders are doing the same. They made the metric "token usage" (which is just a proxy for LOC) so that's what they're gonna get.
1. you sample a few to see that they are actually meaningful,
2. they go to prod and are validated without having to roll back.
Still needs to be managed. But it should be much easier for a manager to catch an engineer gaming PRs than something like AI use or lines of code.
Edit: y'all are some whiney folk, ain't ya?
And your response does not address the point being made in the comment you replied to: Many people are being evaluated by how many tokens they burn, which is about as good a metric as lines of code written.
2) Mostly, yes.
If we're trying to measure the value of adopting tool, it's probably better to measure the ROI of that tool rather than the usage % of that tool, especially when usage is basically mandated.
To directly answer your questions:
1. You're being paid to create value for the business, which "doing what they think is productive" is a proxy for. You're not being paid to use a tool a high % of the time.
2. It doesn't seem like the parent even commented on the quality of the code generated. I think anyone who uses it regularly can agree that: a) the code is not useless, b) not all generated code is immediately production-ready, and c) AI code generation is an accelerant for software development.
1. At my level, the company is not just paying me to do a task the way they want it done, they are paying for my experience to orchestrate the best way to do it. They want an outcome, and I'm responsible for figuring out how to get to that outcome with the right balance of cost, correctness, etc. But yes, the most dystopian reality is what you said.
2. It's not useless, but the AI generated code is absolutely lower quality than what I would have written myself, but there is no desire to clean it up. Companies have always had a disastrously bad understanding of technical debt and they finally have tool they can shove down developers throats that trades even more velocity for even less quality. They're going to take that trade every single time.
At my previous company, when the thing they thought they wanted me to do (which was not the thing they actually wanted... but whatever) diverged from my values I quit. You can just do things.
> (2) Do you think all this AI generated code is useless?
Almost universally, yes. Especially in organizations that historically haven't been particularly careful about hiring and have a huge number of young, inexperienced people. There are exceptions but they're rare enough that throwing that particular baby out with the bathwater isn't a big loss.
This is the thing that boggles my mind. They spent their budget. They have 4 months of data. What do they have to show for it?
I'm not a hater; I'm not a luddite. I have a $200 Max plan and I use it.
But are you saying that Uber made this tool available, urged everybody to use it, and is confused about what happens when it worked? It's one thing if they decide AI isn't productive enough to be worth the cost.
Are they out of ideas on what to build next, or something?
My guess is nothing you can see right now, since it likely takes a lot longer for any substantial external-facing changes to roll out broadly. Internally I'm sure several features have moved faster. I've noticed this at Salesforce where it certainly seems like things that would have taken a few weeks take a few days now. This doesn't translate directly to more money, just more potential to make money.
Well, what is there for Uber to build next? They have their ride hailing platform. It works. They have adapted it for other kinds of delivery (food, groceries, "anything that fits in a car") What else is there in the "someone driving a car" space for them?
There are a lot of things to do in the "someone driving a vehicle" space. The other obvious business (which they exited) is self-driving, of course.
Or ask engineers to justify the spend?
Why should we spend that many tokens, what will that get us in return?
If this was AWS we'd all be pointing and going "Ahhhh you twats, didn't you look at your monthly spend?"
I'm glad to see we've reached the point of AI discourse at which anything that might be construed as criticism must be prefixed by "I'm also part of the cult, I'm not a non-believer, but" to avoid being dismissed as a heretic.
If I were an engineer at Uber, why wouldn't I select gpt 5.5 pro @ very high thinking + fast mode for a prompt? There's no incentive not to use the most powerful (and thus most expensive) model for even the smallest of changes.
I tried one of these prompts for some tests I'm doing for image->html conversion, and a single prompt cost me $40. If I were paying that myself, I'd pretty much never use this configuration. At a large company where someone else is footing the bill, I'd spin these up regularly (the output was significantly better, fwiw). Engineers are being rated on what they deliver, not on the expenditure to get there.
There are ways to do this cheaply, but there are no incentives for engineers to do so.
I'm not entirely convinced it works out that way so far, but that's the theory.
Trying to bring down LLM costs is sort of a double-edged sword, because the dev needs to be cutting LLM costs by more than what you're paying them. If it takes them a day to bring costs down by $1 an invocation, then it takes almost 2 years to recoup the salary cost. It's worse because LLMs currently change so much that I wouldn't be confident their solution won't be broken before the 2-year period is up. Will we still be tool calling in 2 years, or will that be something new? Will thinking still be a thing, or will it be superseded by something else? I don't think anyone knows, not even the frontier providers.
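The parent's payback arithmetic can be sketched in a few lines. All numbers here are illustrative assumptions (a fully-loaded dev cost of ~$700/day and one invocation per day, which is roughly what makes the "almost 2 years" figure come out):

```python
# Back-of-envelope payback period for spending dev time on LLM cost cuts.
# All inputs are illustrative assumptions, not measured figures.
def payback_days(dev_day_cost: float, savings_per_invocation: float,
                 invocations_per_day: float) -> float:
    """Days of usage needed to recoup one dev-day spent optimizing."""
    daily_savings = savings_per_invocation * invocations_per_day
    return dev_day_cost / daily_savings

# One dev-day (~$700 loaded) saving $1 per invocation, run once a day:
print(payback_days(700, 1.0, 1))   # 700.0 days, roughly 2 years
# The same fix on a path invoked 50x/day pays back in two weeks:
print(payback_days(700, 1.0, 50))  # 14.0 days
```

The point survives the toy numbers: payback time scales inversely with invocation volume, so this kind of cost work only makes sense on hot paths.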
This assumes that that hour shaved was used elsewhere productively which is not the case.
The AI spend does not appear to be a significant chunk of R&D spending (0.3% in 4 months or 1% annualized). If they didn't plan for it, sure, it's not peanuts in the budget, but in context not that much.
The real question is, what did they get for that amount? The article claims that 70% of committed code is now AI-generated, so presumably the code passed review and tests. Did it accelerate the feature count? Did it reduce quality problems? Did it lead to other benefits?
Sadly the article is silent on the outcomes, besides the higher spend.
Maybe 4 months is too soon to assess the benefits. On the other hand, in an agile world ...
[1] https://www.unifygtm.com/insights-headcount/uber
I've been able to get by with the $20/month Pro subscription and reap great value out of Claude Code.
I feel like it really is about:
- Don't feed the works of Shakespeare into the context window if all it's working on is a few files. I actually don't have a Claude.md file in my projects.
- I write the prompt as if I were giving instructions to another developer, or to myself, on how I want to approach a specific coding task, with a numbered step plan. I've actually been able to take the details written into a Jira ticket on a work project, feed it into Claude Code, and get really good results from it.
- If you are responsible for the output, then you need to review the output - that does put a natural constraint on the tool's usage, but ultimately it is you who uses the tool, not the other way around.
I feel like that's the thing - you have to find the right cadence, just like with running or driving a car - you need to find the level at which you control the car, at which you maintain a consistent pace, and at which you get code that does what you need it to do and meets the quality threshold you want.
1. You get out of it what you put into it. A savvy CTO might be incredibly excited by everything they can do with agents, and improperly think that all the software engineers can do the same thing, when in reality your org's average software engineers might not have the creativity to even think of many cases where it could save them work. So by mandating agent usage, you might find that productivity hasn't improved while AI costs have increased.
2. When using AI, there are two gaps that become more obvious. First is the gap of: who tells the agent what to do? In many orgs, product isn't technically savvy enough to come up with a detailed spec/plan that LLM can use. And many cog-in-machine developers aren't positioned to come up with the spec, they just want to implement it. By expecting work to be implemented by agent-using developers, you might instead find a lot of idle workers waiting for work to show up. Second is the qa/review cycle. You've introduced a big change to the org but are you really saving cost or shifting it?
I'm all for introducing LLM as optional to help existing developers increase velocity and quality, but I think the "let's restructure the org" movement is really dicey, especially for mid-size or smaller employers.
Beyond that, it's a force multiplier, and it doesn't care if the force is positive or negative. Someone with poor software engineering principles can use AI to make an absolute mess quickly.
I am biased because I have more of a product mentality than other developers, but I think these are the people better positioned to be more productive with agents: know enough tech to be able to implement things with agents, and know enough product to know what should be implemented.
I expect other companies to follow.
Yes, productivity implies revenue (or cost reduction), and revenue is measurable.
However:
1. You spend money today to build features that drive revenue in the future, so when expenses go up rapidly today, you don’t yet have the revenue to measure.
2. It’s inherently a counterfactual consideration: you have these features completed today, using AI. You’re profitable/unprofitable. So AI is productive/unproductive, right? No. You have to estimate what you would’ve gotten done without AI, and how much revenue you would’ve had then.
3. Business is often a Red Queen’s race. If you don’t make improvements, it’s often the case that you’ll lose revenue, as competitors take advantage.
4. Most likely, AI use is a mixture of working on things that matter and people throwing shit against the wall “because it’s easy now.” Actually measuring the potential productivity improvements means figuring out how to keep the first category and avoid the second.
This isn’t me arguing for or against AI. It’s just me telling you not to be lazy and say “if it were productive you’d be able to measure it.”
I think the prevailing (correct) consensus is that developer productivity is actually very hard to measure, and every time measurement is attempted, the measure is immediately made a target, rendering the whole thing pointless even if it had been a solid measurement, which it wasn't.
IDK where you're getting the idea here that measuring productivity of anyone who isn't a factory worker is easy.
See the second comment on this article. https://news.ycombinator.com/item?id=47976781
See @emp17344 responding to me.
It's saying that: cost vs revenue is something we can see.
If I buy a plow for $2,500 and it enables growth of $5000, then arguing "the plow was expensive" is a moot point.
It doesn't make any argument about measured productivity, only investment vs return.
Totally but new features in their app or better software are not going to increase Uber's revenue/profit significantly.
We doubt the productivity because we have enough experience with Claude Code to know that flooding your organization with that many tokens isn't just unproductive, it's actively harmful.
They gave up on self-driving, so that's not it.
If only. The optimizations they do on their matching algorithm have made the UX so terrible that I regularly use Lyft instead now.
"X is just Y - why is it so complicated?"
It's lazy and boring to read these on every thread about a disliked big company.
At the same time the subscription will allow the same usage for hundreds of dollars a month.
Either Anthropic is absolutely hosing API users, massively subsidizing subscriptions, or a little bit of both.
"Cursor estimated last year that a $200-per-month Claude Code subscription could use up to $2,000 in compute, suggesting significant subsidization by Anthropic. Today, that subsidization appears to be even more aggressive, with that $200 plan able to consume about $5,000 in compute"
If 95% of people are using $100 of value a month, the whales may not be hurting them that badly.
I say "Harness" because it's just a web interface that uses `claude -p` so I can run it in containers and access it remotely.
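For anyone curious, a minimal sketch of such a harness, assuming only the `-p` (non-interactive "print" mode) and `--model` flags of the Claude Code CLI; the function names and the timeout are my own choices, not part of the CLI:

```python
# Minimal headless wrapper around the `claude` CLI so it can be driven
# from a web backend or inside a container. `-p` runs a single prompt
# non-interactively; `--model` selects the model.
import subprocess

def build_cmd(prompt: str, model: str = "haiku") -> list[str]:
    """Assemble the CLI invocation for one prompt."""
    return ["claude", "-p", prompt, "--model", model]

def run_prompt(prompt: str, model: str = "haiku") -> str:
    """Run one prompt and return Claude's stdout, raising on CLI errors."""
    result = subprocess.run(build_cmd(prompt, model),
                            capture_output=True, text=True,
                            timeout=300, check=True)
    return result.stdout.strip()
```

The same wrapper is what makes the "unbounded money burn" in sibling comments possible: once it's a one-line Python call, nothing in the UX reminds you each call costs real money.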
They are getting you hooked on cheaper tokens, then raking it in once you're at scale. I'm sure Uber gets a break on list price, but I doubt they are anywhere near the <150-employee subscription pricing.
But things to note:
1) the per user license fee is almost certainly waived.
2) if you look in teams, when you buy extra credit, you get a 30% discount if you buy in bulk.
Unless you default into enterprise from teams, you're almost certainly not going to pay the list per-token price.
You can cap per user, but without a rolling cap, are you really going to tell a member of your team "No AI for the rest of the month"?
It’s a risky deal as it sets up now IMO.
Years ago I did work for a company that was spending over a million on Oracle product licenses and I was part of the consultant team they hired to rip it all out and just go for simple maintainable code based on open source products. Not only did it transform into a codebase that the average newly hired developer could maintain, you also had the savings of not paying Oracle a significant portion of your revenue.
I feel like that will repeat itself in a few years time with the current cloud and AI train everyone is on.
I haven't been in a professional setting for a while, I just code for fun nowadays so perhaps I'm somewhat out of the loop.
Here's a much better article: https://aimagazine.com/news/why-uber-has-already-burned-thro...
This infers value from spend, which makes no sense. Burning the budget tells us engineers like the tool, not that it's producing value.
Show me how to make two dollars whilst spending one, and budget isn't a problem.
Tokenmaxxing seems more and more like a way to encourage experimentation and learning, and incidents like this are a part of learning. Like, today devs simply use the most expensive model by default, even to do extremely simple things. This is obviously wasteful and costly, and budgets will soon be imposed, but this is how they're figuring out the economics.
For instance, like we estimate story points, we may estimate token budgets. At that point, why waste time and money invoking a model for a simple refactor when you could do it with a few keystrokes in an IDE? And why use a frontier model when an open-source local model could spit out that throwaway script? Local models can be tokenmaxxed, but frontier models will still be needed and will be used judiciously. Those are essentially trade-offs, and will eventually be empirically driven, which is what engineering is largely about.
So economics will soon push engineers back to do what they're paid to do: engineering. Just that it will look very different compared to what we're used to.
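To make that trade-off concrete, here is a toy router in the spirit of "token budgets as story points" (the tier names and thresholds are invented for illustration, not from any real setup):

```python
# Route a task to the cheapest adequate tier based on its estimated
# token budget. Thresholds and tier names are made-up illustrations.
def pick_tier(estimated_tokens: int) -> str:
    if estimated_tokens < 5_000:
        return "ide"             # trivial refactor: a few keystrokes, no model
    if estimated_tokens < 100_000:
        return "local-model"     # throwaway scripts, boilerplate
    return "frontier-model"      # cross-cutting work that justifies the cost

print(pick_tier(2_000))    # ide
print(pick_tier(50_000))   # local-model
print(pick_tier(500_000))  # frontier-model
```

The thresholds themselves would be the empirically driven part: tuned over time from what past tasks of each size actually cost and delivered.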
That's...not exactly a lot per engineer. It sounds like they just didn't budget correctly. Especially if the net of that work is more features that would have otherwise required hiring more engineers, which would cost a lot more than $500 to $2000 a month.
And I'm not talking about some genius 10x developer who is working with multiple git worktrees on X tasks in parallel at high quality.
> what started as an experiment in productivity became a runaway success
and
> figuring out if the company can afford this level of productivity at scale
It seems like they're equating "developers are spending a ton of money on this" with "this is creating a ton of value".
I'm not saying that AI tools aren't valuable, but the article doesn't question this equivalence at all.
[0]: https://finance.yahoo.com/sectors/technology/articles/ubers-...
https://investor.uber.com/news-events/news/press-release-det...
That's a bit of a logical leap with no demonstrable increase in productivity.
All this shows is that they're spending a lot more on AI than they budgeted for. Nothing else.
You get what you measure.
I'm considering rolling out something similar but am not sure if it would exceed the expenses of Claude Code Review at an estimated $20 per PR.
Exactly how Anthropic, OpenAI and co are selling it.
And it works, because it won't stop until the Rust compiles. But the code is garbage and makes bad decisions that no junior would. Unmaintainable junk, and sometimes I spend more time refactoring than if I had just built it myself.
People here are talking about generating 100k+ LoC a month, and I'm wondering if it's a skill issue with me, or with Codex, or if I should pull all my investments out of companies heavily invested in AI, like Uber.
Surprised Pikachu moment.
And it's going to become even more expensive when AI companies start charging to actually make a profit.
or did the engineers just chill and let claude take over daily duties? (this is also a benefit for employees in my opinion)
I wonder how this will end as AI becomes more expensive to use. If you can't quantify ROI then I guess you're cooked.
Also wonder if there is some perverse incentive for models to be verbose to juice tokens.
Successfully burning through cash and tokens, alright, but what have they gotten out of it?
As a founder, the question I always have is "what is the marginal value per token relative to engineer-hours saved." More of a gut feel at the moment, but would be great to calculate.
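One way to turn that gut feel into a number (every input here is an assumption to be replaced with your own data):

```python
# Marginal value per million tokens: engineer-hours saved, priced at a
# loaded hourly rate, divided by the tokens it took. Illustrative only.
def value_per_mtok(hours_saved: float, hourly_rate: float,
                   tokens_spent: int) -> float:
    return (hours_saved * hourly_rate) / tokens_spent * 1_000_000

# 2 engineer-hours saved at a $120/hr loaded rate, for 4M tokens:
print(value_per_mtok(2, 120, 4_000_000))  # 60.0 -> $60 of value per 1M tokens
```

If that figure comfortably exceeds what you pay per million tokens, the spend is defensible; the hard part, as the rest of the thread shows, is honestly estimating the hours actually saved.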
When you enter a single inquiry like "find and fix the memory leak in the billing service", you are not submitting just one request. The tool searches an entire code repository for relevant code, pulls 15 related files into context (easily 200k+ tokens), proposes a fix, runs the test suite and fails, takes an entire stack trace of errors into context, and loops to keep iterating toward the solution. In that process it can loop many times (10+) in a very short period of time (within 5 minutes). While you grab a cup of coffee, you will have consumed $20 in token usage. At the enterprise level (like with Uber), when you multiply that out by thousands of software developers using it as a personal shell tool, your budget disappears very, very quickly.
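A rough simulation of that loop shows how the cost compounds (context sizes and the per-token price are illustrative assumptions, not any vendor's real rates):

```python
# Each retry of an agentic loop re-sends the accumulated context plus
# fresh stack traces, so input tokens grow roughly linearly per pass.
def loop_cost_usd(iterations: int, context_tokens: int,
                  extra_per_iter: int, price_per_mtok: float) -> float:
    total_tokens = 0
    for i in range(iterations):
        # base context, plus the traces accumulated over earlier passes
        total_tokens += context_tokens + i * extra_per_iter
    return total_tokens * price_per_mtok / 1_000_000

# 10 passes over a 200k-token context, +20k tokens of traces per retry,
# at an assumed $10 per million input tokens:
print(round(loop_cost_usd(10, 200_000, 20_000, 10), 2))  # 29.0
```

Note that most of the cost comes from re-sending the large base context every pass, which is why prompt caching and context pruning matter so much at scale.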
And on your point about the junior developer: comparing $100,000/year in tokens to hiring a junior developer is such a ridiculous false equivalence that it makes you question whether they understand how to make the comparison at all.
The cost to a business of one junior engineer with a $100,000 salary is not just the $100,000 in salary but also an additional $40,000+ in benefits and taxes, as well as in hardware.
Also, you are disregarding another cost of hiring junior engineers that is their mentorship cost. Each week, your senior and staff engineers spend hours mentoring junior engineers by reviewing their code, pairing with them, and unblocking their progress. Mentoring requires a substantial amount of time and will be expensive to your business.
The return on investment (ROI) for the $10,000 monthly expenditure on tokens is not so much about replacing the junior engineer with AI. Instead, the ROI is that your senior engineers can use the huge amount of compute to create boilerplate and tests, and refactor their code, 3x quicker than if they had to mentor junior engineers. In addition, LLMs do not sleep, require one-on-ones, or leave for another company for 20% more pay in 18 months, just when their knowledge of the code base had made them an asset to your business.
Lastly, the main reason Uber has a problem with its AI spend is that, due to the UX of these agentic tools, developers think of the API calls made to the AI as free and, as a result, treat them like a basic grep command.
Imagine integrating dozens of payment methods - many of them highly localized - across emerging and developed markets, while dealing with fraud, chargebacks, KYC, AML, and settlement complexities.
Imagine processing trillions of data points every day - rides, location updates, pricing signals, ETAs, traffic conditions, demand forecasts, payments, support events... storing it efficiently, querying it in near real time, generating reports, and keeping the whole pipeline reliable. I have worked in data engineering and can tell you confidently that this alone requires an enormous R&D budget.
Then there are the apps - not just customer-facing, but driver-facing, courier-facing, merchant-facing, fleet-management, onboarding, support, operations, compliance, finance, and hundreds of internal tools and dashboards.
Then come the integrations. Companies running at Uber's scale generally have hundreds of these - mapping providers, payment processors, banks, identity verification, tax systems, telecoms, customer support platforms, fraud detection, analytics, ERP, CRM, and more.
... And then there are even more...
Real-time routing and dispatch optimization
Dynamic pricing and marketplace balancing
Fraud detection and account security
Driver/rider safety systems
ML models for ETA, demand forecasting, incentives, and churn prevention
Experimentation infrastructure for thousands of A/B tests
Reliability engineering across globally distributed systems
Data centers / cloud optimization at massive scale
Localization across languages, currencies, addresses, and cultural norms
Customer support automation at global scale
Autonomous vehicle research, mapping, and computer vision
... to be fair, this is all I could think of based on my own work experience in related fields... in reality there are definitely as many more systems again as the ones mentioned above.
They are using it to mean a mechanism that produces prodigious amounts of toxic waste. That does not conform to the historical understanding of the word.
... but the key fact about "$500-$2000" per engineer does not appear there, and seems to be fabricated.
Where oh where can I find clients like these??
How are they calculating that? They could be using my tool, Buildermark, but I don't think they are: https://buildermark.dev
What you end up with:
- A codebase that has exploded in size 2-3x in just a few months.
- Internal architecture that is no longer layers of simple parts, but layers of complex architectures corresponding to individual agentic runs.
- A codebase with 10 times more if-else branches and individual codepaths, because you were not clear enough in your requirements and used the phrase "handle all cases".
- A codebase that neither you nor anyone else properly understands, so no one can say what's possible anymore, or at what cost, when your manager or PM asks.
- And finally, due to the combined effect of all of the above, an ever-increasing token budget and ever more fragile AI-generated code from repeated context compactions.
And we haven't even touched on the security and performance elements yet.
The right way to use these tools is as, what I like to call, "code-monkeys": you tell them exactly what you want, where you want it, how to do it, how to architect it, and more... and then make them code.
I've been using all these tools since they started popping up around 2021, personally and professionally. I've probably built four or five products at this point with their assistance, not to mention the thousands and thousands of back-and-forth conversations for research or search or rubber-ducking or whatever.
I have never spent more than whatever the professional plan is that's consistently $20 a month.
I asked a friend of mine who spent a couple hundred dollars in like a few hours how they did it. The answer was that they were basically getting groups of agents stuck in a loop, constantly generating verbose bullshit that is never interrogated and doesn't produce any inspectable artifact, no matter how expert you are.
The couple of stories I have heard of these massive, crazy spends are of people literally assuming these things can complete an entire human task in one shot, so they keep hitting the "spin the wheel" button until they get something closer to what they want.
But I've yet to see that actually work, and it flies in the face of every instruction guide, piece of documentation, or prompt-engineering process that has been described over the last almost five years.