Discussion (45 Comments) from the original HackerNews thread
I think the push from management for us to use AI has removed any pressure to be efficient with our consumption, so now we write md files which we feed to Claude in a loop instead of python and bash scripts to do routine tasks.
We're all being measured on AI usage, so...
Instead of running a grep | uniq | awk that would give me an answer in 100 milliseconds for free, I launch a prompt that spends 30 seconds on it and costs actual money.
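For illustration, a hypothetical version of the kind of one-liner the commenter means: counting the distinct client IPs behind 500 responses. The log file and its format are invented here, not from the comment.

```shell
# Fake three-line log in a roughly combined-log shape (invented data)
printf '%s\n' \
  '10.0.0.1 - - "GET /a" 500 123' \
  '10.0.0.2 - - "GET /b" 200 456' \
  '10.0.0.1 - - "GET /c" 500 789' > /tmp/access.log

# Distinct IPs that hit a 500: filter, project the IP column, dedupe, count
grep ' 500 ' /tmp/access.log | awk '{print $1}' | sort -u | wc -l
```

On a real log of a few million lines this still finishes in well under a second, which is the comparison the comment is making.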
I hope we get over this phase of the hype soon. I want and will use AI as a tool, but it's just another (good) tool in the toolbox.
When I need to do a one-off investigation, it's great to use AI and spend 5-10 minutes querying and get my answer for $5 or so, instead of having to spend 2-3 hours writing a script which I'll discard. That's a great use case.
But using AI for routine processing done daily, where writing a script would be amortized over thousands of runs, is insane. I'd rather use AI to write the script and then not need the AI anymore; the script will be faster and free. Oh, but then my AI usage in the executives' report drops. Can't have that. Waste away.
Part of that is the ridiculous belief that they can create "AGI" by just gluing together enough LLMs.
Presumably it's also financial viability. You can't charge thousands a month without replacing those "highly trained engineers" with a bunch of kids in the developing world.
It's worse than that, in many cases management actively rewards inefficiency. It's like Friedman's "why not spoons?"
Having an engineer paid $100/hr write a 150-line Python script and test it to the same extent could take a few hours, so total costs rise meaningfully.
A Chinese factory can train sweatshop workers in two weeks on a new pillow design. A dedicated machine costs millions and can't pivot. Human labor wins not on capability. The machines exist. It wins on flexibility per dollar. And the ratio still favors humans by an order of magnitude in most categories.
Agent replacements are the dedicated machine. Their real cost isn't tokens. It's tokens plus the engineer wrapping them, plus orchestration, plus the supervisor, plus the eval pipeline, plus the rebuild every time a model version subtly changes behavior. The team you replaced could pivot in two weeks. The agent stack can't.
Flexibility per dollar is the gap.
Stable environments naturally drive populations towards more specialized actors in niches as they benefit from efficiency. Think of leverage in the financial economy or the dinosaurs.
When a big system disruption inevitably arrives, you better hope you still have some depth around with adaptable general populations that can survive the crisis and occupy the new environment. Think of Minsky moments and the K-Pg event for the dinosaurs 66 million years ago.
Another example would be stem cells vs organs and their specialized cell types.
It seems to me like you need enough regular change to avoid overspecialization and preserve the ability to survive large changes.
because companies will need "proof of productivity gains or metrics that show a clear return for all this AI investment"
which in my opinion is simply not true. I haven’t seen any good study that showed AI to actually improve productivity overall. It massively helps in some areas, but then promptly gets stuck in others. So you still need an expert to guide it.
AI is overhyped, but on the other hand, I think it would be difficult to deny the significant productivity increases when used appropriately.
For some tasks, it's huge. Some tasks that I might've spent 8 hours on, I can do in 20 minutes. That's very real and huge.
At the same time, that's not the average that I experience. Some things are pretty much a wash. Others might be 2x or 3x faster which is quite nice, but short of the hype. And some things can be very clearly slower with AI. Also some things are more unreliable with AI.
We need to get to a maturity point where we realize it's just another tool. An incredibly powerful one for many tasks, yes. But it's not magically the right tool for everything and not always the right answer.
I think we have all heard of (or are living through) mandates to prove that AI makes us more productive, or else...
We'll see how many of these actually work out.
After I discovered how to use git worktrees in Codex to work in three conversations in parallel, I am able to build apps with a scope that simply was not realistic before.
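A minimal sketch of the worktree setup being described (paths and branch names are invented): one repository, three independent working directories, each usable by a separate agent conversation without stepping on the others.

```shell
# Fresh throwaway repo (identity set inline so the commit works anywhere)
repo=$(mktemp -d)/demo
git init -q -b main "$repo"
cd "$repo"
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m init

# Two extra checkouts, each on its own branch
git worktree add -q ../demo-a -b feature-a   # conversation 1 works here
git worktree add -q ../demo-b -b feature-b   # conversation 2 works here
git worktree list                            # main checkout plus two worktrees
```

Each directory has its own checked-out branch and index, so three agents can edit and commit in parallel against the same object store.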
There was one feature/screen that Codex built in a single 5k LOC file.
It was still perfectly capable of developing the feature and it was working as expected.
I had it break the feature down into multiple files, but if I hadn't seen it during the MR review, I would not have noticed. The large file did not seem to degrade the performance of the agent.
You might think that this would lead to a mess with merge conflicts, but the agent can resolve them automatically.
I added an instruction to AGENTS.md so that before handoff it fetches and rebases, resolving conflicts if needed and rerunning the tests.
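End to end, that pre-handoff step could look something like this. Everything concrete here (repo layout, branch names, the trivial "test") is invented to make the sketch self-contained; only the fetch-then-rebase-then-retest shape comes from the comment.

```shell
set -e
tmp=$(mktemp -d)

# Invented upstream repo with one commit on main
git init -q -b main "$tmp/upstream"
cd "$tmp/upstream"
git config user.email dev@example.com && git config user.name dev
echo base > file.txt && git add file.txt && git commit -qm base

# The agent's clone, with a feature branch in progress
git clone -q "$tmp/upstream" "$tmp/work"
cd "$tmp/work"
git config user.email dev@example.com && git config user.name dev
git checkout -qb feature
echo feature > feature.txt && git add feature.txt && git commit -qm feature

# Meanwhile, upstream main moves ahead
(cd "$tmp/upstream" && echo more >> file.txt && git commit -qam more)

# The pre-handoff step: fetch, rebase onto the updated base, rerun tests
git fetch -q origin
git rebase -q origin/main
ls file.txt feature.txt   # both survive the rebase; a real test suite would run here
```

The `|| git rebase --abort` escape hatch is worth adding in practice, so a conflict the agent can't resolve leaves the branch in its pre-rebase state instead of mid-rebase.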
I think it's the same disease that makes people build shitty, unoptimized, bloated apps because modern client machines have so much RAM. But that won't work with AI agents. Not until tokens become dirt cheap, anyway. Until then we'll need apps with more efficient usage patterns.
People are willing to accept on faith that the token price will come down, or that efficiency will go up even more. Meanwhile, they are sure of the cost of human workers from decades of data we've had.
So... if you spend $3m to replace a $1m team... you are betting on that $3m cost coming down. It's a proof of concept. The first step is to find out if agents can do the job at all. At this point you are hoping future versions will get more efficient.
Trying to make something efficient before you know that it is even possible is hard.
Drop-in, profitable on day-1 isn't what the frontier looks like.
There are already many things that can be done now to bring down token use: better planning, tests, language servers, MCP compression. Don't use claw, teams, swarms, Ralph loop, or scheduled tasks unless there is a clear use case.
The point is that efficiency comes after, not before.
Yes, there is, because I made one: basically it archives archive.is pages to archive.org (I have listed it way too many times, but feel free to find it in my submissions).
https://web.archive.org/web/20260427063707/https://serjaimel...
hope this helps ya.
You have more options there; it seems the removepaywall.com option works as well.
1. Build-out and Competition: (current phase) Multiple AI companies write down massive debt while building data centres and offering sweetheart deals to customers in an attempt to dominate the market. The financial numbers will be silly by design in this phase because it's all predicated on obliterating/outlasting the competition so you can move on to...
2. Enshittification and Exploitation: With most competition wiped out, the survivors will have to pay their debts. A chainsaw will be applied to every corner that can be cut (and many that shouldn't). Prices will be jacked up mercilessly.
3. Maturity: Eventually, once debts are paid down, the technology will reach the point where it's cheap and omnipresent. It might be good. It might not be. e.g. Web search is "mature", but it kind of sucks right now.
AI users are going to become more efficient in how they use it, and they're also going to learn when AI is appropriate to use and when it isn't. AI itself will likely improve long-term, but it may get worse at times. It's definitely going to get much more expensive. The math is going to change during each of these phases. Businesses that torch their human capabilities and become dependent on AI during Phase 1 are headed for rough sailing in Phase 2.
The best part of their AI argument for lowering fees: the AI is crap. It can help with QA, but 98% of its reports are still false positives, and it can't really do almost any task.
So I told them to feel free to replace me with AI if they think AI can do my job, and to send me only the tasks AI can't do, but to keep my rates the same (the reality is that AI by itself can't do any of my tasks). I also didn't warn them that I'll be introducing new rush/holiday fees I'm not charging yet and that are currently included in the rate, plus a new AI fee for tasks AI simply can't do. The only result will be that maybe I'll get fewer tasks, but I will make sure to charge more for the ones AI can't do.
It gets worse when you look at LLM (or any other kind of AI) benchmarks: they tend to cap out around 110% of human performance.
The more that LLM services try to creep towards profitability, and the more features they paywall behind higher tiers, the more some lazy junior dev is going to look like a better value proposition.
And when some of the CTOs they have pushed LLMs onto go looking for cost savings, some of them are going to look at opex instead of capex and bring the LLMs in-house using open models.
The only real question to my mind is whether the air will be let out of the AI balloon slowly, or if it escapes in one big pop.