Discussion (30 Comments)
Unlike people who take the extreme position that vibe coders are useless, I do think LLMs often write individual functions or methods better than I do. But in a way, that does not fundamentally change the nature of the work. Even before LLMs, many functions and methods were effectively assembled from libraries, Stack Overflow snippets, documentation examples, and copied patterns.
The real limitation comes from the nature of transformer-based LLMs and their context windows. Agentic coding has a ceiling. Once the codebase reaches a scale where the agent can no longer hold the relevant structure in context, you need a programmer again.
At that point, software engineering becomes necessary: knowing how to split things according to cohesion and coupling, using patterns to constrain degrees of freedom, and designing boundaries that keep the system understandable.
In my experience, agentic coding is useful for building skeletons. But if you let the agent write everything by itself, the codebase tends to degrade. The human role is to divide the work into task units that the agent can handle well.
Eventually, a person is still needed.
If you make an agent do everything, it tends to create god objects, or it strangely glues things together even when the structure could have been separated with a simpler pattern. Thinking about it now, this may be exactly why I was drawn to books like EIB: they teach how to constrain freedom in software design so the system does not collapse under its own flexibility.
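To make the "constrain degrees of freedom" point concrete, here is a minimal, hypothetical Rust sketch (the names and types are mine, not from the discussion) of the alternative to a god object: each concern sits behind its own small trait, so any change, whether made by a human or an agent, is confined to one seam at a time.

```rust
// Hypothetical sketch. The anti-pattern an unsupervised agent drifts toward
// is one struct that parses, validates, persists, and notifies all at once.
// The constrained alternative: one small trait per concern.

trait Validator {
    fn validate(&self, order: &Order) -> Result<(), String>;
}

trait Repository {
    fn save(&mut self, order: &Order) -> Result<u64, String>;
}

struct Order {
    item: String,
    quantity: u32,
}

// The orchestrator depends only on the traits, never on concrete types,
// so the skeleton stays stable while implementations vary.
fn place_order<V: Validator, R: Repository>(
    validator: &V,
    repo: &mut R,
    order: Order,
) -> Result<u64, String> {
    validator.validate(&order)?;
    repo.save(&order)
}

struct QuantityValidator;

impl Validator for QuantityValidator {
    fn validate(&self, order: &Order) -> Result<(), String> {
        if order.quantity == 0 {
            return Err("quantity must be positive".to_string());
        }
        Ok(())
    }
}

struct InMemoryRepo {
    next_id: u64,
}

impl Repository for InMemoryRepo {
    fn save(&mut self, _order: &Order) -> Result<u64, String> {
        let id = self.next_id;
        self.next_id += 1;
        Ok(id)
    }
}

fn main() {
    let mut repo = InMemoryRepo { next_id: 1 };
    let order = Order { item: "widget".to_string(), quantity: 2 };
    match place_order(&QuantityValidator, &mut repo, order) {
        Ok(id) => println!("saved order {id}"),
        Err(e) => eprintln!("rejected: {e}"),
    }
}
```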
If AI replaces everything, then I become unnecessary. So maybe I am simply trying to convince myself that developers like me are still needed.
That said, realistically, I still think there are limits unless the essence of architecture itself changes. I also acknowledge part of your perspective.
Those of us who are not in the AI field tend to experience AI progress not as a linear or continuous process, but as a series of discrete events, such as major model releases. Because of that, there is inevitably a gap in perspective.
People inside the industry, at least those who are not just promoting hype, often seem to feel that technological progress is exponential. But since we are not part of that industry, we experience it more episodically, as separate events.
At the same time, capital has a self-fulfilling quality. If enough capital concentrates in one direction, what looked like linear progress may suddenly accelerate in an almost exponential way.
However, even that kind of model can eventually hit a specific limit. I do not know when that limit will arrive, because I am not an AI industry insider. More precisely, I am closer to someone who uses Hugging Face models, builds around them, and serves them, rather than someone working on AI R&D itself.
Most of what these PMs can produce nowadays turns boardroom heads, sure. But it's just that: visuals and just enough prototype functionality to fool the people you're demoing to. I've seen enough of these in the recent past.
Will there be some PMs that can become "software developers" while armed with an LLM? Sure!
But that's not the majority. On the other hand, yes, there are going to be "software developers" who end up out of a job because of LLMs: the devs who were FS and could take an idea from 0 to 1 with very little overhead even in the past can now do so much faster and go further without handing off to the intermediates and juniors. They mentor their LLM intern rather than their intermediates and juniors. The perpetual intermediate devs with 20 years of experience are the ones who are going to have a larger and larger problem, I'd say.
The Staff engineer who was able to run circles around others all along? They'll grow their LLM intern into an intermediate rather than having to "10x" a bunch of perpetual intermediates with 20 years of experience.
I think my reasoning is you still need a tech person to translate from feature to architecture. AI can do both but not everyone knows they need the latter.
The scale of the code doesn't really matter that much, as long as a programmer can point it at the right places.
I think you actually want to be really involved in the skeleton, since from what I've seen the agent is quite bad at making skeletons that it can do a good job extending.
If you get the base right, though, the agent can make precise changes in large codebases.
I mostly agree with the general tendency that it starts to break down as the context grows. But there is also a difference in how people evaluate it. Some people say agents are good at building the skeleton, while others say they are better at extending an existing structure.
I think this depends on the setup, and it is ultimately a trade-off.
In my case, I usually work on codebases around 60,000 LoC. The programs I deliver are generally between 60,000 and 80,000 lines of code. I think I can fairly call myself a specialist at that scale, since I have personally delivered close to 40 projects of that size.
At that scale, I felt that agentic coding was actually very good at building the initial skeleton.
I do not know what kind of work you usually do, but if your work involves highly precise, low-level tasks, then I can understand why you might feel differently.
In my case, I mostly assemble high-level libraries and frameworks into working systems, so that may be why I experience it this way.
Like a child growing up!
Also, like a cancer.
Similar process, different outcomes.
If you organize your product into a collection of appropriately scoped libraries (the library is the right size for the LLM to be able to comprehend the whole thing) then the project size is not limited by the LLM comprehension.
Your task management has to match: the organization of your ticketing system has to parallel the codebase.
With this the LLM can think at different scales at different times.
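As a hypothetical illustration of "appropriately scoped" (the crate layout and names are assumptions, not from this thread): each library exposes only a small facade, so an agent working anywhere else needs just that public surface in its context, never the internals.

```rust
// Hypothetical single-file stand-in for one small library in a larger
// workspace; inline modules play the role of the crate's private files.

mod parser {
    pub fn parse(raw: &str) -> Vec<String> {
        raw.lines().map(|line| line.trim().to_owned()).collect()
    }
}

mod renderer {
    pub fn render(items: &[String]) -> String {
        items.join(" | ")
    }
}

// The entire public surface: one type, one function. Small enough to fit
// in a prompt, so nothing outside this library ever loads the internals.
pub struct Report(pub String);

pub fn build_report(raw: &str) -> Report {
    Report(renderer::render(&parser::parse(raw)))
}

fn main() {
    let report = build_report("alpha\nbeta");
    println!("{}", report.0); // prints: alpha | beta
}
```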
Like, it's not surprising that the developers who frequently talk about +90% of their work being delegated to LLMs are web developers. That is a field with very little innovative or complex code, it's mostly just grunt work translating knowledge of style rules and markup to code, or managing CRUD. I'm really thankful I can have a language model do that drudgery for me.
But compare that to, e.g., writing a multithreaded multiplayer networking service in Rust: there they fall woefully short at generating code for me. They can be used in auxiliary aspects, like search or debugging, but the code they produce without substantial steering is not usable. It's often faster for me to write the code myself, because what's required is not a substantial amount of low-impact code but a small amount of complex, high-impact code that needs to satisfy many invariants. That code is fast to type; the majority of the work is elsewhere. At the end of the day, they work really well for replacing boilerplate typing, which is much appreciated.
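For flavor, here is a small hypothetical fragment of the kind of invariant-heavy code being described (the names and types are illustrative assumptions): the volume of code is trivial, and the real work is the lock discipline around it.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical shared session state for a multiplayer service. The invariant
// that matters, and that unsteered LLM output easily violates, is that the
// mutex is held only for the map mutation and never across a blocking call.
type Sessions = Arc<Mutex<HashMap<u64, String>>>;

fn handle_join(sessions: &Sessions, player_id: u64, name: String) {
    // Lock, mutate, drop: the guard must not outlive this statement.
    sessions.lock().unwrap().insert(player_id, name);
}

fn main() {
    let sessions: Sessions = Arc::new(Mutex::new(HashMap::new()));
    let handles: Vec<_> = (0..4u64)
        .map(|id| {
            let sessions = Arc::clone(&sessions);
            thread::spawn(move || handle_join(&sessions, id, format!("player-{id}")))
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    println!("{} sessions", sessions.lock().unwrap().len());
}
```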
Both types expect you to spend as many tokens as possible so that the AI bubble doesn't burst (presumably because leadership has a financial interest in this).
Your actual productivity isn't important. If you point out that you're much faster writing code on your own in 90% of cases, you will be told you're not good at AI, you're not prompting it correctly and that generally you're not AI-native and that you'll be left behind. To be precise, token usage is a performance metric, so you'll be let go if Claude is not running continuously 8 hours a day.
I'd like to know how many places have mandates to write 100% of your code using AI, as well as to max out your AI agent's plan. For some reason nobody talks about it even though I know several companies around the world that are forcing this on their employees.
If you're looking for a job then you don't have a choice, it's better to have an income. But if you're looking to change jobs to get away from AI to actually be productive and gain experience then it's a very bad job market.
I've been searching for a job for many months, and I do see the uptick quite clearly.
Fashion is when developers jump on the next web framework because they got bored of the old one.
But when you get fired for not enough token usage, that's something else. When bosses start demanding you write 100% of your code using AI, and a few months later Anthropic reports a 30% increase in usage, that's not fashion. The people who invested in AI are putting a lot of pressure on developers to make sure their investment pays off.
Token billing is coming very soon; there won't be a "plan".
What will these companies do then?
Unavoidable AI-based productivity growth, in software and in all other industries, will lead to software, and specifically AI in this case, not just eating the world but devouring it. Such an AI revolution will mean even more need for software engineers, just as the Personal Computer revolution and the Internet revolution did in their times. Of course, software engineering will change, as it did in those previous revolutions.
There is no productivity growth attributed to AI. In fact, serious attempts to measure AI performance show that even if AI makes some code entry tasks faster, total product delivery times are, in fact, increased.
(This should be obvious once you realize coding AIs are technical debt generation machines.)
I think we've gone beyond anecdotal evidence of experienced engineers finding true value in this new tech. It may not have registered yet, but skilled people are unequivocally finding value in these tools.
I agree that we have yet to settle on the true costs involved (which will probably end up at "slightly less than a junior engineer" or something like that), but we are months beyond the idea that it's all smoke and mirrors and no one is getting value out of it.