Discussion (146 Comments)
The claim you should know everything about everything you work on is an intensely naive one. If you've worked on a team of more than one, there's a lot of stuff you don't totally grok. If you work in an old code base, almost every bit of it is unfamiliar. If you work in a massive monorepo built over decades, you're lucky if you even understand the parts everyone considers you an expert in.
I often get the impression folks making these claims are either very junior themselves, work basically alone, or have worked on one project for 20 years. No one who works in a team or larger org can claim they know everything in their code base. No one doing agentic programming can either. But I can at least ask the agent a question and it will be able to answer it. And after reading other people's code for most of my adult life, I absolutely can read the LLM's. The fact that a machine wrote crappy code vs a human bothers me not in the least, and at least the machine will take my feedback and act on it.
Due to an English-language limitation for most of my adult life, I struggled to code. I used visual coding tools, etc., but of course I can't make a living on a drag-and-drop harness.
Then came GPT-3.5, which accelerated my learning. Now I'm running my incorporated company and just launched one software-hardware hybrid product. The second is a micro-SaaS in closed beta.
The point is: when people treat "juniors" as fixed blobs of matter, they focus on the juniors who were going to make mistakes in any case, AI or not. That misses the key point of agentic usage.
The bar to "start" is lower and the bar to actual competency is higher now; juniors who want to actually learn, instead of just pressing enter over and over again, will do so regardless of whatever you do to "help" them.
This doesn't happen at all with agentic coding: what the programmer wants and what the boss wants are pretty well aligned. There are corner cases where someone isn't allowed to use LLMs but does it anyway, but in most cases the organization agrees.
It’s not all that different than writing code directly and having it turn into a mess they can’t debug—something we all did when we were learning to program.
It is in many ways far easier to write robust, modular, and secure software with agents than by hand, because it’s now so easy to refactor and write extensive tests. There is nothing magical about coding by hand that makes it the only way to learn the principles of software design. You can learn through working with agents too.
Don’t forget to mention that.
Y'all need to stop worrying about the kids.
They're smarter than us and will run circles around us.
They're going to look at us like dinosaurs, and they're going to solve problems 10x or more beyond the scale and scope of what we ever did.
Hate to "old man yells at cloud" this, but so many people are falling into the trap because of personal biases.
While the fear that "smartphones might make kids less computer literate" has come true, that's because PCs are not as necessary as they once were. The kids who turn into engineers are fine and every bit as capable.
What is important is not being afraid to learn the rest of your system and keeping an index.
Most importantly it's about being able to spin up on anything quickly. That's how you have wide reach. Digging in when you have to, gliding high when you have to. Appropriate level for the problem at hand.
When I was in college eons ago they taught CS folks all of engineering. "When do I need to know chem-e or analog control systems?" We asked. "You won't. You just need to be able to spin up on it enough to code it and then forget it. We're providing you a strong base."
That holds even within just large code bases.
I disagree with this take. Personally, I pride myself on learning the code bases I work on in detail, sometimes better than the leads for those code bases. I'm not saying that everyone should do so, but it's achievable and not naive at all.
Nothing in the article made that claim.
From a personal perspective, though, I'm apprehensive about the effect AI will have on the human "very well read intern." People who know a lot very deeply about specific areas are fascinating to talk to, but now almost everyone is able to at least emulate deep knowledge about an area through the use of AI. The productivity is there, but the human connection is missing.
The questions came flying in fast, without any introduction, and this was about one external integration out of a dozen. They have their own lingo, different from ours, which made the situation worse.
I had a _very hard time_ making sense of the questions, as I had indeed relied heavily on a model to produce these integrations (extremely boring job + thick external specs provided).
I'm still positive these integrations would simply not have happened, even with 10x the time, if I had not used models. However, I'm now carefully considering re-documenting the "ohhs" and "aahs" of these so that this kind of uncomfortable moment never happens again.
I haven't felt so clueless and embarrassed in a meeting, ever. All I could say was "I'll get back to you on that one, and that one, and this one."
Cognitive debt is very real, and it hurts worse than technical debt on a personal level! Tech debt is shared across the team, cognitive debt is personal, and when you're the guy that built the thing, you should know better!
To be continued... But from now on, the work isn't done if I don't produce a little 5-minute, flash-card-type markdown glossary of "what is this" and "what is that."
This is a common thing doctors complain about. Patients come in, saying they just need a prescription for some drug or other. Good doctors often refuse to give any drugs or any advice until they understand the whole situation properly.
If you're a senior developer, you're the one who has to push back against behaviour you don't like. You have the authority. "Hm, interesting question. I'm going to need more context before I can give you my point of view. Can you give me a quick overview of the system architecture / explain what actual problems you're trying to solve with this approach?"
I think what the OP is saying is that it's the OP's job to know that, and they didn't, because they over-leveraged the LLM.
Like if a doctor was brought in on a cardio consult for their patient because of a maybe-unrelated heart condition, and the only thing they could answer to "why did you prescribe cemidine instead of decimine" is "lemme get back to you on that."
"I'll need to study the docs and code to answer these questions properly" is a perfectly fine (and very diplomatic) response to treatment like that.
But now they're not an expert in the code they've recently committed.
Maybe that's OK and expectations need to change, but I'd bet there are a lot of cases where the organization really wants to produce a (code, expert-in-the-code) pair, and should be willing to pay a little time to do that over producing just (code, guy-who-prompted-it).
It's quite common to search for the author of a piece of code to ask questions about that code.
An additional factor: to find issues in generated code, the developer has to care. Many developers (especially at big firms) are already profoundly checked out from their work and are just looking for a way to close their tickets and pass the buck with the minimum possible effort. Those developers - even the capable ones - aren't going to put in the effort to understand their generated code well enough to find issues that the agents missed. Especially during the current AI-driven speed mania.
Since LLMs have no internal evaluation, as a reviewer one has to account for that and evaluate line by line, rebuilding from scratch any hidden rationale and tacit knowledge the LLM didn't have in the first place, only to be misled into non-concerns that drain costly hours.
At this point, the investment is often deeper than writing from scratch.
First, you've got to plan everything, using whatever Agile or Waterfall planning ritual your company uses, get the task breakdown, file the JIRA tickets, decide who's doing the work. That all can take days or even weeks. Then you need to write a design doc with your proposed design, and get that reviewed by your peers/teammates. Again, another week for any substantial feature. If there are multiple teams involved, you need to get buy-in and design agreement among those multiple teams, let's add another week. At some places, you need approval to commence work, which can take multiple days, depending on the approver's schedule and availability.
Then, you take a day and write the code and make sure it passes tests.
Then, it's code review time, and this can involve a lot of back and forth with your team, resulting in multiple iterations and additional code reviews. Another "days or weeks" stretch. At bigger companies, you're going to need to pass all sorts of reviews from other departments, like legal, privacy, performance, accessibility, QA... even if done in parallel, let's add a conservative 2 weeks. Finally, you push to staging, and need to get some soak time internally among dogfooders, so you have some confidence that it's working. +1 week. Then you're ready to push from staging to prod, but since you work at a serious company, nothing goes to 100% prod right away--you need to slowly ramp up and check feedback/metrics in case you need to roll back. The ramp to fully launched could take another two weeks.
So here's a feature that took, what, maybe two months from design to release, and we're falling all over ourselves to optimize the part that took a day so that it takes 5 minutes instead...
2. Technically risky ideas that you never would have tried, because they didn't make sense from a risk+effort/reward standpoint, are now within reach. It isn't "go faster" per se, but the speed at which you can try something out still changes the nature of the engineering process.
I confess that I don't understand why this isn't true, because it seems to be true on the micro level, but it really hasn't been my experience. The platform engineers I'm familiar with are desperately trying to tread water to keep their systems healthy against the now-higher code velocity without falling to pieces. (Perhaps people used to make minor day-to-day improvements while coding that Claude enables us to ignore?)
This reminds me of one of my software engineering axioms:
> So here's a feature that took, what, maybe two months from design to release, and we're falling all over ourselves to optimize the part that took a day so that it takes 5 minutes instead...

Well said.
Ask the agent questions about all the other teams' code, reaching out to those teams for questions it can't answer or for clarification. With agent capabilities at the moment this is rarely needed, or can be done fairly asynchronously: "please confirm these things."
Maybe realise your code architecture is completely wrong. Manually code up some new abstractions that fit better, write the learnings into the spec plan. Strip out any implementation that largely doesn't fit your updated abstractions. Ask the agent to migrate the code to the new structure.
Repeat until spike is operational and you're happy with the abstractions used
Chat with the agent to create a Design Doc for the approach in the spike. Create a single JIRA ticket for "Productionise CodeShmode's spike". Get reviews and feedback from stakeholders.
Integrate feedback into your spike, or even the original spec document and regenerate the whole thing.
So much of the ritual you've outlined here is overhead from working in a large org where roles are siloed. When no one person is empowered to do more, the actual work per person goes down and the overhead becomes the dominant cost. But that overhead isn't needed anymore, because one person can now do many people's work.
I've whipped up spikes in a few days that would've been a month of work across a team, with multiple design docs and approvals. In the past this wasn't feasible, so we would need to justify what those people would work on. Now you can whip it up, show a working demo, and ask "should we productionise this?"
Big tech has a lot of wankery like that but smaller companies can be fast and scrappy
Short-lived tightly-scoped agents can do alarmingly thorough and high-quality knowledge work, as long as the work itself is relatively mechanical and can either be carried out in independent chunks or sequentially. For example, a research agent like the Gemini "deep research" tool can save hours of digging around the web and compiling information. With careful prompting, sufficient background context, and good self-evaluation tools, an agentic loop can do very detailed data analysis, carry out serious statistics and machine learning projects, produce high-quality data visualization thereof, and put together a handy executive summary.
They occasionally hallucinate, go off track, get confused, and make mistakes. But they "know" everything that's been published in English for the last 200 years, they never get tired, and the code they write is good enough for throwaway scripting. The real power of agents being able to write code is that they can be extremely self-sufficient and flexible in carrying out these kinds of tree- and sequence-structured knowledge work tasks.
That's of course a different thing from "designing good software", which is neither tree-structured nor sequential, and requires a level of intelligence (for lack of a better term) that LLMs do not seem to be capable of, at least not yet. But that's a more specific thing than just writing code in order to get stuff done that happens to require code.
AI writes the plans now. I just review and modify.
There is skill loss from heavy AI use.
But I want to acknowledge the awkward elephant in the room: AI is making people too fast. I don't mean that faster output is bad. It's faster output of code without the full understanding and experience that come from producing the code. It's rewarding people who talk about business value over the people who are building, and making safe decisions with deep knowledge.
AI: yes, it's good and it can produce some good solutions; however, it ultimately doesn't know what it's doing and in the best of cases needs strong orchestrators.
We're in a cesspit of business-driven development, and they're not getting the right harsh reputational punishments for bad decisions.
It's not just businesses doing it either; I regularly see big PRs get merged on open source projects that seem fine on the surface but contain a thousand paper cuts' worth of bugs (not critical, but just enough to annoy you).
On top of that, the code wasn't idiomatic C++ (for this specific project) and the LLM completely ignored available APIs. Sure, it can be fixed, and maintainers should've caught it, but the amount of code being generated requires so much energy on everyone's behalf.
I do agree that if we just rely on AI for all outputs and some reviews (at least past a threshold, because we simply can't keep up with the AI throughput as humans) we will eventually have skills atrophy. Here's where the tangents intersect: I've been working on a way to have the best of both worlds. We can still use AI to generate a large swathe of code, but use good old software engineering to do it. My project (https://salesforce-misc.github.io/switchplane/) inverts the control. Rather than having the LLM as the runtime doing all the things, you define and write LangGraph control flows that only use the LLM when judgement is actually required. The basic principle is:
If it's deterministic, write it in code. If it requires judgement, use the LLM.
Switchplane itself is local-only but the principles can be applied to deployed agentic services as well. Because the approach is code-first, we can have that vendor independence: Use whatever model you want anywhere in the graph. One goes down? No problem. Swap the config without impacting the overarching control flow.
Cost becoming a factor? Limit LLM loops or constrain their access however you want. It's just code that needs to be updated. You control the runtime, not the LLM.
Concerned about non-deterministic behaviour when you need determinism? Don't be. It's in code.
Worried about skills atrophying because we're handing off everything to an LLM? That's mitigated somewhat here because you still need to think in systems in order to build execution graphs in the first place.
It might not demo as well as a number of markdown files being executed by an LLM. It's definitely a more reliable approach in the long run though.
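For a rough idea of what the code-first inversion looks like in practice, here's a minimal sketch using LangGraph's StateGraph API. To be clear, this is not Switchplane itself: call_llm, the node names, and the ticket-triage example are hypothetical illustrations of the principle.

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever model client you've configured."""
    raise NotImplementedError


class TicketState(TypedDict):
    ticket: str
    category: str
    reply: str


def classify(state: TicketState) -> dict:
    # Judgement required: let the model pick a category.
    return {"category": call_llm(f"Classify as 'bug' or 'billing': {state['ticket']}")}


def route(state: TicketState) -> str:
    # Deterministic: plain code decides the path, not the model.
    return "billing_reply" if state["category"] == "billing" else "bug_reply"


def billing_reply(state: TicketState) -> dict:
    return {"reply": "Forwarded to the billing queue."}  # deterministic template


def bug_reply(state: TicketState) -> dict:
    # Judgement again: drafting prose is where the model earns its keep.
    return {"reply": call_llm(f"Draft a triage note for: {state['ticket']}")}


graph = StateGraph(TicketState)
graph.add_node("classify", classify)
graph.add_node("billing_reply", billing_reply)
graph.add_node("bug_reply", bug_reply)
graph.add_edge(START, "classify")
graph.add_conditional_edges("classify", route)
graph.add_edge("billing_reply", END)
graph.add_edge("bug_reply", END)
app = graph.compile()  # swap the model behind call_llm without touching the graph
```

The point of the shape: the graph, the routing, and the templates are ordinary code you own and can test; only the two call_llm nodes are non-deterministic, and they're swappable.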
Nothing stopping you from iterating with the agent till the code is the exact same quality that you yourself would write
I think that’s mostly true, but also I think there is some skill to using agents well. Specifically, work with agents to get a really good product requirements document, then task it out into very narrow user stories / vertical slices (this takes some iterating—the AI really seems to want to think in horizontal layers today), then maybe walk through the code interfaces to be super sure you are aligned. At each step, I make the agent interrogate me thoroughly with every question it can think of, and even if we stop now we will have a system design and tickets that are much higher quality than me thinking alone. I could hand those off to anyone to implement, but I think having an agent TDD their way through the code is the sweet spot.
Whenever the agent is doing something I don’t like (e.g., some coding style thing), I pause and have another agent help me write a style guide that agents must read. This slows me down at first but I think it will pay off in time.
I don't want my code quality, I want AGI code quality - that's what I was promised and jetpacks and flying cars too!
I will admit there are occasional times after iterating so much I’m not sure if I’ve even saved time because going from “it works” to “it’s up to quality” takes so long
And yeah, it usually does for me.
If you are coding by hand like the old days you are probably not literally writing everything from scratch anyway, you are copy pasting a bunch of shit off google and stackoverflow or installing open source libraries.
Wait, is this the same AWS I have been using?
I was just looking through HN search for "show HN", and I saw many fitness and calorie tracking apps.
A lot of them disappeared just after a few months of launch; a few of them survived a year, then died out as their domain name expired.
People are making things, but they are not reaching their "audience".
I created https://macrocodex.app/, launched on 16 Mar 2026, and reached 10,000+ monthly active users.
Fitness/Calorie tracking is a competitive space where there are tons of apps and services.
I could never have built such an app because I do not know how to design pages. I can talk to a designer, but from past experience it takes them a long time to understand what the market wants from a project, and companies with small budgets find it very difficult to find a good one.
Many of my projects never got shipped because I dreaded making landing pages, icons, UI, etc.
I am not saying we did a very good job with AI on the landing pages or UI at all; that's not an area of my expertise (the domain knowledge is). But given that many people find it useful, I think I've succeeded.
I've even put a ticket system in the app for support and received a few bug reports, which I resolved.
Here's the latency of my other service: https://prnt.sc/6474F4gba_he
I no longer use managed services in AWS, and my costs are very low; this enables me to offer my apps and services for free to many users.
If this was an actual paid job, I do wonder how that would change my LLM use. The reason I'm a software developer at all is because I love the craft. The act of building, of using my brain to transform ideas into code... that's what I enjoy. If it was just prompting an LLM, would I still do that job? I don't know. I'd probably start looking into the idea of switching careers, at least.
I still reject > 50% of AI suggestions, because they're too mediocre, like moving code for no reason or sometimes it is just plain wrong.
Sometimes "hard" means interesting, and you get to do really novel thinking. A load of p2p/decentralised things are hard like this.
Sometimes "hard" means you get to a particular challenge and it turns out to be a notoriously unsolved mathematical thing, or you push against subtle boundaries of core libraries, runtimes, systems, etc. Working with metagenome assemblies is this kind of hard.
Honestly, the hard code I've done made such a difference to my brain. There's plenty of trivial stuff I'm happy to have automated, but if I can't work on the hard problems I may as well not be involved at all.
Now if your career is built on writing out the same boilerplate code in its infinite slight variations every day, congrats, you've been automated. Thank god we can free up our intellects to focus on the actual hard problems, the ones that are somewhat cutting edge, the ones that actually push our field and humanity forward.
Literally every example of AI-generated code (without significant human input) is just basic stuff that is wholly unimpressive. Oh wow, you had an AI generate a Next.js app? It's writing HTML for you? It made a generic SaaS? Guess I'll become a farmer now.
Or, wait, I'll continue to write my multithreaded real-time multiplayer network for a MMO, since the AI currently generates something that would get me fired 10 seconds ago if I tried to push it to production.
It's amazing how you introduce just the slightest difficulty or novelty to an AI and it just craps the bed. And then you go online and apparently we're gonna be replaced -6 months ago or something.
People need a reality check.
You will still need to QA stuff and review PRs, but I think AI done properly can genuinely make some tasks better.
The result of that, though, would be the establishment of development patterns that are good practices.
The rule of thumb is: An agent can write it, but a human has to understand it before it gets pushed to prod.
I'm still not convinced about the doom and gloom over developers being replaced. I'm not a dev as part of my main job function, but where I do use LLMs, it has been to do things I couldn't have done before because I just didn't have time, and had to de-prioritize. You can ship more and better features. I think LLMs being tools and all, there is too much focus on how the tool should be used without considering desired and actualized results.
If you just want an app shipped with little hassle and that's it, just let Claude do most of the work and get it over with. If you have other requirements, well, that's where best practices and standards would come in the future (I hope), but for now we're all just reading random blog posts and seeing how others are faring and experimenting.
Yeah, likely
> development patterns that are good practices.
Wait, now you lost me
The article essentially claims that no, that line of thinking is false. If the agent writes all of it (or too much of it, where "too much" is still not well defined), then your ability to understand it will atrophy with time, and you will either a) never push to prod, because you can't understand it well enough, or b) push to prod anyway, and cause bugs and outages.
I think the article is correct.
> I'm still not convinced about the doom and gloom over developers being replaced.
Agreed. The agents are just not good enough to write code unsupervised, or supervised by people without senior-level skills. And frankly it's hard to imagine them getting there. Each new release of the coding tools/models is a mixed bag. Some things are better, some things are worse, and the gains are diminishing with each iteration. I am afraid that we're going to hit a ceiling at some point, at least with the transformer architecture.
> but for now we're all just reading random blog posts and seeing how others are faring and experimenting.
Yes, exactly, and many people are not faring well. The article cites several examples of people feeling less capable after using LLMs to write code for a while.
Also, let’s not forget. The developer is rarely the person pitching the feature, and is normally given the constraints and the PRD…
Soooo people can keep tiptapping on the keyboard, but eventually they need to open their mind to the possibility that “the old way” is actually dead.
"The market can stay irrational longer than you can stay solvent" is usually applied to markets, but it can be applied to software engineering as well: all the jobs can be gone even if the world is submerged in a technological crisis, with single-nine availability (and I'm talking about 9% :) ) and all accounts compromised.
It's easier for me to code now, because it's like I have a 24/7 insane intern that needs to be supervised via pair programming, but also understands most topics enough to be useful/dangerous.
Ironically, I've been spending much of my time iterating on ways to improve model reasoning and reliability, and aside from the challenge of benchmark design, I've had some pretty good success!
My fork of omp: https://github.com/cartazio/oh-punkin-pi has a bunch of my ideas layered on top. Ultimately it's just a bridge till I've finished the build of the proper 2nd-gen harness, with some other really cool stuff folded in. Not sure if there's a bizop in a hosted version of what I've got planned, but the changes I've made in my fork have made enough of a difference that I can see the difference in per-model reasoning.
My sense is that a decade from now, the people who generally see their place as the driver's seat, but recognize when it's not, are going to be writing the code that matters.
You can debate with agentic coding who is monitoring and who is flying, but if we assume the user is monitoring, what that means in practice, for me, is that I'm reading and making sure I understand all the changes the agent is proposing to make, as well as providing instruction, guidance, correction, etc. That includes reading and understanding all the code changes.
But I still want to be in touch with coding by hand and have ventured into systems programming, outside of work, which I feel AI is less useful for currently.
However, the code review study needs to compare surface scanning with reviewing long enough to cross a threshold of perspective: whether, when you assume the coder's chair and enter their frame, the brain shifts into a different cognitive mode.
Otherwise, just stamping "Looks good to me" is likely to lead to the same atrophy. There's no critical thought, even a self-summary of the change or active questioning.
Thoughtful, deliberate code review just plain takes longer. AI can help here a lot, although you still have to go through the "get into review mode" process yourself.
And they will deserve it.
Code review alone is kind of like being able to understand a foreign language enough to read it, but not really understand it in flowing conversation or being able to speak it, much less construct a complex piece of literature.
Retention also suffers, as you will quickly forget what you just reviewed. What is the last PR you remember?
> An increase in the complexity of the surrounding systems to mitigate the increased ambiguity of AI's non-determinism.
My question is why isn’t there an effort from the author to mitigate the insane things that LLMs do? For example, I set up a hexagonal design pattern for our backend. Claude Code printed out directionally ok but actually nonsensical code when I asked it to riff off the canonical example.
Then, I built linters specific to the conventions I want. For example, all hexagonal features share the same directory structure, and the port.py file has a Protocol class suffixed with “Port”.
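A convention linter like that can be a short AST walk. Here's a minimal sketch of the idea; the features/*/port.py layout and the messages are illustrative, not my actual linter:

```python
import ast
import sys
from pathlib import Path


def check_port_file(path: Path) -> list[str]:
    """Every class in a port.py must subclass Protocol and be suffixed 'Port'."""
    errors = []
    tree = ast.parse(path.read_text())
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            bases = [ast.unparse(base) for base in node.bases]
            if not any("Protocol" in base for base in bases):
                errors.append(f"{path}:{node.lineno} {node.name} must subclass Protocol")
            if not node.name.endswith("Port"):
                errors.append(f"{path}:{node.lineno} {node.name} must be suffixed 'Port'")
    return errors


if __name__ == "__main__":
    failures = [e for f in Path("features").glob("*/port.py") for e in check_port_file(f)]
    print("\n".join(failures))
    sys.exit(1 if failures else 0)
```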
That was better but there was a bunch of wheel spinning so then I built a scaffolder as part of the linter to print out templated code depending on what I want to do.
Then I was worried it was hallucinating the data, so I wrote a fixture generator that reads from our db and creates accurate fixtures for our adapters.
Since good code has never been "self-explanatory 100%, without comments," I employ BDD so the LLM can print out, in a human-readable way, what the expected logical flow is. And, for example, any disable of a custom rule I wrote requires an explanation of why as a comment.
Meanwhile, I'm collecting feedback from the agents along the way about where they get tripped up, and what can improve in the architecture so we can promote more trust in the output. Like, I only have a fixture printer because an agent called out that real data (redacted, yes) would be a better truth than any mocks I made.
Finally, code review is now less focused on the boilerplate and much more control flow in the use_case.
The stakes of having shitty code in these in-house tools are almost zero, since new rules and rule version bumps are enforced with a ratchet pattern (sketched below). Let the world fail on first pass.
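The ratchet itself can be tiny: fail CI only when the violation count rises, and tighten the recorded baseline whenever it drops. A minimal sketch (the baseline file name is made up):

```python
import json
import sys
from pathlib import Path

BASELINE = Path("lint_baseline.json")  # hypothetical committed baseline file


def ratchet(current: int) -> int:
    """Return a CI exit code: new violations fail, improvements tighten the baseline."""
    baseline = json.loads(BASELINE.read_text())["violations"] if BASELINE.exists() else current
    if current > baseline:
        print(f"violations rose {baseline} -> {current}; fix before merging")
        return 1
    if current < baseline:
        BASELINE.write_text(json.dumps({"violations": current}))
        print(f"baseline ratcheted down {baseline} -> {current}")
    return 0


if __name__ == "__main__":
    sys.exit(ratchet(int(sys.argv[1])))  # pass the linter's current violation count
```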
Anyway, it seems to me like with investment you can slap rails on your code and stay sharp along the way. I have a strong vision for what works, am able to prove it deterministically with my homespun linters, and am being challenged by the LLMs daily with new ideas to bolt on.
So I don't know; it seems like the issue comes down to choosing to mistrust instead of slapping on rails.
Edit: I wanted to ask if anyone is taking this approach or something similar, or have thought about things like writing linters for popular packages that would encourage a canonical implementation (I have seen some crazy crazy modeling with ORMs just from folks not reading the docs). HMU would love to chat youngii.jc@gmail
I have been described as a decel and a Luddite, though, so be wary of my opinions.
Re the understanding code point: you can still use LLMs to understand code. If you write the spec without knowing anything about the code, of course the architecture might suck. Maybe there is already a subsystem that you can modify and extend instead of adding a completely new one for the new feature you are adding, etc.
I use LLMs for my daily workflows and they do understand code perfectly and much more quickly than if I read it.
Why make this assumption so confidently?
The arrival of the electronic computer did not turn human computers into programmers, it simply eliminated them en masse.
Does anyone really do this? You want verification and self-correction in a loop, not rerolling and cherrypicking. The non-determinism point is really tiresome to hear over and over.
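For contrast, the loop version looks something like this. A sketch only: call_llm and apply_patch are hypothetical stand-ins for your model client and patch applier.

```python
import subprocess


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a model call."""
    raise NotImplementedError


def apply_patch(patch: str) -> None:
    """Hypothetical stand-in: write the model's diff into the working tree."""
    raise NotImplementedError


def generate_until_green(task: str, max_rounds: int = 5) -> bool:
    """Verification and self-correction in a loop: the test suite, not vibes, decides."""
    feedback = ""
    for _ in range(max_rounds):
        apply_patch(call_llm(f"Task: {task}\nPrevious test failures:\n{feedback}"))
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:
            return True  # verified green, not cherry-picked from rerolls
        feedback = result.stdout[-2000:]  # feed the failures back for the next round
    return False
```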
Yes, lots of people. It’s a whole issue.
When the problem is fixed, you'll stop hearing about it.
If you're afraid of cognitive decline - try to get to proper orchestration using multiple agents. That's a fun exercise.
Knowing some machining still lets you design parts and assemblies that are some combination of cheaper, better, etc. This is noticeable with precision or high-performance assemblies, and in how many revisions are needed.
I created a project called Ninchi to force myself to read my code and understand it. Recently I began also sharing it to see if there may be a larger need/opportunity. It's a small effort. We need to make a variety of efforts I think to encourage responsible AI usage before we end up drowning in slop.
Going against the grain here, which statistically is more likely to be right given how wrong HN was about self-driving and about AI being useless for coding. I think HNers, given that their identity is tied up with coding, are of course going to defend that identity till the bitter end, the same way artists did.
This is really validating to read. I recently was having a call with a friend where I was arguing against 100% AI usage, and I was saying, some problems the LLM just can't solve. He asked for an example, and I tried to explain a complex chart I was trying to make at a previous gig, and in the end said "well to be fair neither the AI or I could figure it out lol." He replied "how could you even code it if you didn't know exactly what you were trying to build? You're supposed to know exactly what you're building before you write a single line of code, that's what they teach you in school."
He was poking fun at the fact that I have a boot camp background and he has a uni degree. It's been ten years for both of us now, so he's running out of ways to poke fun at that difference as we even out, but this one poke brought back the old imposter syndrome, since my entire career, I've thought via coding.
When I get a ticket, I tend to jump into the codebase to figure out the context I need to know about, the current patterns, what files I'll need to worry about; and while I'm there, I tend to start writing some things, and as I do that I pull in a shared function, and in doing so just check out of curiosity where else the function is used, and in doing so discover oh, actually, we have similar functionality elsewhere, lemme just abstract this work for this ticket and the previous functionality into a shared function, and use it in both places. And so on. Before I know it, I'm looking back at the ticket checking if I've covered everything, and sending in the PR.
I've never had complaints about my productivity; in fact, I'm often lauded for it, so I think it at least hasn't been a process that slows me down long term, even if it's messier. But I had been wondering if it makes me less than a "real" engineer. I'm happy to hear others may be doing it this way too.
This is a personal thought experiment so think it through for yourself. What would the consequence be if the agents really were better than you and you acknowledged that?
The major premise of "It's a trap!" is that it matters if you lose your coding skill. (I'll gloss over general critical thinking and stick with coding for now.) However, in a world where any given task would be done to a higher level of quality, and faster, if you gave it to the agent, then what are you doing trying to do it yourself? There's plenty of room for that kind of thinking in hobbies, but in the professional world?
Maybe you can add some value in code reviews, but you may also be better off never reading the code at all. Maybe the how of coding stops mattering and the what of products needs to be your top concern.
I can tell you that the agents that I use today are much better coders than I am in the language we're using. I don't write it at all. I couldn't fizzbuzz in it. But with a small team we are building useful internal tools and features at a breakneck pace. I certainly feel the same feelings of getting dumber and losing my coding chops, but I have to step back and say, could what we've built have been built in 5x the time without agents? And the answer is probably no.
The thing I'm mastering now is conjuring software with agents. What lets them rip, what slows them down, where they are today and where they will likely be tomorrow.
I can tell you that you should re-invest in small, modular systems, because agents can build modules and greenfield projects instantly. I can tell you that there is a point at which agents fall over completely even on mid-sized projects, but that that point is receding with each new generation of model, and that Codex 5.4 XHigh Fast set to 500K context window is a beast. (5.5 has yet to win me over)
I can tell you that pushing direct to main is viable, that PRs slow down fully agentic teams, and if your agents have sufficient permissions they can fix things fast enough to be let loose even knowing they may delete your service. I wouldn't do it with your main product yet (unless you're starting your startup today) and I wouldn't try it with a large legacy project. But maybe that rewrite you've always wanted to do is here and just a prompt away.
Now, the sane among you will note that agents are not better today, that they might not ever be, and either way you should never trust a computer to make a decision because it can't suffer the consequences of its actions. Or more down to earth, there are some things that are too important to yolo.
But I will argue that a huge swath of us work in domains where if you're willing to challenge some of the basic assumptions of software development (you should understand the code, it should be maintainable by humans, it should be built to last) then you'll be able to provide very useful software much more quickly than you would otherwise be able to do. Save the skill for your hobbies, and build things people want.
The sooner programmers start thinking about modeling the domain, user mental models, architecture, and data structures, and focus less on the mechanics of writing code, the better.
Writing code is the EASY part. LLMs have basically solved the easiest part of software development. They however are bad at all the stuff I mentioned. LLMs don't have a point of view, you do as a software developer.
I think many people already recognize the problem:
- “Our ability to write code is being damaged.”
- “If our ability to write code declines, our ability to recognize good code also declines.”
But the problem is that the market no longer works without LLMs.
Freelance rates and deadlines are now calibrated around LLM-assisted output. Even clients who write “do not vibe code” often set deadlines that are impossible to meet unless you use something like vibe coding. The client’s expectations themselves are becoming abnormal.
That is the irony of the market.
I honestly do not know what to do.
Recent Hacker News discussions are mostly a negative echo chamber about AI use. In other places, it is often the opposite: only positive echo. But almost nobody discusses the actual solution.
The main topics I keep seeing are roughly these:
1. Is the large-repository PR system failing a fundamental stress test? Or should AI-generated code simply not be merged? If PR review is moving from handmade production to mass production, how should the PR system change? Or should it remain the same?
2. As vendor lock-in continues, can we move toward local LLMs to escape it? Are cost and harness design manageable? What level of local model is required to reach a similar coding speed?
3. If we are forced to use agentic coding, how do we avoid damaging our own ability to code? There is a passage from Christopher Alexander that I keep thinking about:
“A whole academic field has grown up around the idea of ‘design methods’—and I have been hailed as one of the leading exponents of these so-called design methods. I am very sorry that this has happened, and want to state, publicly, that I reject the whole idea of design methods as a subject of study, since I think it is absurd to separate the study of designing from the practice of design. In fact, people who study design methods without also practicing design are almost always frustrated designers who have no sap in them, who have lost, or never had, the urge to shape things.” — Christopher Alexander, 1971
This quote feels relevant to programming now. If we separate the study and supervision of programming from the actual practice of making, something important may be lost.
In architecture, there is this idea that without practice, the architect loses meaning. But now the market is forcing the separation.
People with enough symbolic capital and high status have the freedom not to use AI. But people lower in the market are under pressure to use it.
So I think the discussion now needs to move beyond whether AI coding is good or bad.
The real question is: how do we keep using AI, because the market demands it, while still preserving the human practice that makes programming meaningful and keeps our judgment alive?
I think these are the important questions. How do you maintain market value without using AI?
Or, if you do use AI, how do you avoid being treated as low-quality?
If you do not use AI, how can you remain more competitive than people who do use it?
If you do use AI, what advantage do you have over people who do not use it, and how should you position yourself?
I know that agentic coding can cause skill degradation. I can feel it happening to me already. But for someone like me, who does not have strong status, credentials, or symbolic capital, social and market pressure makes AI almost unavoidable.
What frustrates me is that I do not see practical answers anywhere.
Stop using AI for coding. Period... there is no other solution. You can't make it work; nobody else can either. Without determinism, the entire process is useless. We need to stop pretending that this isn't true. We have given it a chance, it failed, and it's time to move on to something else, no matter how much the VCs and execs don't want to. Those who do move on have a chance; the others have no future in software.
The market realigns, and unless you handwrite the highest possible quality at a quick pace, you won't be competitive with the vibe-coders who can fix a hundred issues a month.
It was the same with GPS-assisted driving: now most people can't orient themselves autonomously. Worse, there are no road signs with directions installed, meaning that you are stuck using the GPS.
That's exactly what I do. I know I am lucky to be gifted in this skillset. But that's not a good reason to excuse people destroying the market for everyone.
So while I agree with your point, it does not feel like a practical answer for my situation. For someone who is already well known and has enough reputation, refusing to use AI may be a matter of principle. But I am dealing with survival.
I do not think your answer is bad. But because this is a survival problem, it is difficult for me to risk everything on principle.
In other words, I know that your answer may be the morally correct one. If everyone boycotted this, perhaps it would not be adopted so aggressively.
But I cannot do that.
What I need is a way to use AI while degrading my own ability as little as possible, and while still preserving my skills.
I am not saying you are wrong. I am saying that your answer is too idealistic for someone in my position.
It won't do everything exactly the way you would've coded it but I find this model much better at setting and maintaining "guardrails" for your codebase so you don't find yourself wondering how it all fits together.
It's quite different.
Seems safebox went after a subset.
Agentic agile > agentic waterfall (at least for now)
Don't give the AI a spec, work with it every step of the way.
> pulls the slot machine lever over and over (link to "One More Prompt: The Dopamine Trap of Agentic Coding")
I'm sure the first cave-person to discover how to make fire was equally "addicted" to making fires. That doesn't really say anything about the underlying technology.
> An increase in the complexity of the surrounding systems to mitigate the increased ambiguity of AI's non-determinism
I don't know what this means, exactly. Anyone have any ideas?
> Atrophying skills for a wide swath of the population
This is very real and something we're going to have to contend with. Software can't really become less complex, and there's a minimum amount of knowledge you need, with or without AIs there to help you. We may need specialized training academies for developers where they spend a few years without AI to learn to program, and then are given a few years of AI programming.
> Vendor lock-in for individuals and entire teams
This isn't really a big problem; you can always switch AI providers if there's frequent downtime.
> only a skilled developer who's thinking critically, and comfortable operating at the architectural level, can spot issues in the thousands of lines of generated code, before they become a problem
Agreed...
> Yet, in an ironic twist of fate, it's the individual's critical thinking skills and cognitive clarity that AI tooling has now been proven to impact negatively.
...well, yes and no. AI tooling can help you _reduce_ cognitive debt. Picture this: There is one senior developer (Person A) on the team who understands Service X. Your other developers could schedule time with Person A to get an understanding. Or, they could ask the AI to analyze the project and explain it to them. This scales much better, and if Person A is a poor communicator (let's face it, many senior engineers are), it might be the only working option.