
Discussion (93 Comments)
And it is great. It produces fixes, produces a facsimile of understanding. It answers my questions, and is often right. And tinkering with the process is satisfying. Integrating more and more data, writing better specs, you can get better results. It's tempting to think that it could be sustainable, this way of working, but also so scary to lose the understanding, to not have confidence in how things work. Finding duplicated stacks using different libraries, or even the same library, is becoming more and more common. Even our debugging tools, our tracing, grow fragmented and unstandardized.
I liked the old way of working. It was fun for me, if often frustrating. It was like solving a hard sudoku on the train. This new way is lower friction, but more stress. It's steering a rocket ship while using chopsticks to hold the wheel. You desperately want to slow things down and work methodically, to be sure, and safe. But you won't get anywhere near as far if you do that.
Somewhere quiet, the tech debt demon smiles.
Same - literally found a rebuild of a library feature for use with that same library the other day (e.g. MyCustomFooProviderFor(Bar), where Bar already literally has a `.foo` method). No, it didn't need to be there.
How long have you been doing this?
Are you at a product company, a consultancy, a place where technology is an enabler but not core, or somewhere else?
What happens when there are bugs or an outage due to that 3k LoC PR?
We're at a product company, not a consultancy. Hard to say exactly about tech; the tech essentially is the product, but it's B2B, so massive contracts move like glaciers, and customer purchase decisions are often as much or more about the claims we made as the reality of the code.
As for outages, it's the same as it always was. We have our testing, in layers: unit tests, integration tests, e2e, staging envs. Layers and layers before it reaches the customer. If something ever does reach there, as has happened, it's so hard to pin the blame on AI, and of course we run a blameless culture here anyhow. Tickets are assigned, emergency patches are made, and the behemoth lumbers on. I don't think AI makes our defences much better though. We catch the things we always caught, and miss those we always missed, in greater and greater volume.
I don't pin blame on stupid management or whatever; I think this is complacency rather than a specific effort to push AI, as some claim. AI has just made it easier to work on more and understand less, and this is the result, no external intervention needed. I don't have a solution, other than observing that trying to stop this is fighting the tides. People used to hate working on legacy codebases, where the original developers weren't around to explain themselves; now everything is a legacy codebase right from inception. Even if you personally don't use AI, the job is fundamentally different.
This has not been my experience. Sure it feels like more work to fix the AI code problems sometimes - it is a different skillset than writing code from scratch. But the speed that I can deliver software has significantly increased by using coding agents.
>But the speed that I...
I agree with the parent; I'm able to produce more. And with proper documentation and unit tests in place, I don't feel I need to review every line.
But the fact is this is not how it is. Every competent developer I know is delivering significantly more after being AI enabled.
Anyone seriously using the tools without a chip on their shoulder is going to say the same.
Are the tools delivering perfect code 100% of the time, no, of course not. But that's the new skill. Guiding them so they deliver good enough code at 5-50x the velocity. As the models improve and the ecosystem tries out new workflows, the skill changes and the output gets better and better.
What we're capable of delivering now is incredible and would have been unimaginable just a few years ago.
https://june.kim/speedrunning-open-source
> tinygrad I picked on purpose. geohot narrates rejections in public, and a narrated rejection is data; a silent close is noise. Thirteen PRs, one merged, twelve closed. His comments tell the escalation story:
>> be careful with AI usage, we never trade complexity for speed
>> You need to stop with AI PRs, you will be banned.
>> Last warning about low quality PRs before I ban you from our GitHub.
>> I don’t even understand what this does. I’m not reading anything written by AI
> Each line a little more done with my shit than the last.
> Some of those PRs had real bugs with real fixes. The MATVEC pattern rejected equal-range elementwise reduces, a genuine correctness issue. But by that point the maintainer had stopped reading code and started reading provenance. “We never trade complexity for speed” is a valid engineering principle. “I’m not reading anything written by AI” is not.
> I went there for maximum surprise and got it. He had a review queue and a quality bar to protect; I had a clanker and a question. The price was his afternoon, three warnings, an account ban, and real bugs left unfixed.
Because this is Facebook-level "let's make people angry on the internet and see what happens" levels of treating people as if they were means to an end rather than an end in themselves. And you should stop.
Now, if you mean generating some one-off script or playing around with a prototype in an area you don't know, then I can see more like 5-10x, but these are typically not the bottleneck for shipping software.
Biggest problem there isn't delivering the code, it's coordinating. Old problems are new again.
When every developer can now deliver 10,000 line changes in an hour, you have to be very tactful about how people carve up the code base to work in it.
We can't even decide if type systems have made us more productive. It's barely been studied. Same with test-driven development.
What it sounds like we'll see, from your description of AI-enabled developers, is a commensurate (perhaps linear) increase in the rate of errors reaching production systems. Every line of code is a liability. Now everyone has a fire-hose they can aim at a production environment.
At least time and effort prevented some bad ideas and potentially bad code from reaching production.
I'm sure the platforms providing these tools are going to be happy with the results when every business writing code this way becomes dependent on them and has no exit strategy. The prices increase, the service gets worse, and you're locked in. Sounds real productive.
We have been using Copilot for a year or two but it's not required. Any developer who asks for a license gets one. So far I haven't seen anyone get to the point of prompting it to write entire features at a time.
A huge problem with this is whether anyone can take accountability for the code at the rate it's being produced.
Of course you can let AI do reviews, but my experience so far is that it's, broadly speaking, not working.
:blinks: You are producing in a week what used to take you a year?
But there are definitely many tasks that used to take a very long time that now take almost no time at all, and that can be delivered in parallel with other tasks.
The unfortunate fact is that your boss or your customers never cared what your code looked like. They just cared that it worked bug free.
The craft will live on, no doubt, but the fact is that we're in the age of industrial programming.
Spending too much time twiddling line spacing, abstraction names, and dialing everything in just so is now for fun and not for profit.
Although to be honest, AI enables you to do that at scale too. It's never been easier to rename or refactor tens of thousands of lines to your heart's content. Even twiddling is accelerated.
> What we're capable of delivering now is incredible and would have been unimaginable just a few years ago
What I mean is - are there concrete examples, real world "things" that came from AI programming, that are incredible, and someone can talk about and point to how AI led to the thing being possible?
Over the next few years, every piece of software everywhere will be in part AI written.
There's not going to be anything to point to because it's everything.
We've had large applications released by big companies before AI.
Windows 11 existed before Microsoft started relying on AI to contribute to the codebase. What incredible things have been added to Windows 11 now that Microsoft is using AI to write it?
Maybe I am not a "competent" developer, but the point has some merit.
Even if I'm reviewing more, I built the feature without even opening my editor.
My workflow is:
1. Plan mode
2. Read thoroughly, or skim if it's an easy task
3. /draft command that puts a draft PR on GitHub
4. Review closely, then send to team
With AI I can build. I'm having so much fun turning ideas into code. I can do a week's worth of work before lunch. I can ask AI to add comments so detailed that my code becomes a refresher tutorial.
It's so exciting to be able to bring my ideas to life, make use of my experience, and not be hobbled by my somewhat atrophied hands-on coding skills. I for one welcome this revolution.
I'm always seeing these "I can finally make my projects and slack off at work!" comments, but I just can't help feeling like people aren't thinking about what comes next.
1. The software is simple because lowly humans wrote them and debugged them and maintained them.
2. The humans are competent in software engineering.
3. All of a sudden we now have help from AI.
Point 3. is here to stay, but 1. and 2. could disappear.
Probably because they mandate its adoption. And while there are plenty of developers who will happily comply and see it as a good thing, there are others who will do it because they have to, or risk losing their jobs.
It's a bit of a silly thing to claim. "We made everyone use it, so they did, and now adoption is going up!"
But now, with all the vibe coders and agentic coding, I've pretty much lost a lot of the interest. I sometimes receive PRs of thousands of lines of code to review where it's clear they came from some AI and were never even tested to begin with. Why should I, as the reviewer, do that for you? If you want to use AI, at least make sure that it works as it's supposed to, since I'll already have to go through all the code that you didn't write, and likely didn't even read yourself.
Then similarly, when I have to build something I sometimes use AI, but it's like cheating, and it's reducing my coding ability; I can already feel that. But in the end I think that's just what the business wants, so I use it, and I start to care less about the output, the quality, the whole architecture. Thanks to AI I'm putting in less effort; I let AI work for me while I do other things. Maybe that's the way.
"You can outsource your thinking, but not your understanding."
There's just no way not to generate much more code with LLMs than we would as humans, so structuring code well is more important than ever before.
The skill is in making the LLMs reliably generate useful and pertinent streams of tokens. That takes work, reading the output, intuition, experience, rigor, real commitment to doing good work, not fall prey to being lazy, etc.
And I used to love my work :(
You can also use it for regurgitating manuals, but generative AI for coding is counterproductive. Only tool- and gaming-addicted people like it and pretend to be more productive, for which there is no public evidence. I don't see any software improving at any faster rate.
I'm curious, which models have you used? I've been using Shmopus 69 and it's out of this world; it's so good that I don't understand how people existed before it.
Today I learned about a more elegant helper method in Apache Commons' StringUtils library for Java.
The function was `trimToNull()`
Normally I would have just done
Now I can just do `responseDTO.setFoo(trimToNull(foo));`. I had written the original code; Claude suggested the improvement.
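For context, `StringUtils.trimToNull()` trims surrounding whitespace and collapses blank or null input to `null`. Here's a minimal Java sketch of those semantics (a reimplementation for illustration, not the Commons Lang source; the `responseDTO` setter above is the commenter's own code):

```java
public class TrimToNullDemo {
    // Same behavior as Commons Lang's StringUtils.trimToNull:
    // trim surrounding whitespace; return null when the input is
    // null or the trimmed result is empty.
    static String trimToNull(String s) {
        if (s == null) return null;
        String t = s.trim();
        return t.isEmpty() ? null : t;
    }

    public static void main(String[] args) {
        System.out.println(trimToNull("  hello  ")); // hello
        System.out.println(trimToNull("   "));       // null
        System.out.println(trimToNull(null));        // null
    }
}
```

The appeal is that it replaces the usual multi-line null/blank check at every call site with a single expression.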
I enjoy shipping code and reviewing what Claude writes.
To add to that, what I find most helpful is the boring stuff, the JIRA cleanup, trawling Wikis and other sources to find out what the historical context of something was.
Normally that would take me all day to do, with a 30 minute code change.
Now I can do that in about 15 minutes and think about building or shipping some tool which I never had time to do.
Just yesterday I was interviewing for a very interesting job and I completely flunked the coding question in an unacceptable way for my level of experience. The question was easy, I just couldn't get past some syntactic issues. For 8 months, Claude wrote all of my Python classes and Pydantic types. Now I had to write a dataclass, and because I always just resorted to standard classes before the advent of LLMs, I stumbled. And froze. And panicked. And that was it. Of course you could say I should have just scrapped the dataclass and written it as a simple class. The point is I felt very, very stupid. LLMs suddenly felt like a huge disadvantage.
All this to say I disagree with LLMs "rotting" my brain. Quite the opposite, I know that it's possible to use LLMs to be efficient and correct. It's more the actual mechanical act of writing that gets rusty.
I got into software engineering because I was always fascinated by getting computers to do stuff, and I really enjoyed the manual task of programming. It's been a dream to earn a living doing something I would do in my spare time. I was pretty good at it too.
I'm not having fun any more, so I've decided to leave the field and become a teacher. I won't earn nearly as much money but I expect to feel more fulfilled, and I hope I can help make a difference to some young people.
I've had an extraordinarily privileged career, and many people never get the luxury of enjoying their work at all. But I'd rather try to enjoy what I do day to day than persist in something that's lost its spark.
Huh? What did you mean by that?
https://archive.is/2vjJm
You’ll do the work of 10 people and be happy, now you’re all 10x developers for 1x pay, rejoice!
It seems like they're overgeneralizing quite a bit here and focusing on a narrow subset of the population while ignoring the people who are actually thriving with their new AI-enabled dev workflows.
LLMs are not a panacea by any means and they have lots of cons. But I for one would find it difficult to go back to a world where I can't lean on LLMs in my day-to-day.
One very specific example that could not possibly contribute to the brainrot mentioned in this article: AI saves time and reduces the headache of having to pore through pages of documentation (if there even is any) to find how that one method works or what arguments it can take. This alone is immensely helpful and can keep you in a state of flow instead of sending you off on a potentially fruitless side quest that derails your whole train of thought.
It's also taken me quite a bit of time, effort, and experimentation to find the right tools and the right ways to work AI into my workflows which I would bet that the developers mentioned in this article have not explored too deeply if at all.
Claiming AI is rotting your brain because you can't one-shot an entire app or even a single feature is a straw man fallacy.
Thousands of lines and hours of wasted time, and this was the lucky path, because a DIFFERENT human happened to be in the loop and asked the right question.
This isn't a general claim about what ai does or doesn't do, but it is a real life anecdote about a very well paid professional.
Someone posted a great quote above that you can outsource your thinking but you can't outsource your understanding.
Sounds like it was the AI that decided this and the engineer didn't bother questioning it which I'd classify as using AI incorrectly. AI is a smart intern, not a smart engineer.
There are always pros and cons, but this article sounds like "it's only cons!"
It is difficult for me to read this and believe your brain isn't rotted
If you've become reliant on it, then your skills have atrophied. Your brain has rotted.
To your point, my documentation reading skills have certainly atrophied.
I'm not coding as much so my coding skill has likely atrophied to an extent.
It does take intentional effort to counteract this, which is why I will force myself to write code by hand still. Or why I carefully parse PR diffs and will not approve it unless I can explain what it's doing and why it's doing it that way.
I was careful to explain that there are certain points in my workflow that I leverage AI to great benefit. There are plenty of points that I do not trust it and must be the HITL, generally around exercising judgment or course correcting when the agent has gone off the rails.
I can understand why you would jump to such an assumption though. There is nuance to everything.
> Should we no longer drive or take public transit anywhere
Yes? It's bad for the environment, and living closer to the things you do is better for everyone. But beyond that, this has absolutely nothing to do with being a skilled professional. Is walking or running your job?
Yeah there’s nuance to everything.
Yes, obviously.
Have you heard of the obesity crisis?
https://www.who.int/news-room/fact-sheets/detail/obesity-and...
Yes, our diets play a big role here too, but our sedentary lifestyles, which includes driving everywhere or taking transit, surely is a factor.
So.. yes?
Experienced mental pains I never felt with any other activity, except watching TikTok reels for hours.
Got into points of no return on numerous side projects, AI slop that neither the AI nor I could touch.
I've developed a better mental loop. I simply review every line of code it spits out, and refine the loop to get less code produced. But I always demand the full file again.
I commit each change. And inspect the diff for review.
I don't feel the drain or the pain.
LLMs still aren't standalone developers, but they can be tamed to execute well on a well-defined scope, if we review what they do, every time.
I have also worked in customer support for some time, and I have found that a huge problem for some people (oftentimes developers) is that they lack theory of mind. They literally can't comprehend that I can't see into their heads, and that they need to articulate their question with the correct context, otherwise I can't help them.
AI is like a litmus test for it. People who have theory of mind are capable of putting together a question that will get good results out of an AI. On the other hand, people who struggle with the fact that AI can't see what you mean unless it's in the context window will have a bad time with it. These people also usually suck at managing other people because, once again, they are unable to provide tasks with enough context and properly set boundaries. At best they will give you some vague, poorly defined task and get mad when you do it differently than they had it in their mind.