
Discussion (175 Comments)

Rooster61 (about 2 hours ago)
I can't relate that much to this. Every time I use AI to write code, I'm constantly fighting a feeling on the back of my neck that I need to look over everything it has done and supplement/alter it with my own code. That ick feeling counteracts the dopamine hit of having a working app after a few minutes of vibe coding, and I don't think that's going anywhere anytime soon.

That said, I have experience. I could absolutely see myself falling into this as a junior or even mid level dev. I'd no doubt not feel that feeling on my neck if it wasn't scarred from code review lashings early in my career by knowledgeable mentors.

ryandrake (about 2 hours ago)
In my experience, Claude only knows how to spew code. Every problem you want it to solve, it translates into "more code" rather than "less code". You have to very closely code review everything it does, otherwise your codebase is going to just grow and grow, and asymptotically approach 100% debt.

I code review everything that Claude produces, and I'd estimate about 90-95% of the time, my reaction is WOW it works but too much code dude, let's take 3 hours to handhold you through simplifying it until nothing more can be removed.

ddesotto (19 minutes ago)
I think this is more a byproduct of the way these models are architected. “One more token” is usually much more likely than a “STOP”. Knowing when to stop and doing more with less is also very hard for human developers.

For me, what throws me off most of the time is the structure at the mid-level. It usually makes sense at the line level and maybe the project level, but at the file and folder level it loses track of what it already has and what it doesn't need to be so verbose about.

notarobot123 (28 minutes ago)
At this point, it's worth asking whether lots of relatively straightforward verbose code is actually significantly worse than the least code necessary for the problem. Obviously, architecture matters. What might matter less is verbosity.

The reason we aimed for minimal "accidental complexity" up to now was directly related to the cost/pain of changing and maintaining that code. Hasn't the economics of maintenance and change shifted so much that accidental complexity isn't actually all that expensive/painful?

I think a bit of refactoring, renaming and restructuring has been helpful for maintainability but recently I've been a little less inclined to worry about the easy readability of function bodies and fine implementation details. It still feels wrong but I can't justify the effort anymore.

joebates (about 1 hour ago)
Same. Luckily I enjoy the process of refactoring and deleting code is nearly arousing, so I get the initial dopamine rush of wow this works, followed by the dopamine rush of "wow now this is cleaner and works so much better". Keeps me in touch with the codebase too.
pixelready (43 minutes ago)
Pruning code is to software engineers what cancelling plans is to introverts :)

I think I need to work up a Claude skill named marie-kondo, so that when it breathlessly presents its triumphant solution, I can go “yes, but does it spark joy?” and have it go into an aggressive refactor loop with me.

suzzer99 (41 minutes ago)
I question any dev who doesn't get aroused by deleting code.
runeb (18 minutes ago)
A particularly pronounced version of this can often be seen by letting 2 agents review and code in a loop. One agent will find some problems with the code, the other agent will address the review by adding more code.

A good human developer might see that the better way to address the review is to backtrack and pick a different approach. The ai agents seem more prone to getting stuck down bad branches of the decision tree.

tailscaler2026 (about 1 hour ago)
Of course it writes a lot of code. It gets paid per token. Every additional line of technical debt is guaranteed future income.
layer8 (18 minutes ago)
At some point they’ll introduce “deletion” tokens that cost ten times the regular token price. ;)
HoldOnAMinute (about 1 hour ago)
Periodically you can also ask it to review the recent changes and see if there is a risk-free way to streamline them.

You can also tell it to periodically summarize the "lessons learned" from the recent session(s)

embedding-shape (about 1 hour ago)
Then local models shouldn't suffer from the same problems, but they do. I'd say they just aren't trained in the direction of "less code == better long-term maintainability", rather than it being some grand "increased token usage" conspiracy.

You can certainly steer them a bit to reduce the issue parent talks about, but they still go into that direction whenever they can, adding stuff on top of stuff, piling hacks/shim on top of other hacks/shims, just like many human developers :)

enraged_camel (18 minutes ago)
>> Of course it writes a lot of code. It gets paid per token.

I don't buy it. I think a much more likely reason it leans towards adding code is because deleting code carries inherent risk: it can break things in major ways or minor ways or very visibly or invisibly. Adding new code, on the other hand, is a lot safer: the only parts that can break are those the AI touched inside its own working context. So it doesn't have to go down rabbit holes and potentially create bigger and bigger messes.

HoldOnAMinute (about 1 hour ago)
Here's what I do

Tell it "Do not change any files yet, just listen." Then we discuss the problem. Then I have it write its understanding of the change to a file.

I review that carefully. Then I let it implement. I approve each change after manually looking at it. I already know what it should be doing.

Make smaller changes and check each one carefully before and after.
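A minimal sketch of that gate, plan first, implement only after explicit approval per change. The `Agent` interface and method names here are hypothetical stand-ins for the real chat session; only the shape of the loop is the point:

```java
import java.util.List;
import java.util.function.Predicate;

public class PlanFirst {
    // Hypothetical stand-in for the coding agent; the real tool is a chat
    // session, but the workflow has the same shape.
    interface Agent {
        String writeUnderstanding(String problem);    // "Do not change files yet"
        List<String> proposeChanges(String approvedPlan);
        void apply(String change);                    // one small change at a time
    }

    // The workflow from the comment: discuss, review the written plan,
    // then approve each change individually before it is applied.
    static int run(Agent agent, String problem, Predicate<String> humanApproves) {
        String plan = agent.writeUnderstanding(problem);
        if (!humanApproves.test(plan)) return 0;      // reject the plan outright
        int applied = 0;
        for (String change : agent.proposeChanges(plan)) {
            if (humanApproves.test(change)) {         // manual look at each change
                agent.apply(change);
                applied++;
            }
        }
        return applied;
    }
}
```

The value is in the two review points: nothing is written until the plan is approved, and nothing lands until each individual change is approved.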

dvfjsdhgfv (39 minutes ago)
This is a reasonable approach but has nothing to do with what is being pushed on us from all sides.
wccrawford (about 1 hour ago)
I haven't used Claude, just Sweep, Copilot and whatever Jetbrains has. But they've definitely deleted code, not just added it. I know, because they have deleted code that I definitely still needed, and I had to reject those changes and start over on the prompt.
operatingthetan (about 1 hour ago)
A lot of people seem to think if you give the agent a framework and clear plans that it spews "good" code. I doubt it though.
embedding-shape (about 2 hours ago)
> after a few minutes of vibe coding

Don't vibe-code. It's a joke someone coined in the moment that somehow the industry decided shouldn't be a joke; some people think it's a feasible way of developing stuff, and it's not.

Find a better way of working together with the agent, where a human reviews what's important to review and you "outsource" the rest, and you'll end up with code and a design that works the way you'd program it yourself; you just get there faster. I probably end up reviewing maybe 90% of the code that the agent writes, but it's still a hell of a lot more pleasant writing/dictating a few prompts than typing tens of thousands of characters and constantly moving between files. Maybe I'm just tired of typing...

Xmd5a (about 1 hour ago)
I've been thinking of using Kiczales's Systematic Program Design [0]. Write the skeleton. Let the AI fill in the blanks.

[0] https://news.ycombinator.com/item?id=16563160
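That skeleton-plus-blanks recipe might look like this minimal Java sketch. `WordStats` and `countLongWords` are made-up names; the human fixes the signature, types, and contract, and the marked body is the only part the agent would write:

```java
import java.util.List;

public class WordStats {
    // Human-written skeleton: the signature and contract are fixed up front
    // ("count the words strictly longer than min characters").
    static long countLongWords(List<String> words, int min) {
        // --- blank for the AI to fill in ---
        return words.stream()
                .filter(w -> w.length() > min)
                .count();
        // --- end blank ---
    }

    public static void main(String[] args) {
        // "recipe" and "design" are longer than 3 characters; "to" is not.
        System.out.println(countLongWords(List.of("to", "recipe", "design"), 3)); // 2
    }
}
```

Because the types and contract are pinned down by the skeleton, reviewing the filled-in blank is a much smaller job than reviewing freeform generated code.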

wahnfrieden (about 2 hours ago)
There are tasks where it is appropriate to vibe code
embedding-shape (about 1 hour ago)
Agreed, whenever you're 99% sure you'll throw away the code afterwards.
stolen_biscuit (about 2 hours ago)
Fully agree. I supplement my game development with AI. Anything novel or interesting I want to do, I need to write the code for myself, otherwise I'm in for a world of hurt. But for the drudgery work that is necessary to invest a lot of time in but boring to actually write, I design a clear architecture and ask AI to do the implementation leg-work. And still you have to go back over and make sure it didn't decide to just create something outlandish. A good recent example is Codex trying to recreate from scratch the behaviour already provided by Area2D in a game I'm making with Godot.

If you try and get AI to do anything meaningful, it will be riddled with footguns and bizarre choices. Maybe if you have hundreds of dollars worth of tokens that might not be the case - but for someone who spends $10 a month, it's just not worth the headache.

Besides, for me these are hobby projects and writing code is still fun, I just make AI write the boring parts (good examples: saving and loading, parsing of data files and settings menu functionality) - but I keep it away from anything that needs a humans judgement to create.

svachalek (about 1 hour ago)
I'm a very senior dev (32 years exp) but I've got the process nailed down tight enough with .md documents, skills, review agents, etc, that I don't typically have that feeling or any need to do anything extra.

I don't think this makes me dumb though, I've just moved up stack. Rather than caring about assembly language or source code, I'm focused on requirements, architectural decisions, engineering process, and ever more automation.

guelo (about 1 hour ago)
Every engineer to manager has the same thought but after a few years they can barely code.
steezeburger (about 2 hours ago)
Experience is so so valuable right now. We can guide these agents super well, but I do fear for the juniors as you said. I would like to think I'd use the agents to dive deeper and learn faster. It was pretty rough piecing together solutions from Stack Overflow, various irc channels, Reddit, etc. But also, I cheated on my homework in college and didn't really review the answers, so not sure. Though I pursued programming out of interest and not just to complete a degree. Maybe it would have been different. In any case, I'm glad I came into the LLM era with a lot of experience and failures already.
sarreph (about 1 hour ago)
I think this is one of the key takes right now. I too have similar experience.

Which way is it going to go?

i) “Seniors” also get superseded by even more capable models that can do all of the things which currently require experience.

ii) Linguistics become the new higher order abstraction (English is the new high-level programming language) _but_ there are different / orthogonal ways of approaching software development than the way we do things now — which “juniors” become more adept at more quickly.

bigstrat2003 (30 minutes ago)
There's also iii) people realize that if the LLM needs that much babysitting, it doesn't actually add value. So they don't use it very much because it is too limited as a tool.
shigawire (about 1 hour ago)
I don't think "cheating" is the right way to frame it.

A junior has managers pushing them to do more, faster. You review the code but do you really understand it the same as if you struggled through it? Do you ever build the muscle memory of what works and what doesn't?

It is the thought process that builds skills. I've seen some projects trying to be deliberate about learning from the agent as it writes the code - but I'm not sure there is a substitute for struggling and learning by doing.

nomel (about 1 hour ago)
> Experience is so so valuable right now.

And probably the least valued it has ever been.

svachalek (about 1 hour ago)
When the chainsaw fails the juniors, they're going to be adding wood chippers and stump grinders. The seniors are going to be out there chipping artisanal wood blocks with a hatchet. You don't need a lot of history to see who you really need to be worried about.
whattheheckheck (33 minutes ago)
It's not the internet that needs convincing, it's the ones writing the checks.
hparadiz (about 2 hours ago)
Metrics, profilers, architecture! Use AI to get back to basics! Wanna prove a technique is better? Use AI to make a benchmark! Learn by experimentation! That is my advice to juniors. At the end of the day AI is writing code and there may be 10 different ways to run something. Only one is the fastest in any given use case.
steezeburger (about 2 hours ago)
Yeah I totally agree! I also think people should still be reading as much code as they can. That's always been true imo. It is just hard to keep up with it now because of how much code an LLM can generate for $20/month. I do think we'll move to higher abstractions of course. We won't have to understand code as much as how the systems and components are architected. It would also be nice to use our new efficiency to return to producing truly optimized and fast software.
chowells (about 2 hours ago)
Fastest is usually the wrong metric. But you'd need experience to know that, I suppose...
cedws (29 minutes ago)
The code that LLMs produce is just average IMO. I wouldn’t call myself an authority on clean code but I can tell when code is well structured. I prefer my hand written code over Claude or GPT’s every time. I once did an experiment where I generated a spec from a project I’d already written, then had an LLM blindly reimplement it from the spec, and compared code. The LLM’s version looked like vomit.
therealdrag0 (7 minutes ago)
Agreed, though in some cases average code is good enough, especially when refactoring it takes just a little attention and more tokens.
davnicwil (35 minutes ago)
I think this instinct is intrinsic, and comes from really caring about detail and wanting to fully understand it and own it.

That's what drives it, and I don't really think the extrinsic things about the way you learned (while helpful) have that much bearing on it. It comes from you and you should take credit for it.

I think if you were learning today you'd probably have the same feeling and do just fine because of it.

movpasd (about 1 hour ago)
I feel the same way, and yet I would still say I feel AI usage atrophying my thinking skills. I feel less tempted to use it to shortcut whole files, but even just using it to speed up looking up and carefully reading docs, tinkering with a library to understand it when docs are inadequate, working out the tradeoffs for design decisions... These sound less objectionable and more like simple speedups, but when I _do_ need to do it (because the agent refuses to do it properly) I can feel the friction so much more keenly. Whether that's just me losing the habit of those specific tasks, or a generalised loss of g-factor, I don't know.
dclowd9901 (about 1 hour ago)
I've been using it mostly to bat away yak shaving rabbit holes one can get into when working on a large and complex project. I work mostly on platform work, which is generally nebulous in its feedback loop and testing. Relegating AI to refactoring and building tools to help me research keeps me focused on solving the actual main problem I'm trying to solve, reduces context switching. I really don't understand people who use it to bat out their main focus. I simply don't trust it at that level.
gchamonlive (about 1 hour ago)
> I'm constantly fighting a feeling on the back of my neck that I need to look over everything it has done and supplement/alter it with my own code

Can relate, but the only thing I do differently is teach the AI how to clean up after itself in followup prompts, sessions, and by refining AGENTS.md. Static code-quality analysis tools are also really good for keeping the agent on its toes.

zackify (about 2 hours ago)
I agree with your sentiment. I've been trying to get from plan -> complete with AI and it's been working very well in a sandbox.

I am trying super hard to give the tools to validate everything to AI.

I finish by opening a draft PR and then I go through doing a deep review myself.

If I didn't already have 10+ years experience, it would be hard to learn and not atrophy with easy shortcuts being so available.

You still need people who know stuff in detail and can own the code... for now

SarikayaKomzin (about 2 hours ago)
I have the same feeling on the back of my neck. I think it’s born from my crippling imposter syndrome, which is maybe a super power now.
randusername (about 1 hour ago)
I really enjoy having the AI write the spec then I write the code.

Reviewing code is a pain; reviewing requirements and giving feedback feels more productive. I have to confront the full shape of the problem, and I usually discover a few cans of worms that make me rethink my approach.

dualvariable (about 1 hour ago)
Yeah, I'll talk out design with AI in a brainstorming session.

Then I'll usually go and implement at least one piece of that. If I get stuck, I'll ask for some help. Then, once I'm happy with it, I'll ask the AI to review what I came up with. Then typically ask it to stamp the pattern around the codebase. And often to just iterate through writing out unit tests.

So I just did this for getting dense output from interpolants for an ODE integrator that I maintain. I did the work to make Tsit5 work by hand. I asked AI to stamp out the same pattern for DP5 and BS3, because it was just gene splicing those changes into a very similar RK integrator. I can review the diffs and see that it faithfully stamped out the same pattern with two prompts and a couple of minutes.

I'm still maintaining pretty strong contact with the codebase by doing a lot of my own programming, and fighting with the design while I'm writing that first piece of it, but then I use the AI to stamp out the mindlessly repetitive stuff.

That just seemed like the obvious way to me to go about programming with AI rather than pure-vibecoding and never touching anything other than prompts.

Also, you probably run out of tokens a lot faster if you're pure-vibecoding.

Plus you should spend some time debugging your own code. Even if AI could find and fix a bug in a minute or three that would take you 20 minutes, it is generally going to be better for you to burn that 20 minutes on trying to fix it before asking for help.

Of course, unlike another poster in this comment thread, I never cheated in college and spent a lot of time on "academic" side projects that weren't part of any course I was taking.

Once the vibecoders and cheats are done spamming a billion lines of AI generated code into industry, there's probably going to be positions for people who can (with AI assistance) sort out the mess and get production stable again.

aerodexis (about 1 hour ago)
Reading and writing are related, but separate activities. One's capability to write code can degrade independently of one's ability to review it.
collingreen (about 1 hour ago)
Learning from code review lashings is amazing in its effectiveness and minimal blast radius! I'm glad you were able to take that in the easy way.

Scar tissue from production going down and staying down is probably powering those code reviews and I think will be teaching this wave of vibe projects a few hard lessons. I've had to learn a few things the hard way like this and it's as effective as it is painful.

I'm very pro ai-generated-software in the right context. I think being able to vibe out software as needed is awesome and could finally unlock the potential of our computer and data dominated world. I also think we haven't yet learned as a culture where this new thing is different from traditional software and misunderstanding that is where a lot of the pain will be felt.

cyanydeez (40 minutes ago)
I'm using a local model. The code gen is never fast beyond the first bit of context. As the context grows, it slows down. It's basically its own self-limiting process. When it starts doing things, the lethargy threshold drops and triggers me to 'do it myself'; in particular, I've developed a sense for where it starts doing stupid things, and that's valuable.

There must be an epistemic problem with just how fast these SOTA models run. I don't think it's just that my local model is dumber; I think the speed of token gen trains my brain with different expectations. There's no way it'll just generate hundreds of files by itself. When it can, via an opencode loop with thought files, letting it run for a day is the only way you get that.

epolanski (about 1 hour ago)
The thing is that you seem to have the luxury of being able to dig more into the problem and scratch that itch.

But the industry is changing around you fast.

If MIT-bred devs were already building crap in FAANG before, the trend has been getting nothing short of worse across the industry.

Expectations are rising, the field is becoming a rat race of which engineer can output the most mediocre/acceptable/good enough amount of features in the least time as possible.

Let me make this clear: you're in an increasingly rarer bubble where you have a luxury that is disappearing in this industry, plain and simple.

I have the fortune of having stellar devs around me, people that contributed to projects and software you use every day.

They are also outputting orders of magnitude more than they ever did, and none of them is getting genuinely better at the craft, but it is what it is.

onlyrealcuzzo (about 1 hour ago)
> I'm constantly fighting a feeling on the back of my neck that I need to look over everything it has done and supplement/alter it with my own code.

On the flip side, I'm working on stuff FAR more challenging than I would ever be able to do on my own.

My brain is melting because I can barely keep up with learning how to figure out if I'm even doing what I'm trying to do.

AI might be making me a worse coder, but I don't care. If it hasn't "solved" coding now, I'm pretty confident it will long before my career is over. I don't have a job because I can write code - that's a small part of it. I have a job because I can get things to work. Anyone can code things that don't work (especially AI).

AI is certainly making me a far better overall engineer. Instead of spending my time trying to make the compiler happy (or fixing dynamic type errors at runtime), I can spend my time trying to solve substantially harder problems that I would never even dare try without an entire team to back me up (i.e. never).

Coding - imo - is VERY low on the totem pole of engineering skills.

I don't care if the function is pretty. I care if the system is upholding invariants and performing as expected, and there's adequate testing in place to PROVE to me that it ACTUALLY works.

High performance concurrent code has always blurred the line between sorcery and arcana... Go didn't really solve that. Rust/Tokio didn't. Zig didn't. C certainly hasn't.

It might be easier to prove to yourself, if you're the one doing all the writing, but at the end of the day, code is rarely just for you...

You probably should have the same level of proof whether you wrote it yourself and just "trust yourself bro", or whether a Chinese Room wrote it for you.

I feel like I'm living in a Brave New World, and - at least for the time being - I'm enjoying it, even if it feels like I'm sprinting as fast as I can and still unable to keep up.

ferngodfather (about 1 hour ago)
> My brain is melting because I can barely keep up with learning how to figure out if I'm even doing what I'm trying to do.

This is not a good thing. You should understand what your code does. Writing code nobody can understand is not a flex.

onlyrealcuzzo (44 minutes ago)
> You should understand what your code does.

It is not hard to understand what a line of code does...

It is hard to keep up with solving the problem I'm trying to solve...

thisisthenewme (about 1 hour ago)
As a developer, I kind of feel like this all smells like job security.

After using LLMs for a while, I have to admit it's pretty nice, and I like using it. I've been vibecoding a few apps, and it's a good dopamine hit to immediately see your ideas come to life. However, based on my experience, it will bite you if you trust it blindly. Even in my vibecoded projects, it keeps adding "features" without me asking for them. Since they're just pet projects, I don’t really care as long as the end result is what I'm expecting, but I don’t think companies will be as flexible. I also don't think customers would like it if features changed or got added with every new fix or update.

So this could go in a bunch of different directions from here, but to summarize the current situation:

    A lot of companies are heading in this direction.
    Without proper engineering, AI will easily write more code and potentially change the application unintentionally.
    We will have fewer junior engineers entering the market because of fear around AI and reduced hiring.
    AI usage will hit a critical point where it is making massive amounts of changes, and the people "prompting" it might start getting overwhelmed.
    We will end up with more features that people have to keep in their heads. I don’t think we can trust LLMs 100%, and because of that, developers will still need to know exactly what the application does.
    Eventually, there will be a lot of bugs, and developers will complain that we need additional human resources.
    Hiring starts again.
I think, right now, the toughest position is for new developers, and the best position is for people already in the market.
mbonnet (20 minutes ago)
this is pretty much my conclusion. I try very hard to teach my interns the straight and narrow.
cub-creature (19 minutes ago)
While I agree with the general sentiment that decentralized, bespoke solutions will explode, and require some maintenance (which may in-turn result in more hiring), I've seen plenty that still makes me hesitant to fully embrace this idea as likely. I know this is a wall of text, so forgive me while I doom-post my thoughts for a few minutes:

For one thing, the efficiency gains are massive. Bigger than any other tool, for any other price. Our company's main product is a web-app. We've been working on a re-write of our main product over the last few years. In one afternoon, I set up a new project with our desired stack, and was able to vibe-code an MVP of our product that we've been working on in a matter of hours. It wasn't perfect, of course, but I prompted feature after feature in bite-size prompts, each one taking between 5-10 minutes to complete. It looked pretty professional, and by any measure it was certainly "good enough." Given a little more time, I could solo ship and maintain what has taken us a few years to build as a small dev team. Unfortunately, this is more like a cheap "full team-replacement" rather than an efficiency-improving tool

Then there's the non-technical CEO AI hype-train. Our CEO (and the rest of our directors) have fully embraced the Claude suite of agentic tools. They're all regularly spinning up mockups, apps, and toolchains every single day. I can tell they're addicted to it, and they see the gains first-hand. In fact, while it hasn't happened yet, I wouldn't be surprised if the CEO laid off the majority of the dev team and vibe-coded the entire app himself (along with a few experienced devs). For now, they hold the view that "AI is a multiplier, not a replacer!" and in the same sentence will say "if this allows us to go the next few years without hiring again, that's a win!" I was asked point-blank why we couldn't just vibe-code our whole app. I didn't really have an answer. Yeah, there's the nice thoughts like "we wouldn't know how to maintain our app" -- but Claude would do a decent job in a single dev's hands, or "AI will potentially change the application unintentionally and introduce bugs" -- but proper observability, testing, and further prompting could fix those things in minutes to hours.

Frankly, it just doesn't make sense for companies to keep their whole dev team around anymore. No matter how many projects you launch and initiatives you tackle, the backlog will rapidly shrink, while individual dev capacity grows to exorbitant heights. Non-technical CEOs don't care about tech-debt, cognitive debt, poor software design practices, learning to code, keeping devs smart, the joy of problem solving, the art of a good algorithm or architecture; they care about shipping a product that works reasonably well, provides value, is worth paying for, and doing so for the cheapest investment possible. Unfortunately, AI fits THAT bill in nearly every single way.

I'm hoping you're right, and that the sheer volume of software being created now will increase demand. I'm worried, though, that it will never be enough to offset the massive capacity gains we get from AI.

throw4847285 (about 1 hour ago)
We talk a lot about the risks of AI in schools, but those same risks apply in any learning environment.

I recently started a new job and I find that AI is making it so much harder for me to onboard. I am adjusting to my role much slower than my peers who are using AI less. I am coding in a language I am unfamiliar with, which makes the lure of vibe coding stronger. I am at least skilled enough to recognize when Claude gives me an answer that either makes no sense or is unnecessarily verbose. But the more time I spend asking Claude to write code, the less I feel like I'm developing the skills that the job requires. Plus, when I submit a PR, I lack the necessary confidence in my own work, which just feels bad.

Honestly, another part of this is that I'm asking Claude to search through Slack and docs for answers to questions when I should just ask another person. The AI is feeding my social anxiety, luring me into avoiding human contact that I know will be good for my understanding as well as my general need for social interaction.

That all sounds like I am absolving myself of responsibility, but I think it's important to point out how a given technology is especially addictive for a certain type of person, and traps them in a negative behavioral cycle. If I hold off on relying on AI now, I suspect I can grow in my skills to the point that I can delegate tasks to AI that are rote and easy for me to verify their results. It feels challenging, but it's necessary.

xiaoyu2006 (about 1 hour ago)
It is the worst time for the apprenticeship system (internships). Everyone expects you to ship fast and well with AI, but you barely have time to pick up any skills during the fast iteration.
jasonjei (about 2 hours ago)
I’m not using AI to eliminate thinking but to free me from the rote mundane code writing. AI is perfectly competent at writing code once a prototype is implemented.

I do write initial proof of concept crude prototypes (not commented, hardcoded variables, etc), and AI does the productionizing of them. It has really allowed me to command a team of agents instead of keeping track of a bunch of humans of varying work ethic, skill, and ability to maintain high code quality. And often AI is very good at maintaining patterns used in the code base or even keeping them to industry best practices.

When using AI you will no longer be writing so much in programming languages—English or whatever language you talk to the LLM will be the main language.

hirvi74 (2 minutes ago)
What even is rote, mundane code?

How much of this rote, mundane code do you honestly have in any given project?

comrade1234 (about 2 hours ago)
For my current project I'm coding every day in Java, Ruby, and JavaScript. I waste a lot of tokens doing what used to be simple Google searches for language differences, since I mix up things like the null-safe operator in Ruby vs JavaScript, or what the continue/break statement is in Ruby vs Java. I think Claude is probably very disappointed in me that the most complicated thing I use it for is refactoring old Java loops to use more modern streams, which can be hard for a human to write off the top of their head.
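For what it's worth, the loop-to-stream rewrite being described is mechanical enough to show in a few lines (the class and data here are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

public class StreamRefactor {
    // The old imperative style: filter and transform with an explicit loop.
    static List<String> loopVersion(List<String> words) {
        List<String> out = new ArrayList<>();
        for (String w : words) {
            if (w.length() > 3) {
                out.add(w.toUpperCase());
            }
        }
        return out;
    }

    // The equivalent stream pipeline - the kind of mechanical rewrite
    // the comment above delegates to Claude.
    static List<String> streamVersion(List<String> words) {
        return words.stream()
                .filter(w -> w.length() > 3)
                .map(String::toUpperCase)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> words = List.of("to", "stream", "or", "loop");
        System.out.println(loopVersion(words));   // [STREAM, LOOP]
        System.out.println(streamVersion(words)); // [STREAM, LOOP]
    }
}
```

Because the two versions are behaviorally identical, this is exactly the kind of change that is easy to review by diff, which is what makes it a low-risk thing to hand to a model.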
d1sxeyes (about 2 hours ago)
It doesn’t help that Google has gone to shit though, and what used to be a simple Google search is now an enshittified embedded experience with an AI anyway.
Bolwin (about 1 hour ago)
No one's forcing you to use Google. I've found Kagi to be pretty good
syl5xabout 2 hours ago
I feel that few will have the privilege of having the time to write code by hand. And let's look at what we are actually writing: most of the time, for me, it's nothing novel, nothing fancy. It's the same old create-a-backend-for-X, fix some simple bug, and stuff that is trivial for a mid-senior programmer. The harder tasks are mostly (again, for us) architectural decisions over the code, and I am even thinking about how we can develop a system where the LLM wouldn't derail on feature implementations. Anyway, what I am trying to say is that writing code by hand may be okay for now, but in the future I believe the shareholders, and whoever is on top of you, will want you to deliver features and bug fixes FASTER with the help of LLMs, and if you can't deliver that you will underperform. So in the end it's not what we want but what the shareholder wants. Of course, if you aren't drained by this, you can write code by hand in your free time. I don't want to sound like a doomer, but I believe this will very much be a reality sometime soon.
lacedeconstructabout 1 hour ago
It was never a velocity problem, though. Rapid progress comes mainly from designing better systems and building tight abstractions, not from writing the same primitives faster.
pmg101about 1 hour ago
That certainly used to be the case.

Do you think that in 2026 maybe rapid progress can also come from using the same primitives faster?

I'm still figuring this out but I'm certainly open to the possibility.

iLoveOncallabout 2 hours ago
Everyone has the time to write code by hand, because AI doesn't yield real productivity gains.
jkkola27 minutes ago
I'm a data analyst and a bit of a data engineer, which comes with the territory. I maintain some unholy pipelines that I wrote a few years back, and they were due for refactoring for a long time. I do AI-fueled refactoring in the most basic way: paste the code, ask for suggestions, implement the ones that are sensible, ask for clarifications whenever something's new to me. The last part is absolute gold. I've learned so much with the help of AI that I think the more I use it, the less I need it. Rinse and repeat.

I'm at the other spectrum of what the author feels. I feel smarter and more capable with AI, and I'm actually surprised how helpful it is in my workflow. I still write code by hand but I know way more than I would without it.

Granted, I'm the "accidental programmer in a team that's completely non technical" and AI is simply a senior I'd never have otherwise. YMMV but I think if you use the tool as a more expressive Google search it can be a great companion.

Pure vibe coding is not far from "let's outsource everything", it's just a bit cheaper and more available.

esafak2 minutes ago
What a bizarre article. He laments the use of AI and then hopes that it might cause a flood of programmers.
erelong14 minutes ago
Ask AI how to make you smarter and use some discernment about whether the suggestions would accomplish the goal, or look up human-written articles on how to use AI to enhance intelligence.
alasanoabout 1 hour ago
The main thing that was dumbing me down (and burning me out) was having to babysit LLMs on anything except basic tasks if I care about code quality/structure/maintainability.

I love coding, it always felt like Legos for adults. Not that Legos aren't also Legos for adults.

But there's no fighting the fact that we won't be writing 99% of the code anymore so I take pleasure in crafting the specs and requirements clearly, that's where I put the effort.

And then to avoid having to babysit the agents to get them to stick to the plan, I built a super robust external orchestrator that forces multiple review and fix rounds until I get the result I want.

I'll be fully open sourcing that soon also https://engine.build

fapi197430 minutes ago
I believe your words should be your own. I refuse to let ai strip my words of their idiosyncrasy. I refuse to put my words into a machine that robs them of their humanity. They are mine, they are me. Working with a human editor is an act of love and creation. Working with an AI editor is an act of mediocrity and sacrificed originality.
HeinrichAQSabout 2 hours ago
Understand you 100%. That's why I force myself to study maths as a "hobby" at a remote university. It's completely useless these days, since I will probably never reach a level where I am better than current frontier models, but it sharpens my own mind just by doing it. I would compare this to the same principle that applies to physical training: it's no longer essential to be physically active these days, yet it's still quite helpful. It would be dumb not to use AI, since it is useful, but it's also dumb to watch yourself getting dumber and do nothing about it.
otrv44 minutes ago
I find that a good way to battle this is to realize it's not either the dev or the AI that should be coding at a given time. People need to transform their workflow so that the AI codes alongside them.

I try to keep coding on complex issues myself at all times, while offloading a boring, non-architectural, boilerplate-heavy task to it in the background in a git worktree.

I ask it to work in small iterations and commit every step of the way. After my coding session is done, I can go back and review its code.

WalterBrightabout 1 hour ago
Well, James, forgive me for being so inquisitive; but during the past few weeks, I’ve wondered whether you might be having some second thoughts about the mission.
nancyminusoneabout 1 hour ago
Computers read my code, so I don't mind upsetting their feelings.

But why would anyone use AI to write documents or articles? Do you really respect your recipients so little that you can't be bothered to share your own thoughts?

I might as well get an AI to call my own mother on mother's day.

mjfisherabout 1 hour ago
I think the specific case of having a long conversation with an agent about what you're trying to achieve and why, and then having it update a README or a skill based on that conversation, is a useful thing to do. It captures the context of the conversation without you having to essentially write the same thing again.
itissidabout 1 hour ago
Has anyone gone back to doing code katas, code craft like exercises by hand? They help keep me grounded.

Also, I feel like it's fine to let AI write your code. I felt very much like the OP did. A couple of things help keep my sanity. One is that, as developers, I think our job has evolved into knowing which decisions an AI makes are good and which are bad, whether in code or design; there is nowhere a developer (or, for that matter, a knowledge worker) can hide from AI. In this world you will be forced to communicate with these tools, partly because as a community we have decided (for better or worse) that AI should bring non-trivial productivity gains to software development.

The other one is something I want to validate which is for those of us who are mediocre at coding, it might be a gift because it would free up some time and thus mind space to consider what we are actually good at.

scruple34 minutes ago
I use coding agents and LLMs at work where I'm more or less to some degree required to. At home, I write code the old fashioned way. Not katas, etc., necessarily, but I've decided that if we live in a world where code is cheap (or cheaply generated) that I'll hone my Lisp skills. I haven't used a Lisp in a few years and it's brought a lot of the joy of programming back for me, at least at home.
jonstaababout 1 hour ago
I have been telling people lately that I feel like I'm losing my mind. And I'm not even someone who has leaned into AI coding that much either; I've just tried to learn the tools since Claude got "good". But my inherent laziness, which was always flattered as something that makes me a good programmer, has made me unable to use the tools with the required discipline. The result is that I have not thought deeply about the software I write for around 3 months. Every additional week that goes by without me doing a refactor or serious feature addition saps my confidence. I know I can still code. But I feel worried that I can't. Today I am refactoring a 4k LOC AI-written rust codebase. I don't know rust, but I will finally learn it today. And I can already tell the end result will be 50% the size and immeasurably more coherent.
raincoleabout 2 hours ago
> I just caught myself about to copy and paste it into Claude to see what it thinks because I'm worried that it doesn't make sense or it reads funny or there's something missing

I unironically believe this is a very good habit. When it comes to writing, instead of starting with AI, finishing a chapter by hand first and then asking AI to review it strikes the best balance.

collinmandersonabout 1 hour ago
> I just caught myself about to copy and paste it into Claude to see what it thinks because I'm worried that it doesn't make sense or it reads funny or there's something missing. That's the self-doubt that it's feeding on and what I need to fight back.

This is where I'm at. I feel like I need AI to review everything.

gavinhabout 1 hour ago
When I work with Claude to plan a feature and then review Claude's implementation, I don't understand the feature as well those I developed without AI assistance. I don't recall details of the feature's behavior as well, even days later. I suspect that this is not surprising to anyone who has studied pedagogy. I've been working on applying some exercises during code review (including self-review of my own AI-assisted code) to improve comprehension and recall (https://bridgekeeper.io/). If this problem resonates with you, I would like to talk.
winridabout 1 hour ago
I don't feel this way. I have just been tackling more and larger problems, I think? This week one of many things for example is switching a multi-master KV store for tracking views on individual objects to tiered hyperloglogs that periodically merge. I could do this without AI, but it would take me a week instead of a day.

I think, if you're not feeling challenged, you're probably just doing the same work but faster. You should try to tackle harder problems, too!
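As an aside on the tiered-HyperLogLog approach mentioned above: the property that makes periodic merging cheap is that merging two HLLs is just the element-wise max of their register arrays, and the result is identical to sketching the combined stream directly. A minimal illustrative sketch (precision, naming, and the omission of a cardinality estimator are my own simplifications; the commenter's actual system is not shown):

```java
import java.util.Arrays;

// Minimal HyperLogLog register sketch, for illustration only.
public class Hll {
    final int p;        // precision: 2^p registers
    final byte[] regs;

    Hll(int p) {
        this.p = p;
        this.regs = new byte[1 << p];
    }

    // Record one already-hashed item.
    void add(long hash) {
        int idx = (int) (hash >>> (64 - p));   // top p bits pick a register
        long rest = hash << p;                 // remaining 64-p bits
        byte rank = (byte) (Long.numberOfLeadingZeros(rest | 1L) + 1);
        if (rank > regs[idx]) regs[idx] = rank;
    }

    // Lossless merge: element-wise max. Order and grouping don't matter,
    // which is why tiered/periodic merging loses no information.
    Hll merge(Hll other) {
        Hll out = new Hll(p);
        for (int i = 0; i < regs.length; i++) {
            out.regs[i] = (byte) Math.max(regs[i], other.regs[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        Hll a = new Hll(4), b = new Hll(4), whole = new Hll(4);
        long[] hashes = {0x0123456789abcdefL, 0xfedcba9876543210L, 0x0f0f0f0f0f0f0f0fL};
        a.add(hashes[0]);
        b.add(hashes[1]);
        b.add(hashes[2]);
        for (long h : hashes) whole.add(h);
        // Merging partial sketches equals sketching the whole stream.
        System.out.println(Arrays.equals(a.merge(b).regs, whole.regs)); // true
    }
}
```

Because merge is commutative, associative, and idempotent, each tier can fold its children's sketches in at any cadence without double-counting.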

h14habout 1 hour ago
I'm worse at producing code by hand, but feel smarter overall.

I've learned an insane amount in a very short period of time, and have been engaging in much more challenging problems.

Instead of "what's the right syntax for this for loop again?" I'm asking "what's the business critical module in this system and how do I structure the test suite to prove it's working to spec?"

VikRubenfeldabout 1 hour ago
"That's the self-doubt that it's feeding on and what I need to fight back."

Yes -- now let's talk about the correct form of fighting back.

It is not "I don't want to feel self-doubt so I will suppress that feeling."

It is, "The self-doubt is valuable -- it's pushing me to improve."

The AI is never going to be able to say what you really mean. But it may inspire you to push harder to improve your ability to do that.

Eighth2 minutes ago
I reckon you're right. The self-doubt is a signal and I can use it as a tool.
temporallobeabout 2 hours ago
I only really use GH CoPilot and while it’s really damn good at predicting what I’ll do next, I find it really makes me lazier. It’s like using GPS - it’s much faster, easier, accurate, and reliable than not using it, but I have found I don’t remember routes like I used to, as if that part of my brain just stopped working. If we don’t use a skill, our brains seem to want to almost immediately reclaim those resources for something else.
Accacinabout 2 hours ago
I have a nice balance of using AI at work as a C#/TS developer which allows me to get stuff done and working on personal projects at home using AI purely for ideas when I'm stuck or not able to figure something out myself.

I personally think it can be a great tool for learning but it's so easy to fall into the trap of getting AI to do everything for you.

I've also used it for personal projects like a Chip8 emulator I wrote in C where I'd managed to run a few basic games and ran out of steam. Used AI to help me implement the rest.

dabinatabout 2 hours ago
It doesn’t have to be this way. You can use AI in ways that don’t rot your brain. You can delegate easy tasks to the AI to save time, while saving the harder tasks for yourself. Or you can treat it more as a mentor / tutor and have it explain why it made certain decisions.

I find that AI fails at things that are truly creative. I have been thoroughly unimpressed with ideas it has had or things it’s written for me. There’s still a lot of room for human creativity.

asdffabout 2 hours ago
Well, the "easy" tasks people are delegating are still leading to atrophy. Stuff like having it take over your writing. Now you feel you cannot write without this crutch. I've seen stuff pitched like AI that makes your slide decks for you. That to me is dangerous because creating the slide deck in a coherent way is imo a very valuable way to understand your project and keep on track with the story you are trying to tell about the work. I think a lot of what we think is easy or even boring has a lot of value in building up our understanding.
iamcalledrobabout 2 hours ago
It baffles me a bit that people are working so hard to replace themselves with AI. It's such a high bar for the AI to hit, and takes the creativity away from the human.

I have a pet theory that perhaps the optimal way to use AI will be more like an "exoskeleton" that turns you into a super-human programmer. Something that plugs the deficiencies of the human programmer, rather than replacing you entirely.

wccrawfordabout 1 hour ago
I wouldn't keep the "hardest" tasks. I'd keep the important ones. It's often the same, but there are differences. And I'd argue that the important ones are the ones that you most want to retain the ability to do yourself anyhow.
miltonlostabout 2 hours ago
> You can delegate easy tasks to the AI to save time, while saving the harder tasks for yourself.

This sounds a lot like "You can skip the fundamentals of basketball and just focus on dunking!"

0xkvyb42 minutes ago
It's crazy. We're at a point where I commit code I haven't seen, reviewed by one AI and followed up on by another, and it's just kind of scary.

This thing will explode in our faces sooner or later. Also makes me feel like an imposter rather than an engineer.

Maybe that’s actually what I have become.

voncheeseabout 2 hours ago
Relatable! Or at least making me feel dumb (at times). Things that help me feel smarter are

* actually writing more on my own - created a personal blog just to get myself to write more

* upleveling my thinking - think more about problems and framing

* leverage my experience - guide (or sometimes force) the AI assistant to leverage my experience to avoid problems

* learning new things - rather than let AI just replace things I can do, I use AI to help me learn new things/technology faster than I would have pre-AI

blainabout 2 hours ago
> learning new things

I wonder lately: doesn't all that new knowledge push out the old knowledge? As in, new things replace old things we know. I don't know of any studies on this, but do we have infinite capacity for knowledge?

What about retaining it? I catch myself asking AI about random things that pop into my head, reading the answer, maybe using that knowledge once, and later no longer remembering what it was. Maybe it would stick if you used that knowledge in practice from the get-go, but projects get so complicated that sometimes it seems like there is not enough space in my brain for the things AI is teaching me.

eikenberryabout 1 hour ago
Knowledge memory doesn't really work that way, it is more like that it is constantly fading unless re-imprinted by use and learning new things is just imprinting new knowledge on top. The new knowledge will form connections with the old knowledge which will help keep some of it from fading, but not all.

Another way of looking at what you said is that practicing the new knowledge takes the place of practicing the old knowledge. So it isn't the knowledge that is replaced, but the learning (imprinting).

voncheeseabout 2 hours ago
New knowledge doesn't necessarily push out old knowledge, and we probably don't have infinite capacity for knowledge. That being said, at least in my experience, the time when new pushes out old is when old is less useful than new.

Retaining (again just speaking for myself) requires actually using / applying the knowledge at some point within some timeframe of learning it. Otherwise yeah it fades to the point of disappearing over time.

danesparzaabout 1 hour ago
It's literally changing your brain when you don't use it like you used to. So, yeah. It is.
coldteaabout 1 hour ago
Most people who say this didn't/can't happen to them are the worst cases...
Quarrelsomeabout 2 hours ago
Is it tho? I get paid more these days to write less code. Is it dumb to be paid more/do more, have more oversight and deal less with the minutia?

I'm still concerned enough about the specifics to worry about background OAuth refresh tokens silently failing in a mission-critical real-time system.

I'm not coding it, but I'm still thinking it. That's the important part, ain't it? Is it dumb, or just clever delegation?

malickaabout 2 hours ago
No, not really. You know what to think about because you were trained to by coding through the problem by hand. If you stop doing that, you stop learning the specifics of whatever problem domain you work with.
Quarrelsomeabout 2 hours ago
Sure, that's why we probably shouldn't start with vibe coding. Or otherwise at least learn formal methods to test against assumptions and doubly triply check vibe coded output.
ge96about 2 hours ago
Me personally, if I had the money to get out of dev, I would. It's just not fun anymore if you HAVE to use AI to code instead of doing it yourself. That's the name of the game: velocity.

I like making things myself, I have self-navigating robotics projects I do on my own time, but I'm not gonna use an AI to do it for me, the joy I get is figuring it out myself.

I will use AI if I'm stuck on something or need a specific algo written that I've spent enough time on and couldn't figure out.

Quarrelsomeabout 2 hours ago
I just question whether we're talking about "dumb" or "fun". I agree with the latter but question the former.
croesabout 2 hours ago
Your thinking is based on your experience with code; that's why you do the thinking and not your customers and managers.

You lose some ways of thinking if AI does all the coding, unless you study that code. But even that is still different from creating the code yourself.

Same with authors or songwriters. Their brains are used to creating stories and songs; that's why it's easier for them to come up with ideas.

If you are just a reader or listener your brain doesn’t get wired in the same way.

AI is a lesser problem for already-experienced developers, because they just lose some abilities. But new developers will never gain those abilities in the first place, which will limit their thinking, especially for edge cases that need creativity.

Imustaskforhelpabout 1 hour ago
> Is it tho? I get paid more these days to write less code. Is it dumb to be paid more/do more, have more oversight and deal less with the minutia?

I do think that what people are being paid might get adjusted to whatever is happening.

First it was off-shoring; now tech companies have a convenient excuse to lay people off, and they genuinely believe that companies can be 5-10x smaller with AI and that 90% of code will be written by AI.

They then push it on engineers, and some adopt it, some don't. It becomes Goodhart's law: people just start spending tokens to look good and spearhead using AI because 1) the corporate line recommends it and 2) the points you talked about.

The AI bills blow up (Cloudflare spends $5 million per month, probably more, on AI, iirc), and with all of this, the company fires people.

All the laid-off software engineers then try to create another AI tool (...using AI) or compete in a job market at one of its all-time lows. Combine this with the trillion-plus dollars of US stock market value attached to the AI bubble.

I do think that you are paying a price in all of this, I feel like job insecurity is at an all time high, people are just scared of losing jobs from my understanding within this career. Some are closer to retirement than others but that's about it.

To be honest, I think nobody is that happy. The software engineer is worried about his job, the CEO is worried about being replaced or having his product replaced by AI, the AI company is worried about how it would be profitable in the first place, the investors are worried that they got into a bubble, the government is worried about all these people, and other distractions (think UFO files, for example) and wars are happening and successfully diverting our attention from real issues.

I don't know, but I think we are all paying a price, and I say this as someone (a young guy in his teens) who sometimes feels the most over-empowered by AI. I just feel like we lost something more critical along the way. We lost some of our humanity and peace as we embedded this technology, and now there are people who think about it 24/7. To be honest, I sometimes feel I would have fared okay without AI too, and I don't care that much about my personal gains; I think the world would probably have been net positive if AI had plateaued or never been created.

pton_xdabout 2 hours ago
We'll just move to a higher level of abstraction; thinking will be like efficiently coding in assembly, no longer necessary in today's world.
happytoexplainabout 2 hours ago
People say this constantly, but it's a qualitatively different jump from all previous abstraction layers. Previously, the part of your brain you had to use, and the way you had to think, changed from old layer X to new layer Y, but they were still very similar qualitatively. A person who was good at and enjoyed layer X either naturally was good at and enjoyed layer Y, or they could achieve both of those things after a little time. But with LLMs, the jump is much more lateral.

To do the thing I hate and use an analogy: It's not like asking a furniture maker to start using power tools; it's like asking a furniture maker to start telling a robot to make the furniture, in English. Yes, the people who were already good at furniture-making will have an advantage in how to direct the robot - but the salient point is that it's a recipe for misery for many people.

raincoleabout 2 hours ago
You should've turned the sarcasm detector on.
johnfnabout 2 hours ago
Hmm. I use AI to write almost all my code, and I feel that the "part of the brain" I use is mostly the same. Pre-AI I spent a lot of time thinking about code architecture, schemas, APIs, etc. Post-AI I spend a lot of time thinking about essentially the exact same thing. Yea, there are some things that I used to think about that I don't now - the fiddly bits, like why my parentheses weren't balanced or what field I was missing that was causing a 3rd-party API to fail. But the work feels more similar than different.
hansmayerabout 2 hours ago
Ha ha ha... Actually, in the last 20-30 years most people learnt programming in assembly not for the sake of building programs in assembly; it was taught so you can have a grasp of microprocessor architecture: instructions, interrupts, registers and all that. It means being fully aware of your environment. Without this knowledge of our environment, not only in our jobs but also generally in life, what are we? Not more than wild animals surviving on instinct and an occasional burst of consciousness. Well, no thanks, I don't want to be an Eloi.
svntabout 2 hours ago
A higher level of abstraction that doesn't require thinking? Did you mean to write thinking here?
Animatsabout 1 hour ago
Putting info into a spreadsheet is a higher level of abstraction that doesn't require thinking. There are many concrete representations like that. LLMs don't use them much. This is a lack.

Can you point an LLM at a body of code and tell it "give me a concise UML chart of what this does"? I'm not advocating humans writing UML, but some representation like that may be useful to AIs. Except that they don't really do graphs very well. We may need a specification language intended to be read and written by AIs, readable by humans but seldom written by them. Going directly from natural language specifications to code triggers the LLM blithering problem of generating too much code.

svnt32 minutes ago
I’m not sure you and the parent are talking about the same thing.

I think they were making a joke about us getting dumber that I am confused about the premise of.

You seem to be suggesting we are going to fill spreadsheets (which Claude already does pretty well) and that spatial reasoning is an insurmountable problem, instead of just something that doesn't emerge naturally from training on text/code corpora.

simianwordsabout 2 hours ago
Higher levels of abstraction require more complex levels of thinking. Why do you think it won't?
happytoexplainabout 2 hours ago
The entire point of abstraction layers is that they require less thinking most of the time (and, usually as a tradeoff, more thinking a minority of the time).
ryeightsabout 2 hours ago
Reads like great satire to me.
bogzzabout 2 hours ago
Welcome to Costco. I love you.
EvanAndersonabout 2 hours ago
> ...like efficiently coding in assembly, no longer necessary in today's world.

Assembly is a stretch (albeit a few applications still need it), but otherwise that sentiment (and people who actually believe it) speaks a lot to me about what makes today's PCs slower, more latent, and less enjoyable to use than the machines of the past. Today's world sucks.

steezeburgerabout 2 hours ago
I've been thinking a lot about the new primitives and paradigms we'll see.
AlecSchuelerabout 2 hours ago
Care to share some of these thoughts?
simianwordsabout 2 hours ago
1. we will be thinking at the level of systems, like services and DBs, and forget about inconsequential things like methods, classes, and variables

2. we will think more about the verification loop - tasks will be chosen that are easier to verify

3. the concept of the difference between "generation" and "verification" will be more mainstream [1]

4. spec driven development will become more common

5. scenario testing will become mainstream

i have few more predictions like these.

[1] I wrote a blog post on this explaining why I keep this generation vs verification difference in many parts of life https://simianwords.bearblog.dev/the-generation-vs-verificat...

riazrizviabout 2 hours ago
It converts ICs into project managers, by default. I've been wrestling with this issue for a year.
tombertabout 2 hours ago
Yeah, I've felt like it has converted my job from "writing software" to "babysitting interns".

There are things that I think are very cool; there are lots of projects that I've sort of wanted to do for the last decade that I have pushed off because they're reasonably high effort and I don't want them that much, so being able to have a pretend intern write it for me has been great.

On the other hand, I do think that using Claude/Codex to do all the coding at work has become a little soul sucking. Now instead of being paid to do fun software work, a lot of my work still boils down to babysitting interns.

When I do get to work on projects that are interesting, it's still fun because I can justify writing TLA+, and using that as a guiding spec for my projects. The problem is that most work really isn't that interesting; a lot of it is glorified SQL queries, or CRUD, or "put thing into Kafka in one place, and take it out in another place". Those jobs can be tedious, but they aren't interesting, and now instead of even getting that, I yell at Codex to do it and I awkwardly sit and wait.

I didn't think I'd miss writing stupid CRUD apps, but here we are.

riazrizviabout 1 hour ago
I'm convinced Claude Code and Codex are not the future. The cap seems to be a 3-500 line file, so I just use ChatGPT and/or my own front end to the APIs of OpenAI and others, including local models. Much beyond that, it will not do what I want; too many expert details to get right.
tombertabout 1 hour ago
When it was just ChatGPT, I actually really enjoyed it. I still had to do a lot of the work, but I could use ChatGPT to explain arcane logs and help me diagnose errors. It didn't feel like babysitting interns, it felt more like "smarter google".

Codex and Claude have been a bit soul sucking. I feel like I'm doing less of the planning and the like. I acknowledge that most code that makes it into production doesn't have to be amazing, but I would still take some level of pride when I would figure out an interesting optimization, even for a simple CRUD app, and now I am somewhat deprived of that kind of stuff.

madrox33 minutes ago
I would argue the last 20 years of app development is what made people dumb.

During the "don't make me think" era of software design, if you wanted to make software you got really good at identifying the use case and using design thinking to optimize the paths to the goal. You could build a business around a very narrow set of flows. The only thinking a user had to do was pick The App for That. They never had to think about how they wanted to approach their task, which is a skill in itself.

AI isn't like that. There's a million ways to use it. That's a big part of what makes it cool, but it requires the user to thoughtfully approach their workflows. Not everyone is used to doing that.

kingstnapabout 2 hours ago
I agree 100% with this article.

You need to spend time on coding without agents and writing without AI as practice if nothing else.

You should not get complacent in offloading all detail oriented work to agents.

hirvi7414 minutes ago
Perhaps my ego is preventing me from becoming too addicted to LLMs. It's not that I think the tools are incapable. Rather, LLMs are probably far more capable than me in nearly every programming metric that matters.

However, if I were to release a solution that I 'vibe-coded' into the wild, I would feel quite a bit of shame if someone figured out that I used an LLM to write the entire thing. I know it may come off as a bit silly, but it is a feeling I cannot seem to shake; a feeling that prevents me from wanting to adopt the technology in full force because... well, I did not truly create the software if AI did all the work. Sure, the software might have been my idea, but that does not bring me much fulfillment.

I know programming is just a means to an end, but I feel like I have put in a lot of hard work over the past decade and a half just to barely scratch the surface of mediocrity. I was attracted to this field because I saw a sense of beauty in computer science (and programming). It felt like one of the few remaining options for a creative job that was spared from the cutthroat nature of a career in the arts.

Like the Samurai class during the early industrialization of Japan, maybe it's time for me to lay down my sword too.

weezingabout 2 hours ago
You are doing this to yourself.
dbvnabout 1 hour ago
You haven't written a line of code in 2 years and you're confused why it's making you feel like you can't code?
steezeburgerabout 2 hours ago
I enjoy using and orchestrating agents a lot to build software, but have never really had the desire to replace my writing with LLMs. I don't write a whole whole lot, so maybe I just don't have enough writing to do to make it appealing, but my emails, blog posts, comments, whatever are the last thing I want to automate. Not only because it's less personal, but because I'm so tired of reading AI cruft myself. So much more text in tickets than there needs to be, for example.

And how are people forgetting to code by using LLMs? Do they just mean they forgot the syntax of a particular language? Or forgot how to architect features or how the development lifecycle works?

I've mostly used LLMs to build more complex things that would have been a lot to manage previously, or to build something completely new and learn how it works. I feel like I've only become a better engineer (and programmer too) because of LLMs.

projektfuabout 1 hour ago
It is making me feel less dumb when I use it to get Linux admin things done because 1) it gets it wrong and I have to help it and 2) even though I would have gotten frustrated and given up without AI it shows me that Linux has gotten way out of hand for administration. Wheels have been reinvented and conventions have been changed for no good reason, or because of https://xkcd.com/927/
marioptabout 1 hour ago
I feel your pain.

Today I'm forcing myself to learn SwiftUI and type each character with my hands. There is a part of me asking, "Why are you wasting your time instead of prompting it and getting the UI you want in minutes?" Well, even if I use AI, I must know the domain I'm operating in to create good products instead of useless slop. Even though I've been coding for 20 years now, I still need to be humble to grow in anything new. I can vibecode full apps, but I'm not gonna pretend that my experience isn't playing a massive role in guiding the models.

Don't let AI take away your joy of building stuff; it's totally fine not being "productive" and taking your time. Just force yourself to have at least two AI-free days every week.

han1about 2 hours ago
kimjune01about 1 hour ago
using AI to red-team your thoughts and assumptions is the fastest way to get smart since the dawn of time
gralababout 1 hour ago
We need to separate our emotions from these things. I understand why people don't like AI, or are fearful of it, but we need to have good faith arguments about it. Not this. These articles are just cope.
Aurornisabout 2 hours ago
> With coding, I've been using AI entirely for a year or two. I've been entirely prompting and I haven't written a single line of code. I have mostly forgotten how to code

I've been using AI coding tools a lot lately, though I'm always in the loop. I write most of the important code by hand, but I like to send Claude Code or Codex off to try to come up with a solution in parallel to compare.

Having reviewed so much of my hand-written code side by side with AI-written alternatives, I am still amazed that anyone admits to letting AI write all of their code. Either you're working on much simpler problems than I am, or you don't really care about anything other than making the tests go green and waiting for bug reports to come back so you can feed them back into the LLM again.

Some times the coding tools come back with better ideas than I came up with. Some times my idea is much better. Most often with medium to high complexity problems, if the AI comes up with a working solution it has enough problems that an attentive human reviewer would have rejected it at best. At worst, it creates a mess of spaghetti code with maintenance time bombs ticking away. And that's for one change. I can't imagine what a codebase would look like if you completely deferred to AI tools to do everything.

This quote is even weirder because they claim to have been doing this for two years! Two years ago, coding tools were much worse than they are today. Using AI to write all of your code 2 years ago would have been a weird choice.

When I read posts like this I don't know what to think. Is this real? Or is it exaggerated for effect?

I also roll my eyes a little bit at the idea that not writing code for 1-2 years means you forget how to code. I've been back and forth between 100% management and 100% IC in my career. While there is a warm-up time to get back into coding, you should not completely forget how to code after such a short time. The only reason this person feels like they've forgotten how to code is that they've made a choice not to code for 2 years and, apparently, they don't feel like making any effort to change this. For someone who claims to love writing code, I don't get it. Something doesn't make sense about this writing.

pplonski86about 2 hours ago
Before I ask AI to write anything, I prepare a plan. I was pleasantly surprised when I noticed Plan mode in Codex recently; it makes me feel that maybe others are doing the same, and that's why they added it. Anyway, I start with a plan, then ask the AI to do just one step.

If coding a new feature, I do one step and check the code: running git diff, reading the changes, or just asking Codex to show me what changed.

If writing an article, I ask for only one paragraph. I read the paragraph, and if it is OK, I accept it; if it doesn't reflect my thoughts, I keep working on that one paragraph.

If doing data analysis with AI, I do one step of the analysis and ask the AI to display intermediate results so I can see whether everything is going in a good direction and there are no hallucinations; additionally, I have follow-up prompts for the AI to verify the results. If all looks good, then I continue to the next step.

I don't like the situation where I ask AI to do all the code changes, the whole article, or the whole data analysis in one pass with one prompt. It is simply impossible to check whether the AI is correct, and the results are not satisfactory. You can easily see this when asking AI to write a deep article with one prompt: you clearly see that it doesn't reflect your thoughts.

Maybe step-by-step is the approach to use AI and not feel dumber.
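The per-step review loop described above can be sketched with plain git; the agent invocation itself is tool-specific, so only the review side is shown, and the commit messages are illustrative:

```shell
# Hypothetical loop: the agent has just completed ONE planned step.
git diff                                  # read every change this step produced
git add -p                                # stage only the hunks you actually reviewed
git commit -m "step 1: reviewed agent change"
# Then prompt for the next step of the plan and repeat.
```

One commit per reviewed step also makes it cheap to `git revert` a single step when the agent goes off the rails.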

ge96about 2 hours ago
Use AI to fix that cert
bigstrat2003about 2 hours ago
The cert is fine according to my browser.
ge96about 2 hours ago
Fair seems it was my vpn

As far as the topic on hand, I work with someone whenever you ask them a question they say "AI says..." I'm not a big fan of that.

skeptic_aiabout 1 hour ago
If you had checked, your AI would have said it's not the cert.
epolanskiabout 1 hour ago
I try to compensate for skills atrophy with LeetCode problems and Codewars katas, but the plain reality is that real software engineering, the kind that requires you to absorb a problem and make it intimate, just isn't there.

The work rhythm has ballooned, and as every coworker is now pushing out work (generally mediocre, but acceptable thanks to strong codebase fundamentals and their being good engineers), it is increasingly becoming a rat race over who delivers more. Companies don't even need to promote AI productivity, because engineers being engineers will engineer the minimum effort required to deliver however much output makes stakeholders happy.

I am less and less fond of this work.

I'm sure there will be people with different experiences, but I've never worked as much as I have in the last two years, and I'm burned out. I genuinely feel I've regressed as an engineer, and I see the same in my coworkers, some of them contributors to the highest-impact OSS projects you can think of.

Every day, I lean more and more toward changing industries.

I love code and programming and solving product problems. But the job has changed dramatically.

If the pay-to-comfort ratio weren't that good, I would've done it already.

It's hard to give up 6-7k+ net per month in the southern Mediterranean. I'm way better off financially than most US devs making even more; there's no comparison.

intendedabout 2 hours ago
AI use and low confidence are correlated with lower ownership and deferment of critical thinking skills.

Based on the MIT and MSFT studies.

deathanatosabout 1 hour ago
My company has 3 AIs on every pull request now. They behave as follows:

1. a general coding AI: Completely broken. It should auto-comment, but never does anymore. It stopped a while back; nobody seems to know why.

2. another general AI: You have to @-mention it. It reacts to the message with an <eyes emoji>, but never actually posts a comment?

3. a security bot. Comments, when it thinks there's a problem, in the most obtuse way possible. "SAST findings". But the findings are behind a link, and none of us devs are given access.

I could lean on and press the various people shoving AI down my gullet to, like, look at this, the actual lived experience of devs trying to derive productivity from this mess. But IDK what's in it for me, really.

Even Claude, when it worked, would comment in the most sociopathic manner possible: an English prose description of the problem, attached to an utterly unrelated line of code. Part of that is probably GitHub, which doesn't let you attach comments to arbitrary lines of code in a review; only the blesséd lines can have comments. And literally none of our AIs can format their complaint as a freaking suggested change (i.e., the GitHub feature); no, instead I get English prose.

Honestly for all I know we failed to pay the bill or something inane, but it would be nice if the AI could format an error message, or something.
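For reference, the GitHub feature the parent means is the suggested-change block: a pull request review comment whose fenced code block is tagged `suggestion` renders as a one-click applyable diff on the commented line(s). A sketch of such a review comment (the replacement line itself is illustrative):

````markdown
This null check is missing; apply:

```suggestion
if (user != null) { user.save(); }
```
````

When the PR author clicks "Commit suggestion", GitHub replaces the commented line(s) with the block's contents in a new commit.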

economistbob19 minutes ago
I ran the Qwen 14b distillation of DeepSeek locally once, created a console app, and recoiled in horror at how powerful these things are. The problem is that humans are not designed for default-deny mental processing, so I am not optimistic about the long-range effects, since the hallucination aspect can never be eliminated. No one is going to check everything for errors. A single error in some spreadsheet formula or code somewhere can have catastrophic consequences for years compared to what could have been. I see the transition from expert systems to language models as a travesty foisted upon mankind. Pushing efforts to drive such language-model outputs into data wrangling and analysis is especially heinous, because that affects humans' very lives and their ability to thrive or survive, or even whether they face criminal consequences.
hedayetabout 1 hour ago
also, chatting with AI makes me impatient and delusional.
zer00eyzabout 2 hours ago
God damn, this nail gun is making me lazy; it's like I don't have to swing the hammer any more...

Most people, given a nail gun, can't build a house. That's where the skill is...

I'm not someone whose validation came from the lines of code, but from the resulting working system.

dfxm12about 1 hour ago
AI is keeping me on my toes. Many people in my org are experiencing the Dunning-Kruger effect after being armed with AI and are making such new and spectacular messes that I've had no other choice but to ratchet up governance controls. Improving documentation didn't help. The few people who read it complain to me when it is contradicted by AI.
marknutterabout 2 hours ago
AI has been the best learning tool I have ever used, and it's not even close. I've learned more in the past year than in the five years before it.
happytoexplainabout 2 hours ago
There are two kinds of learning: Reading and doing (and you need both). AI has been great for the reading half of learning, but has harmed the doing half of learning due to efficiency demands. We can still "do" in private, but no longer in our day-to-day.
Ifkaluvaabout 2 hours ago
Yeah it really depends on how it is used
andrewstuart2about 2 hours ago
I was talking to some friends about this over drinks the other day. I feel it has the same effects as any drug (or behavior) that triggers dopamine. If I can get a dopamine hit from lower-effort AI in 10 minutes, and maybe a tiny bit better of a hit doing it myself after a day, why would my brain go for anything but AI? Especially when my DIY muscles are a bit atrophied.

And of course the hedonic treadmill (if that's even valid any more, IDK) has reset the baseline so that anything less than the quick gratification feels like nothing. It makes the stuff I used to absolutely love feel like more of a chore compared to just cranking out features with code only an AI can love.

dogleashabout 2 hours ago
I'm curious whenever I hear takes with your perspective.

Entering the workforce happens at an age where people have built (some more rudimentary than others) a level of understanding and self control regarding delayed gratification and Type II fun.

Did you have the kind of life where you were never really challenged to build that skillset, or is the mental stimulation so strong for you when you use AI that it overcomes executive function?

AlecSchuelerabout 2 hours ago
> Did you have the kind of life where you were never really challenged to build that skillset,

Do you really think phrasing a question like this will ever induce a productive response?

dogleashabout 1 hour ago
I guess I could have phrased it better, but at some point I'm asking about weak self-control vs. whether the drug is that strong. The life-experience thing was meant to lay down a face-saving reason that it's OK to say your willpower sucks: you just weren't forced to cultivate it. Plenty of reasons that can happen in a life.

I think it's pretty normal to be able to reflect on the differences in life skills between myself and those around me. There are things I've struggled with throughout adulthood because, through some happenstance, I was able to avoid that class of challenge as a child.

I didn't learn how to study until my 20s. I didn't have willpower over eating and exercise until my body changed around 30 and I suddenly got fat; then I talked with friends who teased me for being less skilled at something than a teenage version of themselves.

What's the saying: someone who's never smoked doesn't have to learn how to quit smoking?

Imustaskforhelpabout 2 hours ago
Firstly, I salute the author for saying these things. I mean, we know the feeling of criticizing AI, and certainly I criticize it a lot too, but when it comes to personal matters or how I am using AI, there are some things I shy away from saying online, and I wonder if other people feel the same way too.

So for example, AI once deleted my project. I was able to recover it, but through a series of mistakes I lost version control, and IMO I lost a good version. (I think after abandoning that project and coming back, I was able to accomplish it.)

Another example, the one that is biting me the most, is that I wanted to create a copy.sh/v86-based thing where you are able to edit the .img files of distros and save them all within the browser. I was able to run v86 in a custom way, but I wasn't able to mount the images or find a proper way to make it work.

And although this is just an optional project, and I just thought, hey, it would be fun to edit .img files in the browser, it now leaves me disappointed.

I think that disappointment is both frustration at the thing not working and, secondly, the realization that I might be dropping this idea altogether. Now I must admit that this is a field I have absolutely no expertise in, but still, it feels disappointing, and I have kept thinking about it for some time now.

I wonder how many people feel that way when AI is unable to build their project: frustration, disappointment, even a touch of panic. I think it's just wrong how damn much we are relying on LLMs at this point. It feels like the whole economy is doing what I am doing, but with billions of dollars.

Another thing I feel is that young and elderly people end up much the same when vibe-coding. (Yes, specs can help, but LLMs are still autocorrect on steroids.) I feel like we are both forsaking the junior developers and forsaking the expertise built up by senior developers as we replace them with these LLMs.

beholeabout 2 hours ago
I feel lucky cause I started dumb. Unintentional level-up!
photochemsynabout 1 hour ago
Aggressively red-teaming your own work with LLMs is a good habit to get into, with prompts like "I've been told to find the flaws in this argument/presentation/code file/etc." It doesn't save any time, but it is pretty educational, as long as you go back and forth a lot. It can fall into a style-disagreement loop between two equivalent code blocks, since it will try to find something wrong if instructed to do so, which is interesting.

If you don’t do this constantly, LLMS can certainly lead you right down the Dunning-Kruger path (though that’s a big oversimplification of a whole collection of psychological features from idee fixe to narcissism to fear of failure/criticism). If you really work at getting the LLM into the proper state it will happily rip your work apart in a rather cruel and indifferent manner, like an unsympathetic corporate gatekeeper who relishes exposing your flaws in a public setting. Debate club is another tactic that’s a bit less harsh, you have the LLM flip back and forth between defense and prosecution of your work.

I think this should be the default setting, but it doesn't encourage engagement; the average customer will think the LLM is a mean jerk if it starts off like that.

slackfanabout 2 hours ago
Skill issue.
economistbobabout 1 hour ago
I do not recall ever reading such profanity in essays in thirty years of blogs. Then along comes Substack and leftists, and now people are writing curse words as if I want to read anything beyond that. Sure, they are free to write, and I am free to consider it lousy compared to what I read in previous years.
DeltaCoast43 minutes ago
The author said “god damn” once or twice whereas you just amended your comment to add a sentence containing “shit”. Not sure I understand your logic.
nancyminusoneabout 1 hour ago
"such profanity" is 3 "god damns" to you?