
Discussion (42 Comments) · Read Original on HackerNews

mfro•about 3 hours ago
I think you're misunderstanding the paradigm shift completely -- AI does not just generate code N(x) more quickly. It thinks N(x) faster, it researches N(x) faster, it tests N(x) faster. There are hundreds of tasks that you'll find engineers are offloading to AI every day. The major hurdle right now is actually pivoting LLMs from just generating code: integrating those tasks into workflows. This is why tool-use and agentic workflows have taken engineering by storm.
michaelchisari•about 2 hours ago
Debugging, sanity checking, testing, etc. are the best uses of LLMs. Much better than writing code.

Developers should write their own code and use LLMs to design and verify. Better, faster architecture and planning, pre-cleaned PRs and no skill atrophy or loss of understanding on the part of the developer.

jb1991•about 1 hour ago
Funny, I have the complete opposite impression after using Claude Code for a while. I would never trust it to design anything. Never again. But it can code pretty well given a very tight and limited scope.
michaelchisari•about 1 hour ago
To clarify, AI should not do the design itself. You develop the design in conversation with AI.

I come in knowing what I need to build and at least one idea or more of how it should be done. I present the problem, constraints, potential solutions, and ask for criticisms and alternatives. I can keep it as broad as possible or I can get more granular like struct layouts, api endpoints, etc. I go back and forth until there's an approach I prefer and then I code that approach.

> it can code pretty well given a very tight and limited scope.

It's wildly better at tight and limited scope than large scale changes but even then I would rather code it myself.

dyauspitr•16 minutes ago
They’re actually really good at both. Writing code and all the paraphernalia around it.
oytis•about 1 hour ago
The article addresses exactly this objection. Most importantly, it cites evidence that AI coding tools have a detrimental effect on software stability, which is basically the raison d'être of our profession. When AI produces more robust software and handles on-call shifts better than humans, I will consider programming done.
tptacek•37 minutes ago
I'm excited to read the first cogent piece making this point that doesn't devolve to gatekeeping, a detached and vaguely hostile professional software developer telling people with a newfound capability to solve practical problems for themselves with new software that they don't or shouldn't want the thing that they want, because whatever it is they come up with won't be "fit for purpose" until blessed by the guild, which has bylaws extrapolated from Brooks about the fundamental "limitations of LLMs".
oytis•25 minutes ago
Indeed, I am less sure about his argument about democratising software. The only problem in my own life that I solve with software is the problem of getting paid, so what do I know. If someone can generate a piece of code for their needs, and they don't risk harming anyone but themselves, then it's a great application of LLMs.
ekidd•6 minutes ago
The unfortunate reality is that a lot of software does have hard constraints. And a lot of these constraints are "gatekept" by regulators, compliance policies, insurance companies, etc. If someone slops together a medical record system, and leaks a bunch of PHI, there will be consequences, even in the US. Similarly, good luck getting insurance against cyber attacks without a SOC2 audit or equivalent.

I've had this conversation with managers in multiple organizations this year: "Yes, you could totally vibe code that instead of paying for a SaaS. But you have strict contractual and professional obligations about data security. Do you want to be deposed and asked, 'So, did you really just vibe code the system that led to the data leak? Did the vibe coders have any professional qualifications? Did they even look at the code?'"

Similarly, a backend server that handles 8 million users a day is expected to stay up.

Now, there are 10,000 things that have less demanding requirements. I'm actually really delighted that people are able to vibe code their own tools with minimal knowledge of software engineering! We have been chronically underproducing niche software all along.

But if your software already has on-call shifts (and SLAs, etc) like the GP, then I think you want to be smart about how you combine human expertise with LLMs.

cfloyd•8 minutes ago
Nailed it
imiric•41 minutes ago
> The major hurdle right now is actually pivoting LLMs from just generating code: integrating those tasks into workflows.

Funny, I thought that the major hurdle is improving accuracy and reliability, as it's always been. Engineering is necessary and useful, but it's a much simpler problem, which is why everyone is jumping on it.

paganel•21 minutes ago
> , it tests N(x) faster.

It does? You mean "it tests itself faster", which is not really a test now, is it?

cfloyd•6 minutes ago
I use one model for coding and another for writing tests, for that very reason. It's surprisingly good at TDD.
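A sketch of that two-model split. The `call_model` function is a hypothetical stand-in for whatever LLM client you actually use (stubbed here so the control flow is runnable); the model names and prompts are illustrative, not a real API.

```python
# Two-model TDD loop: one model writes the tests, a different model
# writes code to satisfy them, so the coder can't "grade its own homework".

def call_model(model: str, prompt: str) -> str:
    """Stub: replace with a real API call to your provider's SDK."""
    return f"[{model}] response to: {prompt[:40]}"

def tdd_round(spec: str) -> dict:
    # Step 1: the "tester" model turns the spec into failing tests.
    tests = call_model("tester-model", f"Write unit tests for: {spec}")
    # Step 2: a separate "coder" model implements against those tests
    # and is explicitly told not to modify them.
    code = call_model(
        "coder-model",
        f"Make these tests pass (do not modify them):\n{tests}",
    )
    return {"tests": tests, "code": code}

result = tdd_round("a function slugify(s) that lowercases and hyphenates")
```

The point of the split is the same as human code review: the model that benefits from weak tests is not the model that writes them.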
brcmthrowaway•about 2 hours ago
True. Knowledge workers are cooked.
pingou•about 2 hours ago
Not sure why you are downvoted but I agree. Additionally, perhaps LLMs are just like another higher programming language as the author said, and they still need someone to steer them.

I'm sure it was very difficult to program in machine code, but if now (or soon) anyone can just write software using an LLM without any sort of learning, it changes everything. LLMs can plan and create something usable from simple instructions or ideas, and they will only get better.

I think LLMs will be (and already are) useful for many more things than programming anyway.

smartmic•about 2 hours ago
> I'm sure it was very difficult to program in machine code, but if now (or soon) anyone can just write software using an LLM without any sort of learning, it changes everything. LLMs can plan and create something usable from simple instructions or ideas, and they will only get better.

Did you read the "Power to the People?" section? In it, the author dismantles your thesis with powerful, highly plausible arguments.

mfro•about 2 hours ago
While I think the author is entirely right about 'natural language programming' in the current day, if LLMs (or some other AI architecture) continue to improve, it is easy to believe touching code could become unnecessary for even large projects. Consider that this is what software co. executives do all the time: outline a high level goal (software product) to their engineering director, who largely handles the details. We just don't yet know if LLMs will ever manage a level of intelligence and independence in open-ended tasks like this. And, to expand on that, I don't know that intelligence is necessarily the bottleneck for this goal. They can clearly tackle even large engineering tasks, but often complaints are that they miss on important architectural context or choose a suboptimal solution. Maybe with better training, context handling, documentation, these things will cease to be problems.
pingou•about 1 hour ago
I have indeed missed the arguments that are so powerful that they dismantle my thesis.

Would there even be a debate in the tech community if such unassailable arguments existed? The author is entirely entitled to his opinion, just as I am allowed to disagree with him (not sure why I am also downvoted). The good thing is, if I'm right, we will see it in less than 10 years.

fragmede•about 2 hours ago
> they will only get better.

I don't buy that it's true. The "only" part, anyway. Look at how UX with software has evolved. This is gonna be an old-man-yells-at-clouds take, but before smartphones, there were hotkeys. And man, you could fly with those things. The computers running things weren't as fast as they are today, but you could mash in a whole sequence thru muscle memory and just wait for it to complete. Now, you have to poke at your phone, wait for it to respond, poke at it some more. It's really not great for getting fast at it. AI advancement is going to be like that. Directionally it will generally be better, but there's going to be some niche where, y'know what, ChatGPT-4o really had it in a way that 5.5 does not. (Rose-colored glasses not included.)

dgellow•about 2 hours ago
Claude connected to Postgres (read-only, obviously) and Datadog MCP servers, in addition to having access to the codebase, can debug prod issues so quickly. That's easily a 10x win compared to a senior engineer doing the exact same debugging steps. IMHO that's where the actual productivity boost is.
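For concreteness, a minimal sketch of what such a setup might look like in a Claude Code `.mcp.json`. The Postgres entry uses the reference read-only MCP server; the Datadog entry is a placeholder name, and the connection string and credentials are entirely hypothetical — adapt all of it to whatever servers your org actually runs.

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://readonly_user:REDACTED@db.internal:5432/prod"
      ]
    },
    "datadog": {
      "command": "npx",
      "args": ["-y", "example-datadog-mcp-server"],
      "env": { "DD_API_KEY": "REDACTED", "DD_APP_KEY": "REDACTED" }
    }
  }
}
```

The read-only database credential is the important design choice here: the agent can inspect prod state and correlate it with logs and metrics, but cannot mutate anything.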
kelnos•about 1 hour ago
>> Within just this group the ratios between best and worst performances averaged about 10:1 on productivity measurements and an amazing 5:1 on program speed and space measurements!

> (although I’m personally skeptical of the “10x programmer” concept, the software industry overall does seem to accept it as true)

To be fair, this statement from Brooks doesn't entirely match with the "10x programmer" we talk about. My take on it is when someone says "10x programmer" today, they mean 10x more productive than the average, not 10x more productive than the worst. Brooks' statement is about the latter. If he'd looked at the difference between average and best, I would assume you'd get something more like a 2x or 4x programmer.

atleastoptimal•about 2 hours ago
"LLM's Aren't Going to Fundamentally Change Software Development" Says Increasingly Nervous Man For Seventh Time This Year
slopinthebag•about 1 hour ago
I didn't get the sense that the author is nervous. What I tend to see are people who are nervous that going all-in on LLM workflows might not have the payoff they are expecting, and are becoming increasingly fanatical as a result.

Just one more harness bro. Just one more agentic swarm. Please bro, just one more Claude Max subscription. Please bro.

aspenmartin•about 1 hour ago
You say this as though performance has not followed a very clear and extremely rapid improvement in a startlingly short amount of time.

You’re definitely right that people adopt agentic workflows and are disappointed or worse, but the point is the disappointment has already reduced substantially and will continue to do so. We know this because we know the scaling laws, and also because learning theory has been around for many decades.

cyclopeanutopia•14 minutes ago
Perhaps you are confusing performance with instability?
paganel•19 minutes ago
> very clear and extremely rapid improvement in a startlingly short amount of time.

We're almost 6 months into all this AI-code madness and I've yet to see that "rapid improvement" you mention. As in software products that are genuinely better compared to 6 months ago, or new software products (and good software products at that) which would have not existed had this AI craze not happened.

slopinthebag•23 minutes ago
Yes but we don't know the shape of the curve and where we are on it.
dabedee•about 2 hours ago
It was a welcome change to read a deliberate, well-thought-out, and well-written article that tries to bring readers through a rational journey. Thank you.
mwaddoups•about 1 hour ago
This was a great read - thanks so much for taking the time to write this. Well researched and thought provoking. Long live the em dash.
smartmic•about 3 hours ago
If you're interested in Fred Brooks's "No Silver Bullet," I also explored it in the context of LLMs: https://smartmic.bearblog.dev/no-ai-silver-bullet/
js8•4 minutes ago
In fact, AI might be the opposite of a managerial "silver bullet". The more we automate what is repetitive, the less predictability remains overall. Things can get more productive on average, but managing it becomes harder, as productivity amplifies risks.
ilia-a•about 2 hours ago
Even without writing code, LLMs are a huge help: analyzing code, doing code reviews, documenting code, etc. Without writing a single line of "code", LLMs hugely speed up development and take away the annoying/boring work.
trwhite•about 2 hours ago
A well researched and written piece
slopinthebag•about 1 hour ago
I really enjoyed this article; it's well written and does a good job of dismantling the flawed arguments of language-model maxis while presenting a more realistic outlook on where we are now and where we are going.

I think the biggest benefit language models have provided me is in the auxiliary aspects to programming: search, debugging, rubber ducking, planning, refactoring. The actual code generation has been mixed.

I had an LLM try and implement a fairly involved feature the other day, providing it with API spec details, examples from other open source libraries, and plenty of specifications. It's also something readily available in training data as well, but still fairly involved.

On first glance it looked great, and had I not spent the time to investigate deeper I would have missed some glaring deficiencies and omissions that render its implementation worthless. I am now going back and writing it by hand, but with language models providing assistance along the way, and it's going much better.

I think people are being unrealistic by thinking that the usage of language models in their side projects represents something broader. It's almost the perfect situation for language models: small, greenfield code bases, no review, no responsibility, and no users. It goes up on GitHub with a pretty readme, and then off to social media where they post about how developers are "cooked". It's just not a very realistic test.

In the end we will probably see large productivity increases by integrating language models, but they won't be replacing developers but rather augmenting them.

stackghost•about 3 hours ago
Let's actually not talk about LLMs.

I honestly couldn't force myself to finish yet another blog post about how "we're not yet sure what impact LLMs will have on society" or whatever belabored point the author was attempting to make.

"Some random person's take on LLMs" was maybe interesting in 2024. Today it is not even remotely interesting.

There are a gazillion more interesting things happening today that ought to be of interest to the median HN reader. Can we talk about those instead?

jubilanti•about 1 hour ago
I'm confused. If you don't want to talk about LLMs then why didn't you just flag the post and move on? Submit something interesting, upvote and comment on interesting posts, instead of feeding the engagement on this thread.

It sounds like you actually do want to talk about how much you don't want other people to talk about LLMs.

stackghost•43 minutes ago
Oh, I definitely flagged the post also.
mettamage•about 3 hours ago
I am an AI engineer and I honestly agree. Talking about LLMs feels like the new crypto, with some nuances (i.e. many innovative things being possible and done with LLMs whereas crypto innovations were… few and far between).
dijksterhuis•about 2 hours ago
it’s felt like the new crypto to me for about 2-3 years now.

i was doing an ML Sec phd a year or two before all this hype took off. i took one of the OG transformer papers along to present at our official little phd reading group when the paper was only a few months old (the details of this might be a bit sketchy here, was years ago now).

now i want nothing to do with the field in any way shape or form. i’m just done.

edit -- i got incredibly angry after writing this comment. pure hatred and spite for all the charlatans and accompanying bullshit.

eiekeww•21 minutes ago
Sadly investing is all about making money… you should be more pissed at the naive people who have contributed to the effort and in particular those who don’t care about truth, but about cash flow potential.
keybored•about 2 hours ago
Tedious LLM discourse isn’t aimed at AI engineers. It’s doomscrolling fodder for regular programmers.
AIorNot•about 2 hours ago
The problem with this article is that he is right, of course, but only for right now. There is no reason to believe that future AI platforms won't be able to review code themselves and manage some aspects of themselves with minimal human oversight. Yes, we likely will always need a few experts.

I'm reminded of this scene from the Matrix: https://www.youtube.com/watch?v=cD4nhYR-VRA where the older wise man discusses society's reliance on AI.

"Nobody cares how it works, as long as it works"

We're done. I for one welcome our new AI Overlords, or, more accurately, still welcome the tech-bro billionaires who are pulling the strings.

frizlab•about 2 hours ago
> There is no reason to believe that future AI platforms won't be able to review code themselves and manage some aspects of themselves with minimal human oversight

There are, IMHO, fewer reasons to believe they will be able to do that rather than not, though.

CamperBob2•about 1 hour ago
LLMs became much better at both reviewing and writing code over the last 12-18 months. Did you?

The current state of the art is irrelevant. Only the first couple of time derivatives matter.

paulhebert•39 minutes ago
> Did you?

I would say I got better at both of those over the last 12-18 months. Are your skills static?

slopinthebag•about 1 hour ago
> There is no reason to believe that future AI platforms won't be able to review code themselves and manage some aspects of themselves with minimal human oversight

Really? That's like someone during an economic boom saying "The economy is the worst it'll ever be. There is no reason to expect things to not continue to improve".

keybored•about 2 hours ago
I have no stake in Fred Brooks. But No Silver Bullet seemed to be taken as gospel on this board. Sufficiently productivity-enhancing technology? Gimme a break man. Maybe you’ll get a 30% boost. Not a 10X boost.

Until recently. dramatic pause

And then AI happened.

taormina•about 2 hours ago
Great! So all of this 10x boosting is visible in which economic indicator?
slopinthebag•about 1 hour ago
Debt.
gizajob•about 3 hours ago
Actually can we not thanks.
cadamsdotcom•about 2 hours ago
> If its two empirical premises—that the accidental/essential distinction is real and that the accidental difficulty remaining today does not represent 90%+ of total—are true, then the conclusion which rules out an order-of-magnitude gain from reducing accidental difficulty follows automatically.

The article goes on to assume there’s no 10x gain to be had but misses one big truth.

Needing to type the code is an enormous source of accidental difficulty (typing speed, typos, whether you can be arsed to put your hands on the keyboard today…) and it is gone thanks to coding agents.