Discussion (217 Comments). Read Original on HackerNews.
Edit: 9 babies → 9 mothers
It's "9 women can't make a baby in one month".
It still takes roughly nine months to make a human baby, regardless of how many women or babies are involved!
we learn by doing
If you're not coding anymore, but using AI tools, you're developing skills in using those AI tools, and your code abilities will atrophy unless exercised elsewhere.
I do think that these pieces sometimes smuggle in a nostalgic picture of how engineers "really" learn which has only ever been partly true.
Also could be shortened to "IA, not AI", and gets even more fun when you translate it to Spanish: "AI, no IA".
To maintain relevance, we must find common ground. There is no true objectivity, because every sign must be built up from an arbitrary ground. At the very least, there will be a conflict of aesthetics.
The problem with LLMs is that they avoid the ground entirely, making them entirely ignorant of meaning. The only intention an LLM has is to preserve the familiarity of expression.
So yes, this kind of AI will not accomplish any epistemology; unless of course, it is truly able to facilitate a functional system of logic, and to ground that system near the user. I'm not going to hold my breath.
I think the great mistake of "good ole fashioned AI" was to build it from a perspective of objectivity. This constrains every grammar to the "context-free" category, and situates every expression to a singular fixed ground. Nothing can be ambiguous: therefore nothing can express (or interpret) uncertainty or metaphor.
What we really need is to recreate software from a subjective perspective. That's what I've been working on for the last few years... So far, it's harder than I expected; but it feels so close.
If you asked 100 Americans what this aphorism means, I strongly doubt a single one could capture McLuhan's original meaning.
I think it means something like we're trapped in the constraints of the medium. Tweets say more about the environment of twitter than whatever message happened to be sent.
But I think I'm off on that; I'll look this person up and find out!
For "the medium is the message", "medium" refers to any tool that acts as an extension of yourself. TV is an extension of your community, even things like light bulbs (extends your vision) are included in his meaning.
McLuhan argued that all forms of media like that carry a message that's more than just their content. "The message" in that argument refers to the message the medium itself brings rather than its content. For example, the airplane is "used for" speeding up travel over long distance, but the message of its medium itself is to "dissolve the railway form of city, politics, and association, quite independently of what the airplane is used for."
You can see it happening via online media that extend ourselves across the internet. Think of how, once easy video creation via Youtube became uniform, web comics stopped being a popular medium for comedy online. It's not like the web comics faded because they got worse; it's that they faded into a niche format because people didn't want to communicate via static images anymore. Or how, once short form videos on TikTok got big, you saw other platforms shift to copy the paradigm. McLuhan's point is that it's not just the content of those short form videos that matters; it's the message of the format itself. People's attention spans grow shorter because of the format, and before too long, we saw the tastes and expectations of the masses change. Reddit's monosite-with-subcommunities format and dopamine-triggering voting feedback mechanism were its message more than any actual content posted there, and it's why traditional forums are niche and dwindling.
If you want to get a pretty good understanding of it, just read the first chapter of his book Understanding Media. It's short and relatively straightforward.
Isn't it an abstraction similar to how an engineering or product manager is? Tell the (human or AI coder) what you want, and the coder writes code to fulfill your request. If it's not what you want, have them modify what they've made or start over with a new approach.
But it seems we are heading there. For simple stuff, if I write a very clear spec, I can be almost sure that every time I give that prompt to an AI, it will work without error, using the same algorithms. So the quality of the prompt is more valuable than the generated code.
So either way, this is what I focus my thinking on right now, something that was always important and, with AI, now even more so: crystal-clear language describing what the program should do and how.
That requires enough thinking effort.
There are skills we're losing that are probably ok to lose (e.g. spatial memory & reasoning vs GPS, mental arithmetic vs calculators), primarily because those are well bounded domains, so we understand the nature of the codependency we're signing up for. AI is an amorphous and still growing domain. It is not a specific rung in the abstraction hierarchy; it is every rung simultaneously, but at different fidelity levels.
I'd argue these are not at all OK to lose. You live in an earthquake zone? You sure better know which way is north and where you have to walk to get back home when all the lines are down after a big one. You need to do a quick mental check if a number is roughly where it should be? You should be able to do that in your head.
There might be better examples that support your point more effectively, e.g. cursive writing.
Nevermind the fact that these tools are nowhere near as capable as their marketing suggests. Once companies and society start hitting the brick wall of inevitable consequences of the current hype cycle, there will be a great crash, followed by industry correction. Only then will actually useful applications of this technology surface, of which there are plenty. We've seen how this plays out a few times before already.
Or without the ability to use a library from GitHub / their package manager.
It doesn't feel THAT much different to me.
"Engineer" as a term might drift. There are "web developers" that can only use webflow / wordpress.
Engineers are accredited and in some countries even come with a title.
This is a pet peeve of mine, so while I understand what you mean, I will challenge you to come up with a strict definition that excludes software engineering!
And since I've had this discussion before, I'll pre-emptively hazard a guess that the argument boils down to "rigor", and point out that a) economic feasibility is a key part of engineering, b) the level of rigor applied to any project is a function of economics, and c) the economics of software projects is a very wide range.
Put another way, statistically most devs work on projects where the blast radius of failure is some minor inconvenience to like, 5 users. We really don't need rigor there, so I can see where you're coming from. But on the other extreme like aviation software, an appropriately extreme level of rigor is applied.
Of course I want the best of the best who are top notch and rigorously trained working on mission critical software.
"Structured, mature, legally enforced, physically grounded standards based approach to the construction of repeatable, reliable, verifiable, artifacts under stable (to the degree that matters) external constraints".
Some niche software development (e.g. NASA/JPL coding projects with special rules, practices, MISRA etc) can look like that.
99.9% of the time, though, software "engineering" is an ad hoc, mix-and-match, semi-random, half-art half-guess process with always-changing requirements and environments, performed by unlicensed practitioners, and regulated only in some minor aspects of its operation (like GDPR, or accessibility requirements), if that.
Where I work, there are plenty of non licensed engineers, but we pay a 3rd party agency for regulatory approval. The people who work for that agency are licensed engineers. Their expertise is knowing the regulations backwards and forwards.
Here's what I think is happening within industry. More and more work done by people with engineering job titles consists of organizing and arranging things, fitting things together, troubleshooting, dealing with vendors, etc. The reason is the complexity of products. As the number of "things" in a product increases by O(n), the number of relationships increases by O(n^2), so the majority of work has to do with relationships. A small fraction of engineers engages in traditional quantitative engineering. In my observation, the average age of those people is around 60, with a few in their 70s.
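The quadratic growth the parent describes is just the pairwise-relationship count; a quick back-of-the-envelope sketch (the component counts here are made up for illustration):

```python
# Number of potential pairwise relationships among n components: n choose 2.
def pairwise_relationships(n: int) -> int:
    return n * (n - 1) // 2

for n in [10, 100, 1000]:
    print(n, pairwise_relationships(n))
# Components grow 100x (10 -> 1000), but potential
# relationships grow roughly 10,000x (45 -> 499,500),
# so "fitting things together" dominates the work.
```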
Will you have AI at the cost of a Slack subscription? At the cost of a teammate? Or will it not be available, and you'll have to hire Anthropic workers with AI access?
In a way, this is less of a cost issue than the fact that some/many engineers do not seem to be willing or able to host things themselves anymore and will happily outsource every part of their stack to managed services, be it CDN, hosting, databases, etc. I don't know why that's not more alarming than the LLMs.
"Couldn't", or "wouldn't"? Early in my career I'd be happy doing anything basically, not much I "couldn't" do, given enough time. But nowadays, there is a long list of things I wouldn't do, even if I know I could, just because it's not fun.
This is not a binary.
128 GB unified memory, Nvidia chip and ARM CPU for just around 3k€ net. They easily push ~400 input and ~100 output tokens per second per device on, say, gpt-oss-120b. With two devices in a cluster, that's enough performance for >20 concurrent RAG users or >3 "AI augmented" developers.
And they don't even pull that much power.
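The capacity claim above can be sanity-checked with rough arithmetic; the per-user token budgets below are my assumptions, not figures from the comment:

```python
# Rough throughput budget for a two-device cluster, using the
# figures quoted above (~100 output tokens/s per device).
devices = 2
output_tps = 100 * devices   # aggregate output tokens per second

# Assumed per-user demand (illustrative): a RAG user reading
# ~8 tok/s of answers; an "AI augmented" developer consuming
# ~60 tok/s of generated code.
rag_user_tps = 8
dev_tps = 60

print(output_tps // rag_user_tps)  # -> 25 concurrent RAG users
print(output_tps // dev_tps)       # -> 3 developers
```

Under these assumptions the ">20 users / >3 developers" numbers are plausible, though real concurrency also depends on prompt-processing (input) throughput and batching behavior.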
Lots of people use firebase, supabase etc.
Many people's jobs are centered around using Salesforce
It all makes me uncomfortable- I want to be able to work without internet. But it's getting more difficult to do it
I’m sure you can see the difference between a garbage collector and a nondeterministic slop generator
But it feels good to equivocate, so here we are.
Ollama/llamafile/vllm/llama.cpp are free. Qwen/kimi/deepseek are free. Pi.dev/OpenCode are free. If you're using a SaaS AI subscription that's fine, but that's hardly the only option.
is doing a lot of work to avoid engaging with the actual argument.
1) you use it to help write code that you still “own” and fully understand.
2) you use it as an abstraction layer to write and maintain the code for you. The code becomes a compile target in a sense. You would feel like it’s someone else’s code if you were asked to make changes without AI.
I think 2) is fine for things like prototypes, examples, references. Things that are short lived. Where the quality of the code or your understanding of it doesn’t matter.
I think people get into trouble when they fool themselves and others by using 2) for work that requires 1). Because it’s quicker and easier. But it’s a lie. They’re mortgaging the codebase. And I think the atrophy sets in when people do this.
If all you do is point your LLM at your Jira tickets, then you are failing to be an engineer. I mean, if that's all you are doing, then who needs you? One of the most important things to learn is what the right questions to ask are and what the right decisions to make are when guiding the LLM, as well as the ability to judge the output it produces.
Even my colleagues who cheated their way through uni still needed critical thinking to do that and get away with cheating without being caught.
People might hate this but being a good cheat requires a lot of critical thinking.
The only thing worth asking people is: what have you produced? Within this one question is so much detail that any other artifact is moot.
It's not really that hard to get a degree in engineering if your only goal is the degree itself.
(Take home) projects are easier than ever thanks to AI. In the past, you at least had to track down some person to do the work for you.
You are, of course, right that the idea that someone could finish a serious engineering degree without being able to think is ridiculous.
So what does that tell me?
Better yet, for about 30% of them, having the LLM produce the slop would have yielded better outcomes; having them slop something out themselves nets terrible slop. But at least I can reshape it, because even the LLM won't do something that stupid.
--
A lot of students (and developers out there too) are able to follow instructions and pass the test.
A smaller portion of them are able to divide a task up into "this is what I need to do to accomplish it".
Even fewer of them are able to work through the process of identifying the cause of a problem they haven't seen before and work through to figure out what the solution for that problem is.
--
... There are also a lot of people out there that aren't even able to fall into the first group without copying and pasting from another source. I've seen the "stack sort" at work https://xkcd.com/1185/ https://gkoberger.github.io/stacksort/ professionally. People copying and pasting from Stack Overflow (back in the day) without understanding what they're writing.
Now, they do it with AI. Take the contents of the Jira description, paste it into some text box, submit the new code as a PR, take the feedback from the PR and paste it back into the box and repeat that a few times. I've seen PRs with "you're absolutely correct, here are the updates you requested" be sent back to me for review again.
This is not a new thing. AI didn't cause it, but AI is exacerbating the issue in professional programming: the people who are not much more than some meat between one text box and another (yes, I'm being a bit harsh there), and the people who need instructions but don't understand design, become more "productive" while overwhelming the more senior developers.
... And this also becomes a set of permanent training wheels on developers who might be able to learn more if they had to do it. That applies at all levels. One needs to practice without training wheels and learn from mistakes to get better.
That's why they're relaxed - it's just switching from one sort of unreliability to a slightly different flavour
After 5 hours or so of doing this planning, I'm EXHAUSTED. I never was exhausted in this manner from programming alone. Am I learning something new? Feels like management. :)
The strange sorts of errors and reasoning issues LLMs have also require a vigilance that is very draining to maintain. Likewise with parsing the inhuman communication styles of these things…
But maybe pacing/procrastination might be relief valves?
I am doing it again using an LLM. Legitimately, things that would have taken weeks are now done overnight. I still have to look at the code, at the generated C output, still have control over the architecture to make it easy for me and the LLM to work with in the future, etc.
Is this replacing my thinking? I am not sure. I suppose I would have learnt a lot more about compilers/transpilers had I persevered through it for months with manual writes and rewrites, but I would have been working solely on this. Instead, I also had some time to write custom NFS server support for a custom filesystem in Golang.
Why would you, as a worker, bother doing everything pristinely? There's no reward for you. The management of the company will fire you the day they see fit anyway. Not to mention companies tend to give higher salary raises to those who leave and later return - a true slap in the face of 'loyalty'.
“AI suggested we do it that way”
And we’ve been degrading our systems rapidly for last several weeks. We’ve decided to pause and reflect and change how we use AI on tasks that are not dead simple.
I learn so much arguing with it.
It's only your opinion that is provably false.
First, there are still people who don't like high level languages and don't use them, because they find assembly better.
Second, I personally work in a field where I need to consult the source of truth, the actual binary, and not the high level source code - precisely because the high level of abstraction is obscuring the real mechanics of software and someone needs to debug and clean up the mess done by "high level thinkers".
High level programming languages are only an illusion (albeit a good one) but good engineers remember that illusion is an illusion.
I can tell you this, the person you're replying to comes from the overwhelming majority/generality. You, on the other hand, are that one guy.
Of course even my comment is a bit general. You're not "one" guy literally. But you are an extreme minority that is small enough such that common English vernacular in software does not refer to you.
And putting aside the vanishing skill, there is also an issue of volume.
Also, if you need to control performance, you still need to know how CPU cache and branch prediction work, both of which exist at the abstraction level of assembly.
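Why branch prediction leaks through every abstraction layer can be illustrated without touching assembly; here's a toy simulation of a saturating 2-bit predictor (a simplified model, not any real CPU's scheme), showing why the same branch is cheap on sorted data and expensive on shuffled data:

```python
import random

def predictor_hit_rate(taken_sequence):
    """Simulate a saturating 2-bit branch predictor; return its hit rate."""
    state = 2  # states 0-1 predict not-taken, 2-3 predict taken
    hits = 0
    for taken in taken_sequence:
        predicted = state >= 2
        if predicted == taken:
            hits += 1
        # strengthen/weaken the prediction based on the actual outcome
        state = min(3, state + 1) if taken else max(0, state - 1)
    return hits / len(taken_sequence)

random.seed(0)
data = [random.randint(0, 255) for _ in range(10_000)]
branches = [x >= 128 for x in data]   # "if (x >= 128)" per element

print(round(predictor_hit_rate(branches), 2))          # ~0.5: near-random
print(round(predictor_hit_rate(sorted(branches)), 2))  # ~1.0: one transition
```

The predictor is nearly perfect on the sorted sequence and close to coin-flip on the shuffled one; on real hardware that difference shows up as mispredict stalls, which is exactly the kind of mechanic a high-level language hides.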
I wonder if this sort of trend will continue?
(A competent assembly programmer can go miles around a competent high-level programmer, that's still true in 2026...)
Personally, I really enjoy using AI. I have created my own cascade workflow to stop myself from “asking one more question”. Every session is planned. Claude and Codex can be annoying as hell (for different reasons). Neither is sufficiently smart for me to trust them. I treat them as junior devs who never get tired, know a lot of facts but not necessarily how to build.
I have been an ardent opponent of AI since it came up a few years back. I refuse to vibe code and I refuse to let AI think for me. I won't be an AI controller.
However, two days ago I found a nice, personal use case for AI: Advanced writing checks (grammar checks, mostly, and some rewordings) in Word using a rather expensive app.
I write a lot of US English, despite it not being my native language, and AI is now helping me to write much better than I did before. Also, I discovered that I am much worse at writing Danish than I believed. In fact, I think I am better at writing US English than at Danish; that's a bit surprising, as I am a Dane.
No AI was used during the writing of this entry, but I dearly love the writing tool already! I have heard similar stories from friends who say that AI is very good at summarizing long documents and stuff like that.
So, I personally think that AI CAN elevate one's thinking. I am learning more about Danish and US English grammar every day, now, than I did during a decade before. Writing is suddenly so fun because it involves growing my skills.
It IS a waste of time if your only goal is the creation of the plan. However, one must be very self-aware of their goals because if one of the unacknowledged ones is to retain the ability to create plans, then you must continue creating plans yourself.
‘AI’ doesn’t exist, and LLMs have vanishingly narrow legitimate justifiable use cases. Any output from one is intrinsically, explosively, imprecise, and can’t be trusted to be build upon without specialist treatment. I’m yet to identify any application of a LLM which can rationally be mistaken for intelligence.
Anyone who persists in referring to LLMs as ‘AI’ is either betraying they don’t understand what they’re talking about, or they’re invested too deeply in an active grift.
What’s the opposite of AI psychosis? Burying your head in the sand? Because anyone who could write this unironically today is certainly afflicted.
It’s no different to religions or economics.
Let’s say a person has 10 units of learning per week. Is the author actually claiming that that person must not deliver any results beyond their 10 units?
It makes some sense to have say 20 units of results and prioritize which ones to fully comprehend.
I suspect APIs / libraries / languages / platforms will have more churn due to AI. New platform new system need to learn. Once every 5 years might become every year or even more frequent. That would be a sort of inflation of knowledge and skills. It would affect the decision making about how to spend one’s 10 units per week.
Beyond that, if that's all you do, you are basically proving you're replaceable. If you're smart, you'll reallocate intellectual capacity that was freed up by A.I. onto something A.I. can't do today.
This is… not how humans work? If you have the time and energy to learn ten things, and then spend time babysitting a random number generator to produce evidence of 10 more units of work, you’re paying an opportunity cost compared to someone who spends the time learning an eleventh thing. You can argue who has more short term value to a company… but who is the wiser person after a thirty year career?
shows both groups using AI differently. Hard to continue reading the article that excludes your group entirely.
"Coding in the Red-Queen Era" https://corecursive.com/red-queen-coding/
> In talking to engineering management across tech industry heavy-weights, it's apparent that software engineering is starting to split people into two nebulous groups:
> The first group will use A.I. to remove drudgery, move faster, and spend more time on the parts of the job that actually matter i.e. framing problems, making tradeoffs, spotting risks, creating clarity, and producing original insight.
There is already research literally showing that on average it is a net loss on focus, learning and critical thinking skills.
It's the feeling of having done a lot of thinking for themselves without having actually done so.
Daily.
I think only twice have I agreed with it.
Like the way it will always give you code if you ask, even if the code is crap, it will always give you a design if you ask. Won't be a good design, though.
I don't know, I don't doubt you're more productive. Broadly so. But the depth and rigor I think may be missing, as the article suggests.
As an aside, I suppose it's a good time for those nearing the end of their careers, those who no longer need to learn, to cash out and go all in on AI.
Nearly certainly. Just turns out that depth and rigour matters a lot less than I would've hoped. Depressing, really.
When cars first appeared it took quite some knowledge and experience to even get the things started, let alone to keep them running. Modern cars are far better in all respects and as a result modern drivers often don't have a clue what to do when the 'Check Engine' light appears. More recent cars actively resist attempts by their owners to fix problems since this is considered 'too dangerous' - which can be true in case of electric cars. That's the cost of progress, it is often worth it but it does make sense to realise what it would take to go back in time to the days when we coded our software outside in the rain, uphill both ways with only a cup of water to quench our thirst. In the dark. With wolves howling in the woods. OK, you get my drift.
Will there be something like 'software preppers' who prepare for the 'AIpocalypse' by keeping their laptops in shielded containers while studiously chugging along without any artificial assistance. Probably. As a hobby, at least, just like there are 'survivalist preppers' who make surviving some physical apocalypse their goal in some way or other.
But I can juggle 2 workstreams in a day easily, and I can trivially swap projects in and out of the "hot path" as demanded by prioritization or blockers; before LLM coding both of those were a lot harder.
If not the tool, then who's to blame? It's very clear that people who rely on LLMs for coding lose their skills. Just because you have a lot of parallel tasks going at once doesn't mean you're producing quality work. Who's reviewing it? Are you just blindly trusting it?
IMO, teams need to agree on a set of principles for AI usage, with concrete examples of where and how to use it. Perhaps it's much more useful in parts of your system that are faster-evolving and don't have too much core logic, like testing frameworks etc.
Simply discarding it as 'yet another tool' is part of the problem.
That's exactly what is happening now. I wouldn't even call it an analogy, I'd call it an example of where AI is already having a baleful effect. FWIW I don't disagree with the article's thesis or the examples: yes, absolutely, if used well AI can elevate engineers in exactly this way and it behooves us engineers to use it in that way. We can also say that the deliberate design of the AI systems we are constantly being exhorted to use inclines them towards work-slop and abdicated thinking.
Yet nothing has actually changed.
Becoming dependent on a technology is to be expected. I'm pretty sure 95% of us are dependent on packaged meat and don't know how to hunt.
That's substantively different than going from assembly to C.
I remember some of my earlier issues with various languages. `Dim A, B as Int`, in VisualBasic one of them is an Int the other is a Variant, in REALbasic (now Xojo) they're both Int. `MyClass *foo = nil; [foo bar];` isn't an error in ObjC because sending a message to nil is a no-op.
Or how, back when I was a complete beginner, if I forgot a semicolon in Metrowerks, the compiler would tell me about errors on every line after (but not including!) the one where I forgot the semicolon.
"Docs say", "Compiler says", "StackOverflow says", "Wikipedia says"; either this tool is good enough or it isn't. It not being good enough means we're still paid to do the thing it can't do; that only stops when nobody needs to, because it can do the thing. The overlap, when people lean on it before the paint is dry, is just a time for quick-and-dirty. LLMs are in the wet-paint/quick-and-dirty phase. You could get stuff done by copy-pasting code you didn't understand from StackOverflow, but you couldn't build a career from that alone. LLMs are better than StackOverflow, but still not a full replacement for SWeng, not yet.
I mean, right now we're at the stage where any user can get AI to make them software to solve very specific things - almost no technical knowledge needed.
My prediction is that software engineers will be rendered obsolete first. After that, small businesses will disappear, as users can simply get those products/services directly via AI.
For the new prompt engineers I suggest the following title:
...or as I interpret it your brain grows only when it does things that are difficult.
If you remove the difficulty, it will atrophy into a hum of a mindless chit-chat.
Engineering the data structures and control flows from scratch is completely different from asking an LLM to scaffold them for you.
If you never walk, your legs get weak, you gain weight, your aerobic system loses capacity, and you lose the ability to walk. You don't need it, you say, because you have your car and your mobility scooter and you'll always have these things. Your crutches don't make you weaker, you can still do everything the walkers can do, you say.
Good luck with the nature hike!
I don't give a shit about this career. I don't give a shit about engineering. I despise every second of it. There's nothing to aim for other than being a drone that does whatever is asked of it.
If AI can reduce my mental workload, why wouldn't I want to delegate everything over to it so I can save my faculties for what I truly enjoy? For the art of a worthless craft?
For you, it seems that you are not cut out for it, judging from what you say.
So yes, use LLMs.
And I don't have the personality for running a start-up or any company, unfortunately. I'm extremely risk-averse and withdrawn. If I really had no other choice, I'd probably have to budget in a ton of... chemical helpers (stimulants).
It's changing the way we think, and reason.
Speaking as a BE focused Go developer, I'm now working with a typescript FE, using AI to guide me, but it scares the shit out of me because I don't understand what it's suggesting, forcing me to learn what is being presented and the other options.
No different to asking for help on IRC or StackOverflow - for decades people have asked and blindly accepted the answers from those sources, only to later discover that they have bought a footgun.
The speed at which AI is able to gather the answers from StackOverflow coupled with its "I know what I am talking about" tone/attitude does fool people at first, just like the over-confident half assed engineers we have always had to deal with.
Unlike those human sources, we can forcefully pushback on AI and it will (usually) take the feedback onboard, and bring the actual solution forward.
Thus proving the engineer steering it still has to know what they are doing/looking at.
University degrees certainly used to teach computing fundamentals without you having a computer in front of you.