Discussion (140 Comments). Read Original on HackerNews
> Care deeply about your craft. Refactor code until it is clear and elegant. Write good documentation for other humans to read. Have the courage to go slowly, especially when everyone else is telling you that you need to go fast and cut corners.
Outside of the bit on avoiding cutting corners, this advice seems like a straight path towards unemployment in a few years. The implication is that "your craft" is writing and polishing code, a skill which seems to be increasingly antiquated in favor of higher level system design. Who is going to read your carefully crafted documentation lol? The agents who replace you?
If a tree falls in the forest...
> until it is clear and elegant
New grads who spend weeks refactoring code are going to get lapped by new grads who ship something and iterate. There's just a faster feedback loop now.
Hiring those people is going to be the absolute most dangerous possible thing you can do to a company.
Maybe some day we can just totally give up the technicals to the machine, but I strongly doubt it. Every single model is both brilliant and a fool, no matter how frontier it is.
Yes, the feedback loops are faster. But you need to assess what's actually technically happening. Someone does. Maybe you offload the actual thinking up the chain, delegate taste, understanding, and judgement only to people up the chain, and make them all go mad dealing with the endless slopcoding they are being hit with. But just as bad, that junior engineer is robbing themselves too. Maybe they get away with not looking, but they sure aren't going to learn a lot.
I'm missing the link but there was a great submission maybe a month ago about two hypothetical grad students, I think in astronomy, where one failed and flailed and did things largely the old fashioned way, and the other used AI to get it done. The advisor couldn't really tell who was doing what. But at the end, one student had learned & gained wisdom, and the other had served as a glorified relay between the AI and the advisor and learned little. Same work output, but different human outcomes.
Junior engineers are really not that cheap. Relative to your capabilities you are not a bargain. You take a ton of valuable time from other people. If a company is hiring you, they either are truly fools lacking basic understanding, or they are in on the bargain that they want you to be getting better, are testing to see if you can become more useful. Sure it's great to show up and have impressive output, but you need to actually be learning and growing. You need to be participating in the feedback loop actively. Or you will be lapped by people who care & think like engineers.
I hear you, but here's the thing: the companies don't give a shit about software quality any farther than it takes to keep you coming back as a customer. And it's actually been like this for a long time. They're going to hire people who can ship who-cares-how-buggy software as fast as possible. It's better for the bottom line.
And that pains my soul and pains me as a consumer (because we already had to put up with too much crap software before genAI started producing it in reams), but there's very limited money in the kind of quality you're talking about.
I hear stories from people interviewing now--the interviewers react negatively if you tell them you're working on keeping your programming skills fresh. They just want to know how many agents you can run at a time and how many lines of code you can generate per day.
Personally, I think someone skilled in software development working with genAI is going to be more productive than someone not skilled working with genAI, but I don't think that's even being selected for now.
Grim days.
The one thing that gives me hope is that every time we ask our graduates who are now in the field (and all work with AI) if we should drop classic CS education and only do AI, they all emphatically reply in the negative. Yes, we need some AI education in there, but they want the foundation, too.
Refactoring improves code organization. It makes the code more maintainable and, arguably, more reusable. And, from an academic POV, it makes code conceptually more satisfying by aligning it more clearly and conspicuously with the model of a domain. Good stuff.
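As a small sketch of that "aligning code with the domain model" point (hypothetical names and a made-up tax-rate example, not from the thread; Python chosen just for illustration):

```python
# Before: one opaque expression; the tax rate is a magic number and the
# subtotal logic can't be reused or tested on its own.
def checkout_total_before(prices, quantities):
    return sum(p * q for p, q in zip(prices, quantities)) * 1.08

# After: the domain concepts (subtotal, tax rate) are named, testable,
# and reusable; the behavior is unchanged.
TAX_RATE = 0.08

def subtotal(prices, quantities):
    """Sum of price * quantity over the order's line items."""
    return sum(p * q for p, q in zip(prices, quantities))

def checkout_total(prices, quantities, tax_rate=TAX_RATE):
    """Order total including tax; the rate is explicit, not a magic number."""
    return subtotal(prices, quantities) * (1 + tax_rate)
```

Nothing about the output changes; what changes is that the next reader (or the next feature) gets named concepts instead of anonymous arithmetic.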
Great. Now, in industry, what matters is the result. Nobody cares if the result was produced by a witch casting magic spells or a grunt hitting a rock with another rock. Industry is practical. It cares about "craft" as far as it enables commercial success (and yes, short-term thinking can be bad, but guess what: you need to eat in the short-term!). Maintainability is a nice thing to have, because it does allow us to more quickly develop code. But how maintainable something needs to be, especially in relation to other competing concerns, has no fixed answer. It really depends on the situation.
Practical wisdom, known as prudence in the classical literature, is the foundation of all moral behavior. The right decision, the right concern, really does depend on the circumstances. You cannot derive from principles, from the armchair, what the right course of action is for everything. The general principles may be immutable and absolute and fixed, but the way in which they are applied in particular circumstances will vary.
Academia can insulate people from certain kinds of practical concerns, which is supposed to aid theoretical work, but this demands that the academic recognize his limits. He is not in a position to pass judgement on prudential matters, which is to say matters that are not strictly matters of principle, if he is not prepared to engage competently with the concrete reality of the situation.
I also think I get doubly upset from advice like this because it’s given and marketed to impressionable young students. Even agreeing with all the moral points he’s made, I truly think this advice would set up a new grad for failure and have them focusing on the wrong skills for this market.
The bit about ignoring trends feels too head in the sand for my liking :/
Will LLMs in their current ergonomics have staying power? Perhaps. Nobody can predict the future. But I don’t think it’s a given in the least
I recognize not everyone's work is [as] important, but we should still strive for excellence (and safety.)
Programming is a practical skill, and its most common expression is industrial or commercial, not academic proofs of concept. The post addresses students who will enter industry; that's the focus of the professor's own post.
And I sympathize with many points being made here. However, the point of refactoring code is somewhat odd and detached from the real life constraints of programming in the wild.
Like, sure, in the ivory tower, you can confine yourself to nicely bounded problems and tidy little toy POCs. You can survive doing those things, because the selective pressures allow for it. I love those things, personally. They help me understand the nature of the thing. And in an academic setting, you can refine and refactor the hell out of those things to your heart's content (not that there is necessarily an objective end point to refactoring; code organization is subject to goals and constraints which can shift around).
But the reality of software in a commercial setting is not the tidy one you can expect in an academic setting. It's messy, subject to commercial pressures, to a hierarchy of values that doesn't place "refactoring" at the top of the list. And why would it? Whether you should refactor something is not just a question of whether it suits your conceptual tastes or even whether it is more maintainable. Unlike algorithms and principles and even techniques, software is not eternal. It is ephemeral. Its shelf life is bounded. It is a piece of a larger business process. You're not refining some theory or some grasp of a Platonic ideal. You're mostly just putting into place plumbing to get something done. Whether you should refactor something, and when you should refactor it, is a matter of prudential judgement, which is to say, of practical reason.
So, in light of that, these are actually quite absurd things to say given the difference between the privilege of academia and the gritty reality of industrial and commercial software development. If we were to force our professor into the world of industry, he would quickly lose his job, or he would quickly learn that some of his strange idealism is silly and detached from the reality that his students will face.
Code that's easier to understand is easier to maintain, add features to, fix bugs in, onboard new engineers onto, etc.
Code that's well written executes faster (saving computational costs), scales better, is more robust with higher uptime, uses less bandwidth, and so on.
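A minimal sketch of that invisible-cost argument (hypothetical functions, illustrative only): the same filter written carelessly versus with the right data structure.

```python
# Careless: membership test against a list is O(n), so filtering m
# events against n ids costs O(n * m).
def active_events_slow(events, active_ids):
    return [e for e in events if e in active_ids]

# Well written: build a set once; each lookup is O(1) on average, so
# the whole filter is O(n + m). Same result, a fraction of the compute.
def active_events_fast(events, active_ids):
    active = set(active_ids)
    return [e for e in events if e in active]
```

At small scale nobody notices; at fleet scale the difference between these two is exactly the kind of invisible cost the comment is describing.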
The thing is the business people will never understand this. Why would they? They're not programmers. They're not in the weeds. But that's what your job is as an engineer. To find all these invisible costs.
I'm pretty confident the industry is unnecessarily spending billions. Hell, I'm sure Google alone is wasting over $100m/yr due to this.
Don't be penny wise and pound foolish. You're smarter than that. I know everyone here is smarter than that. So don't fall for the trap
But disagree that this is a path to unemployment. At work we go very fast and yet I think fast is compatible with each of those points, just not in all situations.
Marc Brooker, distinguished eng at AWS, gives much more useful advice for industry, as you'd expect given his almost 30 years in industry.
https://brooker.co.za/blog/2026/03/25/ic-junior.html
I think it is a great shame that we live in a modern world where we do what we must to survive regardless of how it makes us feel. I suspect it is the root of much suffering.
So maybe there’s something wrong with how we organise work?
With that said, I discovered that I’m an academic at heart after nine years in industry, though I left right before agentic coding took off. I got tired of “moving fast and breaking things,” of prioritizing shipping things and “the bottom line” over everything else.
With that said, agentic coding, in my opinion, only amplifies a long-standing trend: shipping matters more than craftsmanship. Even without LLMs, software engineering has long had a "git 'er done!" attitude. To be fair, market effects matter greatly in software businesses. Quality matters only insofar as it keeps software from being completely unusable, and many software companies succeed without building carefully-crafted software. Even Apple, which has a reputation for being perfectionistic, doesn't make perfect software.
Academia has its own problems (publish-or-perish, low pay compared to other occupations that require heavy investments in education, politics, etc.), but it seems to allow more breathing room for computer scientists to focus on the craft of programming without as much pressure to ship (publish-or-perish aside).
I hope this is a pun on the content management system used to publish OP. It's forester[0], written in OCaml; it parses TeX-like .tree files into semantic XML, which the browser renders to HTML via XSLT.
View source on the page to get an idea.
Reminder of what the idealised web promise from decades ago was. Long gone. Very apt.
[0] https://www.forester-notes.org/index/index.xml
I generally agree with what he stated. We should clearly define our moral and technical redlines. Lines we will never cross because they will be tested every day.
There is indeed something useful about trying to write elegant code. Not because others read it. But because that's how you learn about the engineering tradeoffs and abstraction that exist everywhere.
> * Cultivate your ability to think deeply. Do whatever it takes to carve out distraction-free bubbles for yourself in both space and time. This might mean saying no to technologies or patterns of working that others say are critical or inevitable.
An entry level engineer is going to be inundated with a lot of technology they've never heard of and a lot of power structures and group dynamics that are new to them. They're not even in a position to be making these judgements until they actually learn about how professional software development actually works.
> * Be intentional about deciding your own moral and ethical boundaries up front. Don't settle for the lie of compromising your principles "just for now" until you can find something better.
That's great, but also, there are not many entry level roles where someone is going to be in a position to be making these kinds of decisions, other than avoiding a company altogether.
> * Care deeply about your craft. Refactor code until it is clear and elegant. Write good documentation for other humans to read. Have the courage to go slowly, especially when everyone else is telling you that you need to go fast and cut corners.
Yikes. A software engineering job is not a PhD program. If you are refactoring your code and someone is telling you to hurry up, you should probably wrap it up. You need to ship your code or you won't have a job.
If programming is all about making the most money then by all means disregard everything he says.
>I do not and will not use the internet, in any form, for any purpose.
And you can understand the principles governing something without knowing all the concrete particulars of an instantiation. In fact, you rarely do.
“A good way to describe myself is as a generative AI vegetarian. You can find a fuller explanation—and many, many links—at the above essay by Sean Boots, which I agree with almost 100%.”
—-
Given the capabilities of upcoming LLMs, I suspect that by mid-2027, most competent companies, outside specific niches, will not hire and might fire any non-senior “generative AI vegetarian” software developer.
edit: I see, it's new slang:
https://news.ycombinator.com/item?id=47928885
Look at how people use LLMs these days. People frequently use it on new codebases to get up to speed on the code. Frankly because it's a lot faster than grepping, profiling, and all the digging we'd normally do (though those still have benefits and you're still going to do them. Hell, the LLMs even do them). But how much of that could have been avoided had people just taken a few seconds to document their code? No one is saying sit down and document the whole thing but "add a few comments when you add new functions" or "update comments in places you touch". If it costs you more than a minute of your time you're probably doing it wrong.
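A sketch of the kind of minute-or-less documentation being described (hypothetical function and scheduler, assumptions mine): a couple of sentences on the *why*, attached where the code lives.

```python
def clamp_load(load, max_load):
    """Cap a shard's reported load at max_load.

    Why: the scheduler treats anything above max_load as a hot shard
    and triggers rebalancing, so we cap the metric once here instead
    of in every caller.
    """
    return min(load, max_load)
```

The function body is trivial; the docstring is the part a future reader (or an LLM summarizing the codebase) can't reconstruct from the code alone.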
I'm tired of these arguments. People are turning molehills into mountains. It's so incredibly myopic. We waste so much fucking time on things because we're trying to move fast. But no one seems to understand the difference between speed and velocity. It never mattered how fast you go, it has always been about velocity. Going fast in the wrong direction is harming you, not helping. If you don't have the time to know if you're headed in the right direction or not then you're probably not.
But your gripe is with cutting corners. Not documenting? That's cutting corners. Not refactoring? That's cutting corners. Not spending time understanding the code at multiple scopes? That's cutting corners. Those are all corners cut that end up wasting tons of man hours. Sure, they save you a few precious seconds or minutes now, but at the cost of hours or days in the future.
Here's the thing: if you don't take those shortcuts, then none of those tasks are hard. Even refactoring. But as soon as you start taking those shortcuts they start compounding. Then a year down the line your company is writing a blog post about how your code is 500x faster now that it's written in Rust (or whatever the cool kids use). If it's 500x faster, that's not because of a language change, it's because of tech debt. And like all debt it accumulates little by little, and it's the compounding interest that really kills you.
Sorry, I'm tired of cleaning up everybody's messes. Go ahead, move fast and break things. It's a great way to learn (I do it too!), but don't make others clean up your mess.
Stop buying into this bullshit of needing to move so fast. It's the same anti-pattern scammers use to get you to make poor decisions. Stop scamming yourselves
thinking about it a little more, i would personally prefer to use the term momentum rather than velocity or just plain speed -- we accrue more mass by adding code, features, etc. and shifting direction/increasing speed are both harder with greater mass.
Despite the common rhetoric you see in HN comments about how MBA programs only teach graduates how to cut costs by enshittifying, I actually found it a great education that made me a better engineer.
Anyway,
The best profs were the ones who'd worked in industry. One guy who taught finance worked on Wall Street and was fond of distinguishing between how the textbook taught a particular technique or fact, and how practitioners actually do it in real life. Got taught startup valuation by a guy who'd been a VC, competitive strategy by a guy who was a strategy consultant for companies you'd actually heard of, etc.
The worst profs were the ones like the guy who taught operations. He'd never worked a real job. Went straight from being a student to being a TA to a postdoc to a "research prof", whatever that means. All his examples and case studies were useless or overly simplistic to the point of being useless.
The fact that TFAuthor is concerned with polishing one's craft shows they're completely divorced from what actually happens outside the ivory tower. Typing code into a buffer has never been the hard part.
I think there is credence to his points.
Sadly, a childhood friend who teaches C/C++ at a community college where I grew up (and took said courses - not his) before college - would be a great sounding board on this.
And to the poster's qualm about deeper knowledge, AI does not know nuance. It's great for a lot of things...nuance is not one of them.
my uk mechanical engineering bachelors degree had a required module on the ethics of engineering which has always stuck in the back of my mind. i think we went over the bhopal disaster as a case study one week, although it was about 16 years ago now so i can't be sure.
i've rarely seen any ethics modules in computer science departments, at least here in the uk. and i think we sorely need them in general.
edit -- so i guess it's a UK thing xD though i am glad to hear that you folks in the US enjoyed your ethics modules too
'We should teach our Students what Industry doesn’t want', Kevin Ryan, https://dl.acm.org/doi/pdf/10.1145/3377814.3381719
'Are you sure your software will not kill anyone?', Nancy Leveson, https://dspace.mit.edu/handle/1721.1/136281.2
Edit: they do seem to have one now, so either I remembered wrong or they added it.
Edit 2: I remember enjoying my ethics class, we covered some of the usual examples, and also things like basic contract negotiations. But I think I still didn't register that these concerns were real at that time. It was easy to believe that I wouldn't be working on anything that impactful. This did change once I started work.
The case study i mentioned (it may not have been bhopal, but it was definitely based on something that happened in india) stands out for me because it really hit home about the impact and seriousness of some decisions we could end up making.
There was another time I remember the lecturer making a point of saying there was no single correct answer about something, which caused a lengthy discussion. We would have to figure out what's right/wrong for ourselves going forward. That really stuck with me.
But when I started working and found myself doing equally cutting edge research, but genuinely for the public benefit, I realized I definitely wouldn't be comfortable with putting aside my morals like that. Maybe I didn't realize this was an option back then.
I don’t think scientists usually have mandatory ethics classes and mathematicians certainly don’t, so if it falls under either of those departments it might’ve gotten skipped!
at the very least i have a wikipedia article on therac 25 to read through now. so thanks for that!
also, yea i remember really enjoying the ethics module too. lots of discussion and not always a clear answer. was very different to the rest of the "one correct maths answer" in a lot of the other modules.
In a perfect world I think the software industry would have instilled these same virtues- software is just as (or more) capable of causing harm as poor healthcare. Yet we seem to be racing to a dystopian future at record speed courtesy of the tech industry, and our modern egalitarian societies will not survive that transition.
The only time ethics in engineering was ever mentioned to me was in a class on applied number theory (cryptography), taught by a professor who had previously worked for the EFF. He went off-topic to tell us that many problems, like how to hit a target with a missile, may fascinate and compel us as engineers, but we shouldn't let that distract us into building instruments of death.
That course was an elective, and it was entirely possible to complete my degree without hearing a single mention of ethics.
There are many reasons I look back on my academic experience with disdain, but this one stands out to me.
Pretty good experience, too! Sometimes got distracted with general tech ethics rather than strictly professional ethics, but tbf that’s a very fun+timely topic
"A good way to describe myself is as a generative AI vegetarian. You can find a fuller explanation—and many, many links—at the above essay by Sean Boots, which I agree with almost 100%."
I've been tracking models trained entirely on out-of-copyright data, for example. I've not yet seen one of those which appears generally useful and didn't chuck in a scrape of the web or get fine-tuned on examples generated by a non-vegetarian model.
Andrej Karpathy can train a GPT-2 class model for less than $80 now, so at least the environmental cost of training may drop to a point that it's acceptable to LLM vegetarians: https://twitter.com/karpathy/status/2017703360393318587
Why do I care? This post is a great example. If you're a professor of computer science I really want you to be able to tinker with this fascinating class of models without violating your principles.
UPDATE: Huh, speaking of potentially vegetarian models, I just saw https://talkie-lm.com/introducing-talkie on the HN homepage https://news.ycombinator.com/item?id=47927903
I've explored a different out-of-copyright-trained model, Mr Chatterbox, before, but found it to have been mildly corrupted through the inclusion of synthetic conversation pairs from Haiku and GPT-4o-mini - https://simonwillison.net/2026/Mar/30/mr-chatterbox/
Talkie isn't entirely pure either though: "Finally, we did another round of supervised fine-tuning, this time on rejection-sampled multi-turn synthetic chats between Claude Opus 4.6 and talkie, to smooth out persistent rough edges in its conversational abilities."
I don't need computer science professors to like LLMs, but I still want them to be able to poke at them with a stick without feeling like they are violating their principles regarding energy usage and unlicensed training data.
I suspect that even if you reduced the cost of training or any other real world metric, the goalposts would immediately move. It seems to me that it has never been about those things, but simply about the feeling of superiority one can attain by eschewing something seen as trending.
* real programmers manage memory, it's a craft
* real programmers don't drag and drop
* real programmers don't use intellisense
* real programmers don't need stack overflow
* real programmers don't tab-complete
* real programmers don't need copilot
* real programmers don't use llms <- you are here
This kind of hyperbole repeated ad infinitum by haters online is not-constructive, IMO. I would be quite certain that the manufacture of whatever computing device the author is accessing the internet on used far more resources and exploited far more human labor than training an ML model ever did.
How constructive are ad hominem arguments?
The first general purpose, programmable computer was designed in 1945 to calculate artillery firing tables for the US Army and was immediately used to help design nuclear weapons. Computers and all technology has always been, and will always be, used as a weapon (either directly or indirectly).
I find that when I get back into exercise and reading so much more of my life falls into place. These are things that I never have enough time for until I start doing them regularly at which point I realize that they actually enable me to have more time to do things, not less.
http://ozark.hendrix.edu/~yorgey/forest/009L/index.xml
* Monoids: Theme and variations (functional pearl): http://ozark.hendrix.edu/~yorgey/pub/monoid-pearl.pdf
Currently struggling hard to achieve this. We all know everything fights for our attention nowadays, but I can assure you that you don't have an idea of the degree this happens until you actively try to fight it.
Especially relevant for students I think, since they are hurting themselves most by relying on LLMs. Just like how young children are forced to do math by hand instead of using calculators to build intuition and memory, students should aim to do things manually to build their skills.
Go make that toy website, game, OS, emulator or programming language. Read specifications and try implementing them yourself. You aren't in an environment that requires you to churn out features, you can explore!
But the real world with money blended in creates a weird, corrupt mix, just like everything. Not to mention there is a real risk for people who already have their feet in the industry but are not yet senior enough to survive or to control, for example, the AI replacements. And more than likely, the seniority required is way higher than one would think. In the end, economic drives are the dominant forces.
It's important to distinguish between the practical and the theoretical. The flippant answers of "idealists" refuse to engage with the messy domain of facts, because it is aesthetically offensive or challenges their comfort or their nostalgia. The steam engine wasn't inevitable either, but people did choose it. How many today in this forum grumble about the loss of a world when the steam engine replaced old ways of working? The next generation won't have these sorts of hangups, just as we don't have them about steam engines. Or, if you like, how many pine for the days of assembly programming?
When something proves to be too useful industrially to opt out of, then it will be adopted. People will choose it. If you want to be Amish, go for it, but most people don't.
It was important to say, but I very much doubt there was any courage involved.
He put his name and career on it. That takes courage in my opinion.
Build your own job-portable software libraries. Yes, you might need a lawyer.
Start now.
Not everything is about making money anyways.
In the case of present-day LLMs, the vast majority of the public finds them to be more harmful than beneficial.
Why accept a decreasing quality of life instead of sensible regulation?
Examples of ridiculous and incorrect beliefs once held by majorities:
- Spontaneous generation
- "Miasma" causes disease
- Earth is at the centre of the universe
- The heart is the seat of thought and the brain is useless
- Cold weather causes colds
Don't trust "the vast majority" to get anything right, ever.
This suggests to me the underlying concern is "but I won't get paid for my craft!".
Hell hath no fury like a vested interest masquerading as a moral principle?
Why not encourage your students to be curious about emerging technology, and to engage with society as an informed citizen?
This reeks of political activism, and it’s reminiscent of the general BlueSky-esque tone of the Correspondents Dinner shooter’s manifesto.
lol.
We millennials are in a position to start giving advice the way boomers used to do with us, now that school is looking more like a couple decades ago instead of just one.
But, unlike those boomers, we don't watch the nightly news: we snort it from a tiny screen all day long from sources hyper engineered to feed off our anxiety.
So we give all this super pessimistic advice.
"Back in my day, I got a job at google right after college and it was awesome! My code was elegant! You guys are FUCKED!"
I agree that AI is creating mega changes, many very bad, but that doesn't mean that it's a good idea or even true to tell GenZ people they're fucked. We don't know if they're fucked.
I think they could have a ton of fun with software and I think it's OK to be encouraging about that.
From an information theory perspective, LLMs are just regurgitating content from a lossily compressed training set.
It just turns out that like 95% of software we write is extremely repetitive rehashed shit globbed together. We just haven't found ways to abstract a lot of the redundant code well enough yet so here we are, stuck with the stupid robot.
That remaining 5% is stuff that's genuinely never been done before. If you ask an LLM to come up with a fully new sorting algorithm it's going to give you worthless garbage; maybe it'll get lucky if you burn a nuclear power plant's worth of tokens in an infinite-keyboard-monkeys way.
All this is to say, if we want the field to actually progress we still need somebody with some knowledge about how a computer actually works.
The author is getting some grief in this thread from the Eng side, but I’d like to add a bit of grief from the direct opposite side: the philosophical one. It will never not baffle me to see academics assume they are the first people to ever think about topics like ‘what if technology was used for ill’!
I don't think he believes he is the first or only one to think this. He is just safe enough or at least hopes he is to speak out against the ills of technology. Do you know how many engineers cannot speak up right now for fear of losing their jobs? Lots.
I've been struggling to figure out what "slower" would look like when working in industry. If everyone's working 2x faster, how do you slow down meaningfully without getting axed?
As I got older and more experienced, I didn't produce code faster. I just produced the right code. If you don't have to try five different things, and debug them along the way, you can be a lot faster without "going fast".
I've even seen a guy spend most of his work hours as a mentor even though his title was something like senior engineer. If anyone fired him that company would tank so fast...
After getting my CS degree I deliberately went into a sector where I suspected this kind of attitude doesn't exist (defense in my case) because already then I felt the whole web/startup culture had very little to do with software engineering.
Just get it to work reliably the cheapest and quickest way possible. This ‘craft’ stuff is just too much.
And while I don't have a problem with career instructors/academics generally, they can be so dramatic. :)
I have no doom and gloom at all for my IT students. Opportunities and crises really are the same thing in the real world; I just tell them, just learn and enjoy learning the tech and keep an eye out for how you can be a problem solver.
You'll be fine.
We need to discontinue the H-1B visa and have Americans programming again. Americans who are empowered to push back when management crosses an ethical line.
It’ll be interesting to see