
Discussion (73 Comments)
I feel like this is a very profound insight.
Of course processes like this can become about the immediate utility. Reviewing then becomes checking work so it can be merged and used.
But the process is more about us than the code. And we lose the deeper part when we only care about the superficial one.
AI is all about removing every possible bit of friction, severely underestimating the value that friction brings.
That was pretty widespread during 2005-2015, but it's been dropping extremely quickly now.
Developers are generally seen as replaceable cogs. Middle management loves to talk about "scaling" - by which they don't mean scaling as devs understand it, but multiplying headcount - because surely throwing n times the devs at the same software will multiply the velocity by the same factor, amiright?
The biggest value you can get is by having a very small team of extremely capable people (even at the cost of a very low bus factor) being fully in control of everything they do.
Realistically speaking, that'd be impossible to "scale" from the perspective of an MBA, however, hence the industry at large doesn't do that.
You may notice that some employers do, however.
You're just unlikely to get a job there, because their team is already established.
Honest to god, were the programming job market like 5% better than it is (so, y'know, years away) I would already have quit. I've been applying places, but it's a slaughterhouse out there. I got ghosted after a fourth-round interview at a non-tech company over the winter.
Shit sucks.
I'm immensely jealous of the author; I have savings as a safety net, but not enough to take a year off work. But this next year of my role is guaranteed to be hell, and the last year of applying for jobs has not been better.
There were fakers before, and there will be fakers after.
I agree there is some value in AI tools, but implementation details do matter. People shouldn't be pushing unread code to prod. That's how you end up with security holes and other bugs. That's how you end up dropping millions of orders on Amazon.com.
And major corporations certainly don’t seem to care that much about leaving massive amounts of money on the table from jr level tech issues. I see it all the time. I mentioned a few from Walmart, Meta, and Amazon recently.
Everyone talks like these things matter, but the results say everyone is just playing pretend.
Experimenting with speculative uses is fine, technological breakthroughs require lot of iterations and some would naturally never make it but with the enormous amounts of capex that companies are investing, these have to impact the top line and eventually the bottom line as well. I just don’t see that happening now, I could be wrong.
1. To me, speculative uses of AI like meeting-notes summarisers seem to add little value, if any. First off, most meetings are performative work, especially at big companies. Add to this, when someone casually pastes the meeting notes from an AI summary and asks the meeting organiser to "pls check for correctness", my blood just boils. Are we spending billions of dollars of capex for this?
2. Every team builds their own “agent” for diagnosing incidents which is announced to huge fanfare but people rarely end up using it irl.
3. Devs and PMs chasing "volume" of work. You prompt GPT about an issue and it is bound to give you pages of text that you can use to show how much output you churn out. I have seen excessively verbose design docs that only the writer (and prompter) could understand, and all this was accepted because "Hey, I used AI for this, so it must be good".
There are legit uses of AI, and I do have a $20 Claude subscription which I like and use, but at big companies they are shoving AI into every nook and cranny hoping it shows up in the top line and bottom line, and so far it doesn't add up.
A lot of these uses are driven by fear - by repeated exhortations from upper management about shoving AI into every nook and cranny when they are just as clueless as we are. People's mortgages, their children's education and their retirement - in short, their whole livelihoods - are at stake, even more so when companies will happily lay off workers without a second thought. So people have to use AI even when it adds questionable value, if at all.
I am not resistant to change and am not an AI Luddite. I am happy to use AI to become a better developer but most current use cases seem to add questionable value.
“Would initiating these discussions result in interpersonal stress? Should I just let things slide? Would I become known as a “difficult” coworker for pushing back on AI use? Does any of it really matter? Does anyone really care?”
That’s ok! I was fascinated by coding when many others weren’t and found a great career as a result. A different cohort will love Development 2.0.
I had a "good" job - extremely stable, in the public sector, and the work hypothetically mattered... I was miserable because it didn't matter. If I had died in my study, the system would have happily churned on accomplishing nothing without me. There were so many obstacles to accomplishing anything, too. I'm all about "perfect shouldn't be the enemy of good" - but hypothetically we should do something. I went on vacation in November, and when I got back, the latest ServiceNow update had nuked a bunch of the changes I had spent months trying to get done.
I quit at the start of the year and honestly, it's been great? Not fast, not suddenly lucrative, but I've been taking it slow. I'm literally building little vibe-engineered tools for local companies. I can now do what would have taken me a team to do by myself, it is paying (albeit slowly), it's fun, and I have time to do the things I care about in this life.
Don't work for the man. Your job cannot love you back, in fact, it actively hates you.
I want to zoom in on the rise of AI notetakers. AI that generates transcripts alongside recorded video that you can watch later? Amazing. I can catch up later and find people async if I need more info; the videos are discoverable/shareable, and anyone who needs to be in the know can be. AI notetakers that give you a summary and nothing else? Useless. These generate generic overviews and tend to miss small but key details.
I'd rather (and often do) take notes manually than turn on the notetaker.
In the long run, strong senior specialists — in design, development, and other IT fields — will likely be more valuable than ever. Meanwhile, those who rely entirely on AI without developing fundamentals may never reach that level.
AI isn’t really capable of creating truly complex solutions or top-tier UI/UX — it mostly recombines existing ideas.
So it’s probably better to focus on your craft and avoid burnout — that’s what will matter.
But there's something psychologically powerful happening in the interaction with AI. I think we overestimate our ability to be rational and underestimate how easily influenced we are.
20 years of coding experience. Gone through the sweaty junior years, senior, founding engineer, CTO (and back to software engineering again because it's my preference) - and now I can't even get an interview with a human.
Due to unfortunate life events my savings are now all but gone, and I don't even know if I will be able to keep a roof over our heads. It's messed up.
If anyone is hiring, send me a message. I'm an EU citizen but have residency in and work out of Mexico.
There's also the matter of going back to school, and the associated debt I'd have to take. I'd never be able to pay the loans off if I did that.
If you are gunning for a remote job, that's not happening anymore except for the top 5% of candidates.
If you are gunning for a job outside of a Tier 1 tech hub like the Bay, NYC, London, TLV, Beijing, Shanghai, Hangzhou, Singapore, BLR, HYD, etc you will have a hard time.
If you are not up-to-date with modern stacks and the capacities as well as limitations of AI/ML enhanced workflows, you will have a hard time.
Edit: can't reply
> Paul-Craft
Based on your profile below, I am surprised you aren't finding anything in the Bay. It's a hot market right now. Maybe get your resume reviewed?
> Most of the job openings for humans are remote and not in big tech
Absolutely agree about the "not in big tech" part, but remote being the majority of tech hiring is absolutely false in 2026.
> My "default" resume is by ChatGPT; it's essentially my human-written resume, jazzed up a bit for ATS-friendliness
Go back to using a human written resume. An LLM generated resume is obvious and a negative signal (you could be a bot)
Also, make sure your resume is 1 page.
I'm tailoring my resume to individual postings a good portion of the time. My "default" resume is by ChatGPT; it's essentially my human-written resume, jazzed up a bit for ATS-friendliness. There are no hallucinations in it, and I feel it accurately represents my experience.
It happens to many; it's happened to me three times so far. The mods rate-limit (only X comments per Y time period) people who have been flagged, judged, and found to be a bit prone to getting into rapid back-and-forth exchanges that have crossed guidelines.
It can generally be reversed on request via the HN email. Sometimes it's a blessing; sometimes it's not even something that impacts a user very often, unless they find themselves in an interesting exchange.
It's important to understand the world beyond your bubble. If those jobs seem unrealistic as an option, you may need to consider if your cost of living is unrealistic.
> You join a meeting with a coworker. Your coworker has enabled an AI tool to automatically take notes and summarize the meeting. They do not ask for consent to turn it on. The tool mischaracterizes what you discuss.
Asking for consent to what is more or less meeting transcription (already enabled, presumably) seems a little odd. If you don't like it, why not just talk to the coworker and ask them not to use it? Offer to take notes yourself, perhaps.
> A team lead adds an AI chatbot to a Slack channel. Anyone can tag the bot to answer questions about the company’s products. Coworkers tag the chatbot many times a day. You never see someone check that the bot’s responses are correct.
Why would that happen in the Slack channel? Presumably you'd be googling it or reading documentation to do this, not posting in the channel.
> An engineer adds 12,000 lines of code affecting your app’s authentication. They ask that it be reviewed and merged same-day. Another engineer enlists a “swarm” of AI agents to review the code. The code merges with no one having read the full set of changes.
This is an insanely reckless thing to do with or without AI. If this actually happened at your company...I think there were deeper issues than overuse of AI.
> One of your pull requests has been open for a few days. You ask other engineers to leave a code review. Minutes later, an engineer pastes a review that was generated by an AI tool. There are no additional thoughts of their own.
Again, I think you should communicate with your coworkers on this. Possibly even bring it up in 1-on-1s with your manager. Not "I want to discourage use of AI" but "copying and pasting AI responses shows a lack of respect for others' time" and "a lack of due diligence"; show a horror story of an AI deleting someone's prod database, etc. It's a useful but imperfect tool, not a replacement for thought.
I think in general, if it were cheaper to live, we would see a shift in priorities, what people focus on, etc. More art, less grift.
Genuinely good people get caught up in rat races trying to reach their ceiling while they can. If they didn't feel that pressure, maybe they'd be doing something else.
If we do consider the ethics, there's a lot of contradictions built into why someone would want to live there so badly to do the kind of work the blog post is concerned with.
Their efforts are better rewarded moving their passion into an open source project while keeping a job in tech that they don't care so much about and are qualified for. This is a normal part of growing up. Some people switch careers while others stay in it while decoupling their passions from their paycheck.
The act of software development formalizes paradigms, surfaces unknowns, and forces their resolution. Traditionally the work product gets better over time as you iterate. My own coarse rule of thumb is that on average it takes until version 3 or so - i.e. 3 rewrites - until you land at the kind of high-caliber product that stems from really understanding the problem space: having worked in it extensively enough to have a good mental model, uncovered the edge cases, and hammered out an optimal solution.
While AI is famous for fast iteration, I expect in cases where the designers wielding the tool lack a deep understanding of what's going on, potentially exacerbated by never actually having to work with the codebase, it may actually turn out to impede their ability to reach that plateau. Not saying this will be true for all use cases, just that the tool makes it seductively easy to fall into that trap.
I'd love to reinvent computing from the ground up, stripping away the many patchwork layers of complexity we've accreted over time and applying an obsession for making each individual component uncommonly robust and engineered for clarity. I feel that kind of project would be a great candidate for human-written code. I think AI tools would make a great sounding board / linter / reviewer in such a scenario, but since they were trained on existing examples and legacy patterns I'm not convinced they'd be as good as a human at the actual constructing, in terms of what I'm optimizing for.
I personally tend to favor longer lead times and slower public ship pace (but not slower betas or delay in customer feedback) in order to maintain a higher bar of quality. Even if saying so out loud risks branding me heretical by some corners of Silicon Valley!
I work at a very 'AI-pilled' company, but:
- Everyone reads and reviews every PR and leaves human comments
- Documentation is written well and tended to by humans
- There's no 'AI mandate'
- Whether features are possible is first explored by an agent, then manually traced by a human through the codebase
You can treat AI like a very powerful tool to augment you and run your agent swarms at the same time.
Long breaks help. Take your mind off of things that bothered you. Do things you enjoy. Which may include tech work, but on your own terms.
I wouldn't be surprised if you decide to not go back. The status quo of most organizations is grim. But there are still people who care about the same things as you. You can seek them out and work together, much like you did 15 years ago. This is more difficult now among the noise, but you can tune that out. The industry will never recover altogether, but this current period is a blip of high insanity, which will subside in a few years.
Good luck!
This is what tech has always been. A never (yet) ending race to automate. Our job will be done when there's nothing left to automate.
Outsourcing your thinking, especially uncritically, is. There is a very obvious cognitive bias in the most vehement AI advocates, where the one time a tool worked really well for them makes it worth the dozens of times it blows up in your face and becomes someone else's problem. The gain is romanticized and the losses set aside, without checking the balance or how badly the losses wear on morale.
Otherwise, if they decide to go into another field, they will be starting from scratch; it will pay only a small fraction, and whatever lifestyle they were used to will have to change.
Being cut off from China - a market that is advancing in the same sectors as the US - and not allowing competition to enter the West is a recipe for disaster in the future. The current government is not "focused" on growth, contrary to what's being said publicly. Where this will take the US is a place where stagnation is okay, so to make up for it there is a surge of investment in the AI craze at the moment. Feedback is required in order to grow; that goes for companies too, not just the junior-varsity wrestler at your local high school.

Taking an abundance of data and using a summarization tool so it can auto-complete a prompt was bound to happen sooner or later. Take Elasticsearch, for example: it's a search bar that, as you type, shows what the database has to offer, with either a weighted or an indexed response depending on settings, along with images and information related to the query. All that needed to happen was for something to compute this mess of abundant data and project a response from it, not just a search result. Marvelous, you might say, but it has been around for a while now; the idea was there, it just needed an actor to execute it.

The firings alone tell you the health and implications of these actions. There were promises behind these investments that this war is interrupting, or severing the deals even post-conflict.
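The "weighted response" autocomplete idea mentioned above can be sketched in a few lines. This is not the real Elasticsearch API - just a self-contained toy model, with a hypothetical `index` mapping phrases to popularity weights, showing how a search-as-you-type box might rank prefix matches by weight:

```python
def autocomplete(index, prefix, limit=3):
    """Return up to `limit` indexed phrases starting with `prefix`,
    highest weight first (a toy stand-in for a weighted suggester)."""
    matches = [(phrase, weight) for phrase, weight in index.items()
               if phrase.startswith(prefix.lower())]
    # Sort by descending weight so more popular completions come first.
    matches.sort(key=lambda pair: -pair[1])
    return [phrase for phrase, _ in matches[:limit]]

# Hypothetical index: phrase -> popularity weight.
index = {
    "porsche 911 top speed": 9.2,
    "porsche 911 price": 7.5,
    "porsche cayenne review": 4.1,
    "portland weather": 6.0,
}

print(autocomplete(index, "porsche"))
# ['porsche 911 top speed', 'porsche 911 price', 'porsche cayenne review']
```

A real deployment would use an inverted index or a completion suggester rather than a linear scan, but the ranking idea is the same.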
The DotCom bubble was a push on society to use the web and to digitize parts of our lives, and the few companies that survived the DotCom era are what's driving the push to the next era of tech. The AI idea seems to have been born without a guardian or an owner, leaving the courage to act on it open to any takers. The overwhelming spillover of data had to go somewhere; useless queries like "how fast does a 2001 Porsche 911 go?" had become tiresome to search for.
The education system has already fallen apart in the US, and this only makes things worse. Where is education heading with the adoption of AI all around us? How will you argue with your children? How will you learn new things? I don't think I'm the only one thinking this. The solution? Well, I'm not sure there is one. Companies want to see results from their spending, and they will not stop until that is evident.
Optimism is clearer without fog.
What does Trump have to do with AI?
People caught up in this line of beliefs generally tend to be more neurotic and unhappy about most things.