Discussion (127 Comments)
In any case, two things can be simultaneously true:
1. Writing code is not the bottleneck, as in we can develop features faster than they can be deployed.
2. It's annoying and disruptive to be interrupted when doing work that requires deep focus.
[1] https://en.wikipedia.org/wiki/Group_attribution_error
First, it becomes possible for people who have a double standard to hide behind this. One can try to track an individual's stance, but a lot of internet etiquette seems to be based on the idea of not looking up a person's history to see if they are being contradictory. (And while being hypocritical doesn't necessarily invalidate an argument, it can help indicate when someone is arguing in bad faith and it's a waste of time, since they will simply use whichever axioms reach the conclusion they favor at the moment.)
Second, I think there is the ability to call out a group as being hypocritical, even when there are two sub-groups. If one group generally supports A and another generally supports B (and assuming that A + B together is hypocritical), but they stop supporting their position when it would bring them into conflict, that change in behavior indicates a level of acceptance. Any single individual is too hard to measure this way (maybe they are tired today, or distracted, or didn't even see it), but as a group, we can still measure the overall direction.
So if a website ends up being very vocally in support of two contradictory positions, I think there is still a valid argument to be made about contradicting opinions, and the goomba fallacy is itself a fallacy.
Edit: Removed example, might be too distracting to bring up an otherwise off topic issue as an example.
Steering an LLM also requires deep focus. Unless you want to end up on Accidentally Quadratic or have a CVE named after your project.
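(A minimal sketch of the kind of accidentally-quadratic code that is easy to wave through without that focus. This is my own illustration, not something from the comment; the function names are made up for the example.)

    # Hypothetical sketch: checking membership in a list inside a loop makes
    # deduplication O(n^2), because each `in` test scans the whole list.
    def dedupe_slow(items):
        seen = []
        result = []
        for item in items:
            if item not in seen:      # O(n) scan per item -> O(n^2) overall
                seen.append(item)
                result.append(item)
        return result

    # Linear alternative: a set gives O(1) average-case membership checks.
    def dedupe_fast(items):
        seen = set()
        result = []
        for item in items:
            if item not in seen:
                seen.add(item)
                result.append(item)
        return result

Both versions return the same output; the difference only shows up when someone reviews (or profiles) the generated code carefully.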
Meetings that increase sync between customer and coder are few and precious.
In large organisations ceremonial meetings proliferate for the wrong reasons. People like to insert themselves in the process between customer and coder to appear relevant.
I personally am fond of meetings with customers, end-users, UX designers, and actual stakeholders.
I loathe meetings with corporate busybodies who consume bandwidth for corporate clout.
No, I don’t need another middle manager to interface themselves between me and my users.
Why am I awake at 1:00am, ruining my brain and body, trying to get this feature finished before the end of the week instead of three days later? Ah yes, so that we meet our quarterly OKR, and the next quarter's plan that the EM and PM negotiated without me or our customers isn't disrupted and doesn't need adjustment. That would invite reprimand from the director, and the extra work would be terrible for them, I understand.
I'm reminded of this recent thread in which Heroku left the devs in charge and suddenly features that the author had requested for years got implemented: https://news.ycombinator.com/item?id=47669749
I am a former dev turned PO/PM and now CEO. I can tell you many developers are not fond of those meetings you are fond of, and people like myself don't insert ourselves where we don't belong; we simply join the meeting and have the vital conversation with the customers/stakeholders whose payments make payroll possible, while the developers refused to.
My team have always commented on and appreciated that I "shielded" them from the non-technical meetings and distilled customer needs into our kanban, without them having to attend. While I agree this isn't the "best way" to do things, I have simply never seen a dev team work the way HN makes the role sound ("dev/eng and the customer is the only thing needed"). Would love for this to be the case!
Also, for those who think I'm talking down the abilities of my team: we made a company together when we left a huge company we worked for, as co-owners, and even now we use the same setup :)
Truth. I'm that person and didn't appreciate how rare I was until I became an EM and learned that most of my team would actively avoid conversations with the customer. Even though I have no way to quantify it, I'm sure it's benefitted my career.
This matches my experience perfectly, having worked at many companies: in most of them meetings were useless, but in a few places they were very useful, depending on how the company was organized and how attendance at meetings was selected.
I have seen projects that had to be abandoned without bringing in any money, despite being executed perfectly according to the specifications. The reason was that the specifications were wrong: the customers had not thought to describe some requirements, the developers could not ask about them for lack of direct communication, and the middlemen had no idea about either side, neither what the customers might require nor what the developers might need to know.
How is it hypocritical?
If, in the old world, the very important process that used up a lot of time and benefited greatly from no distractions was the actual writing of code, then interruptions for various ceremonies with limited value other than generating progress reports for some higher-ups would feel like a waste of time.
That same person, in the "new" world where writing code is very fast but understanding the business and technical requirements is the difficult part, would then prioritize those ceremonies more and be OK with distractions while their AI agents write the code for them.
It's not hypocritical to change your opinion when the facts of the situation have changed.
I’ve noticed this push to try to clothe hypocrisy in made up virtues like intellectual curiosity and mental plasticity a lot lately. All I can think is that it’s some kind of ego satisfaction play people make when their place in the world is threatened.
Old value: Producing high value software. How to do it? Focus on writing code.
New value: Producing high value software. How to do it? Focus on writing specs for code / identifying needs.
I expect there are a lot of hypocrites in the mix, scared for their job. But this isn't a fundamentally hypocritical position - agents are changing the game for how software gets produced and the things that were important as recently as a year ago might reasonably be said to be irrelevant now. Ironically, we might yet see a great software engineer who has never written a program in their entire life. The odds are slim but it is possible now.
You can't be a dick on this platform without fancy prose I guess.
There is a reason (well, many reasons) that, if I'm working on a creative project with somebody outside a company, we would never think of reaching for Scrum ceremonies or Jira.
It is more than perfectly consistent to complain about that while valuing collaboration.
I'm seeing both these beliefs right now:
• Belief A: "I am a skilled professional whose value lies in my unique ability to solve complex problems."
• Fact B: "An LLM can now solve many of these problems in seconds for pennies."
This thread is great at showing how people are rationalizing by moving the goalposts, so to speak.
The problem, rather, is this: good programmers often have quite good ideas about how these problems could be solved, but for "organizational politics" reasons they are not allowed to apply those solutions.
Thus:
Concerning (B): Because they are not allowed to apply their improvement ideas, they are the bottleneck. But being the bottleneck is not the root problem; it is a consequence of not being allowed to improve things.
Concerning (A): It is indeed often the case that if you simply let someone else do the work, the code quality decreases a lot and in subtle ways. Good programmers are very sensitive (and sometimes vocal) about that, unlike managers.
No developer was ever unhappy to communicate. But when pointless communication occupies too many long hours, interrupting the useful progress of understanding what could and should be done (by coding, yes, experimenting, getting a grasp of the beast), then yes, they become unsympathetic.
In fairness, given the context those meetings provide, it stands to reason that, given that same context, an AI can in theory do the same thing as an engineer. But those meetings still need to be had.
Who?
There are millions of software engineers around the world. It's quite likely that they have a few different opinions and point of views!
>the same kind of engineer, who throughout my career have constantly bitched and moaned about team meetings, agile ceremonies, issue trackers, backlogs, slack, emails, design reviews, and anything else that disrupted the hours of coding "flow state" they claimed as their most essential and sacred activity
Seems pretty clear to me.
It's an inherent tension that every discipline has to wrestle with. The most experienced developers are in the best position to evaluate where LLMs are, but those who are the loudest about their own abilities generally aren't in this camp. Humility tends to come with experience, and arrogance tends to come with inexperience.
My opinion since college (8 years ago) has been that the best engineers are the ones who treat everything as at least halfway a people problem, even in low-level code.
If the "goalposts" represent what people generally think LLMs are capable of, they should be moving, right?
And complex, multi-part, long term efforts like building software and software companies always have numerous obstacles. When one is cleared, you wouldn't expect there to be no more, would you?
Your tone is complaining, but I just see people working in reality.
That's life.
Life changes and us along with it.
"Who Moved My Cheese?"
Unless you sign off on a Looks Good to Me PR and go loiter by the kombucha machine. Then you have other problems.
[With that said, the specific implementations of such collaboration are often still very painful and counterproductive...]
Having "house rules" on a team that new members must agree to follow tends to flush such people out and they usually exit on their own when their shenanigans get repeatedly called out as violative. Gotta introduce the rules in the interview process and get agreement after they join. Catching them out early is the key.
We had an intervention on one hard case and he rage quit the next day. I don't know why people do that, it's a small world and people talk.
But I have also worked with some who refused to participate in collaboration, they felt their time and ideas superior to others, and there's no excuse for that.
Personally I find it hilarious that the same people at my company who can't be bothered to write down detailed requirements and are constantly fighting any effort to do research or technical documentation or pay down tech debt are now trying vibe coding and struggling to produce anything useful. Oh, you don't understand why you aren't getting the results you expected? Maybe you should try thinking more deeply about what you expect before you rush your engineers or, now, your agents.
I am genuinely curious. I understand where you are coming from, you want to maintain flow state.
How does one effectively load the funnel to support flow state?
Jira tickets? Requirements documents in some kind of ALM tool?
The focus is still the code.
The contradictions you see could mostly be variations across individuals rather than hypocrisy within individuals.
(Doubly so for vaguely defined groups, like "kind of engineer".)
It's precisely because I get swamped with all the non-coding work that agentic coding works so well. And in multiple ways.
- it lets you get back in the flow faster (unless you were used to writing out your inner thinking monologues and reasoning to get yourself back to speed when you come back from a meeting).
- it lets you move faster and take on more on your own, meaning fewer people needed on the team and less communication/syncing/non-coding overhead.
If you're objective about it, AI coding is going to be amazing for individual productivity. It's probably going to fuck us (developers) over with the reduced demand, lower bargaining power, etc. But just on technical merits it's a great productivity tool.
The models are still not better than me at coding and handholding is required, but the speedups are undeniable, and we're long past the threshold of usefulness. So far all the contrarian takes are either shallow/reflexive pushback because people don't like the consequences, or people working in niche stuff where LLMs are not that great yet. But that has been shrinking with almost every release - in my experience.
I know everyone here writes cutting edge algorithms that were never encountered in the training data, their code is hyper optimized realtime bare metal logic that's used in life or death scenarios and LLMs are useless to them - but most of the stuff I do day to day is solve problems that have been solved before, in a slightly different context. LLMs are pretty good at that.
NONE of the activities you mentioned are activities that lead to what the article talks about: a well-designed spec.
Similarly, the open source people who previously maintained a hardline programming-meritocracy stance and have now pivoted to AI and market AI are limited almost exclusively to those whose companies are working on AI products. The good ones in that space are decidedly less than 1% of all the good ones.
They are not the same people.
> It's hilarious ... their most essential and sacred activity ... suddenly, and with no hint of shame ... the nakedly hypocritical attitude ... still extraordinary
Calm the hyperventilating for two seconds, look around, and you'll immediately see examples of the same group of people who now biTch aNd mOaN about how agentic coding is killing what they love about programming.
It’s interesting to see people either gloat or get incensed at the nerds who like computers in the context of these developments.
half the time you’re going to discover the right decision / path while you’re coding.
focus time went from hammering code to figuring out how to solve the problem. PRs are now how we exchange ideas. meetings are still productivity theater.
Also, expect harsh and rude reactions when pointing to big issues that are crystal clear in the middle of the village. Not all truths are warmly welcomed, especially when looking elsewhere feels more comfortable in the immediate experience.
Take care and don’t worry too much: the journey’s short, so remember to also enjoy the good parts.
Agreed, and I also agree that most developers come to this realization with time and experience. When you have a clear understanding of business rationale, scope, inputs, and desired outputs, the data models, system design and the code fall out almost naturally. Or at least are much more obvious.
> Jevons Paradox: when something gets cheaper, you tend to use more of it, not less.
That's a butchering of Jevons paradox. What's stated is not a paradox, but a very natural effect. Obviously usage of something goes up when it gets cheaper.
What Jevons paradox actually describes is the situation where usage of a resource becomes more efficient (which means less of it is needed for a given task), but still the total usage of that resource increases.
The paradox would be: writing code becomes much more efficient (less effort per feature), yet the total effort spent on software still goes up because demand expands even faster.
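(A toy calculation to make the distinction concrete. The numbers are invented for illustration and are not from the article or the comment.)

    # Assumed numbers: before, 10 hours of developer effort per feature,
    # 100 features shipped.
    effort_per_feature_before = 10
    features_before = 100
    total_effort_before = effort_per_feature_before * features_before   # 1000 hours

    # After: tooling makes each feature 5x cheaper (2 hours per feature),
    # but demand grows so much that 800 features get shipped.
    effort_per_feature_after = 2
    features_after = 800
    total_effort_after = effort_per_feature_after * features_after      # 1600 hours

    # "Cheaper, so we use more" is just features_after > features_before.
    # The Jevons paradox is the stronger claim: total resource consumption
    # rises even though each unit needs less of it.
    assert features_after > features_before
    assert total_effort_after > total_effort_before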
I don't think the amount of software is what determines whether a company does well.
I don't think capturing quantity of context is that important either.
Now, quality of context. How well do the humans reason?
Then, attitude. How well do the humans respond to bad situations?
Then, resource management. How well does the company treat people and money?
Finally, luck. How much of the uncontrollables are in our favor?
Those are pretty good bottlenecks for a company. I doubt an agent is fixing any of those. At least any time soon.
The bottleneck for making software applications better at being used by (non-software) businesses is making sure the software does all the software things that actually benefit the business. Save time. Make humans more productive. Reduce human error. Make the business more efficient. Increase profit margins.
All of those things are a bit difficult to predict and quantify. You start with ideas of what might help the business, you maybe design, prototype, trial. Ultimately you build or enhance software applications, and try to measure how well they're making the business better.
In all of this, making sure software is addressing the right problem in the right way, and ultimately making the business better - that's a hard problem! Regardless of how fast and easy it is to make software.
But yes, the speed can really help. You can prototype and trial and improve the feedback loop.
So here we walk around the circle one more time again, voicing our anxieties, talking past each other, waiting for the next opportunity for commentary to come in half an hour.
It goes without saying that agents have little to no product sense in any discipline. If you're building a game or an app or a business, your creative input still matters heavily! And the same is true for code; if the software is your product, then absolutely the context missed by skipping the writing process will degrade your output.
That doesn't mean that writing code wasn't a bottleneck even for creating well structured software projects. Being able to try multiple approaches (which would have previously been prohibitively expensive) can in many instances provide something a room of bickering humans never would have reached.
Care to elaborate? I don't understand the difference, unless you mean code that _is_ the product, such as OSS code or code sold under license.
If you're writing OSS code or software projects expected to be used by others that may have constraints like that, then by all means the code that gets output matters itself. But even still I'd argue that the cost of writing code manually to get there is still a bottleneck.
But when you factor in today's favorite business model of "make it shitty", perhaps this matters very little.
So, the product vs everything that is needed on the way, but isn’t the core.
CI/CD tooling, template population… the things you write a use-once (or use-a-few-times) script for.
I typically end up with a library of tools to deal with repetitive, finicky tasks.
It is the same as putting an Einstein paper on a photocopier and calling the process "writing a paper".
I agree with the point of the article though: code generation does not really work, the results are bloated and often wrong, and people already had more features than they could absorb in 2020.
The solution to this mess is to have 18 year olds boycott studying computer science altogether, since the industry (and mediocre fellow "engineers") will treat them like human garbage.
Agentic tools are "burglary tools" -> Younger folks should not study CS?
I'm also skeptical that development velocity is so separate from all those other things (context, stakeholder alignment, etc.). It's much easier to get actionable feedback when you have a prototype.
The flashing red dot on the web page is very annoying. Is there some design reason for that?
edit: I meant the <svg> inside `trail-map-container`
I don't think this sentence speaks for me. This is the sort of thing I love to do.
The error in the reasoning is that while you can increase your resourcing tenfold and gain nothing in return, the inverse is not necessarily true.
I'm not sure a business is helped by documentation distilled by agents from (hopefully present) PR descriptions and comments in JIRA, or wherever this context is supposed to be reverse-engineered from.
That said, I’m also increasingly aware that puts me in a minority group. I got to see this first hand in a recent org where their codebase and product design hadn’t meaningfully evolved in nearly thirty years. NAT was a “game changer” to them - and one they refused to implement without tons of extraneous testing they would deliberately undermine, stall, and sabotage so they didn’t have to modernize their code accordingly. It was easier for the developers and stakeholders to preserve their own status quo rather than entertain alternatives, to the point of open hostility (name calling, insults, screaming, and a few threats) to anyone suggesting otherwise.
The human element has always been, and always will be the bottleneck. Stakeholders who don’t contribute updated or accurate datasets to automation systems, or who hold back development to preserve personal status and power, or who otherwise gum up the works on purpose to game their own careers.
That’s not to make the argument of “replace all humans with machines”, mind you. Just stating that an organization that incentivizes bad behavior will be slowed down versus ones that incentivize collaborative outcomes, and AI is just going to turbocharge that by removing the friction associated with code creation and shifting that elsewhere.
Never experienced this at a job in 30+ years, and that includes my first jobs in fast food. If you experience this at work, find another job. This isn't normal. It's extremely dysfunctional in fact.
Thing is, this job market is hell. There are folks who have to choose between the abuse or making rent, which is why we need stronger incentives for organizations to discipline said abuse rather than let it permeate because existing penalties lack teeth.
People are part of a team focused on a goal; they work together because they believe the ship is worth riding on and will reach its destination.
The ship should carry food people want.
The team decides what food will be consumed.
The captain tries the food first.
If the food is good and people want it, people buy more.
Probably true, but I, for one, have always liked documenting how the code I've written should be used, whether programmers calling APIs I've created, or end-users actually making use of a program's executable. I find writing the docs just as interesting and creative as writing code.
In the old days, when writing code took up a lot of resources, the constraint was somewhat self-correcting: being off in your implementation was obvious enough that the error could be clearly seen after three months of work on the wrong feature. Today, you could spend five wrong efforts in the same amount of time it used to take to implement one wrong effort.
> Not just “this module exists,” but “this module is weird because the migration had to preserve old behavior,” or “this benchmark matters because a previous optimization silently changed the distribution.”
The thesis here is that an LLM will document code better than a human (although based on human artifacts), since churning through huge quantities of text is what they are good at.
A few thoughts:
1) Yes, an LLM may be able to pull comments out of commits and PR comments and put them back in the code where they belong, but I question how often a developer too lazy to put a vital comment in the code would put it in a commit message instead!
2) "The truth is in the code" has always been true, and will always remain true. If the comments differ from the code, the code defines the truth. Pulling comments from stale external documentation and putting them in the code does more harm than good.
3) Comments that can be auto-generated from the code don't add much value (lda #1; load 1 into the accumulator).
4) Comments about the purpose or motivation of the code, distinct from 3), such as the "we had to preserve backwards compatibility" example, or "this code does this non-obvious tricky thing because ...", are where the value is, but the LLM is highly unlikely to be able to discern any unwritten motivation by itself. If the human developer left a comment somewhere, then great (assuming it is still relevant).
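(A small sketch of the distinction between points 3 and 4, in my own made-up example: a comment that merely restates the code versus one that records motivation only a human could have supplied. The gateway scenario in the comment is hypothetical.)

    MAX_RETRIES = 3
    # Point 4 (valuable, and unrecoverable by an LLM unless a human wrote it
    # down): capped at 3 because the hypothetical upstream gateway starts
    # serving cached 502s after the fourth attempt, which used to trip our
    # circuit breaker.

    def attempt(call):
        failures = 0
        while True:
            try:
                return call()
            except OSError:
                failures += 1       # Point 3 (worthless): "add one to failures"
                if failures >= MAX_RETRIES:
                    raise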
Most of the discussion we see about LLM coding is how fast it can churn out thousands of LOC on a greenfield project, or how good they can be at finding bugs, but neither of these are very relevant to the main job of developers which is maintaining and extending existing codebases. It would be lovely if most projects were greenfield, but they are not.
In any large project that has been maintained over a few years or more, there will inevitably be an ever growing accumulation of bug fixes and patches for specific issues that have been discovered in production, likely poorly documented and out of sync with any original documentation that may have existed (which anyway tends to be more idealistic and architectural in nature, not capturing these types of post-deployment detail and special cases).
The natural tendency of an LLM is to want to rewrite code to match the statistics of what it was trained on, and they need to be reined in via prompting to resist this and not touch more code than is minimally needed for what is being asked. Of course, asking an LLM to do something is a bit like asking a dog to do something - sometimes it will, and sometimes it won't. I expect over the next few years we'll be experiencing, and reading about, more and more cases where LLMs have introduced bugs and regressions into mature code bases because of this - rewriting code that should have been left alone. The general rule is that if you are tempted to rewrite something, you had better first understand why it was coded the way it is in the first place.
I can't help but compare the current state of "AI" (LLMs) to the early days of things like computer speech recognition or language translation when they were considered amazing, and everyone was gushing about them, but at the end of the day the accuracy still wasn't good enough to make them very useful - that would take another 10-20 years.
Another historical lesson/perspective would be expert systems which at the time were considered as AI and the future of machine intelligence (the Japanese "5th generation systems" were going to take over the world, CYC promised to offer human level intelligence), but in retrospect were far less important. It won't be until we move on from LLMs to something more brain-like, deserving to be called AGI, that LLMs will be put in their historical perspective.
At the moment DeepMind seems to be the only one of the big labs admitting/recognizing that scaling LLMs isn't going to achieve AGI and that "a few more transformer-level breakthroughs" are needed. Hassabis has however talked about LLMs (GPTs) still being a part of what they are envisaging, which one could either regard as a pragmatic stepping stone to real AGI, or perhaps that they are not being ambitious enough - building something that still needs to be spoon-fed language rather than being capable of learning it from scratch.