Discussion (108 Comments)
For things that are appropriate to build with agents, I have come to hold the strong opinion that you need to go all-in. If you built it with an agent, then you fix it with an agent, you debug it with an agent, and you change it with an agent.
In that case you should not consider yourself the steward of the source code and worry about “cognitive debt”; it’s literally not your job anymore. Your job is keeper of the specification and the care and feeding of the agents.
If you adopt the mindset that “I’m not going to build the documentation for me, I’m going to build it for the agent”, and “I’m not going to try to use my development skills to debug something I didn’t write, I’m going to make specific interfaces for the agent to understand the state and activity of the running code”, etc., you’ll be a lot happier and more successful.
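For what it's worth, here is a minimal hypothetical sketch of what such an agent-facing interface could look like: instead of a human stepping through a debugger, the running app exposes a structured snapshot the agent can read. All names and fields below are invented for illustration, not taken from the comment.

```python
# Purely illustrative sketch of an "agent-facing" state interface.
# The idea: the app reports its own runtime state as JSON so an agent can
# inspect and debug it without a human reading the source.
import json
import time


class AgentStateReporter:
    """Collects key runtime facts and emits them as JSON for an agent to read."""

    def __init__(self, app_name: str) -> None:
        self.app_name = app_name
        self.events: list[dict] = []

    def record(self, kind: str, **details) -> None:
        """Log a structured event, e.g. a request handled or a job failed."""
        self.events.append({"t": time.time(), "kind": kind, **details})

    def snapshot(self) -> str:
        """Return current state as JSON; the agent reads this, not the code."""
        return json.dumps(
            {
                "app": self.app_name,
                "event_count": len(self.events),
                "recent_events": self.events[-20:],
            },
            indent=2,
        )


# The instruction to the agent becomes "call snapshot() and diagnose",
# rather than "read and understand the source".
reporter = AgentStateReporter("invoice-sync")
reporter.record("job_failed", job_id=42, error="timeout")
print(reporter.snapshot())
```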
If you are using agents for autocomplete in your editor, or you open a separate chat window to ask a question about your code- that’s a very low level of agent usage and all your existing dev skills and responsibilities still apply.
If you’re using a planning framework like superpowers (the skill) and just laying out the spec for the program, then keep your fingers out of the source code, and don’t waste your time reading it. Have the agent explain it, showing you in the IDE, and make the agent make any changes you want.
You can inject philosophy into the agent and ensure that it sticks to it. With sufficient drilling, the LLM will begrudgingly implement it. The most important principle is SIMPLE > COMPLEX at every level, and you have to monitor this continuously, either manually or agentically.
Alternatively, the LLM will use its tiny context window to build true spaghetti that even it can no longer fix. This is the default path, and the path that far too many have taken.
That said, it still frequently introduces subtle bugs, so I have to review every change carefully.
The real trick is learning when to use it. Some tasks are much faster to do myself, while others are faster with Claude Code.
Of course, we have had compilers and tooling, but those are the pencil and drafting board of the draftsperson. An ecosystem of packages, dependencies and APIs has evolved, but those are often just spells the software magician invokes after reading the spellbook^H^H^H^H^H^H^H^H^H stackoverflow^H^H^H^H^H^H^H^H^H^H^H^H^H API documentation.
We are going to need to build a new set of boundaries and abstractions with new handover protocols to manage this mess.
AND inspecting the actual production line. Even Apple audits Foxconn (the most successful and reliable assembly line humans ever built) onsite annually.
Plus, 'agile' in quite a lot of companies is really waterfall that's been broken into sprints, without the planning of proper waterfall or the discovery and learning of real Agile. The software still gets built though. Maybe software is actually quite easy to plan.
It's the people that claim to "do agile" that invariably don't do it. But software development used to fail most of the time, and it doesn't do that anymore.
You would need the agent to be a processing step owned by someone responsible. But the intelligent part of an agent is what makes it viable as something more than a processing step. What to do with an intelligence that can't take responsibility? Not answering this question leads to cognitive debt.
So much of what makes high-functioning teams work is a sense of ownership and stewardship, and what makes low-functioning teams break is a lack thereof. Someone with pride, drive, and a high standard feeling responsible for a particular area or thing.
In the past, that ownership could be individual or collective, but with AI and a lot more lane-crossing, ownership should tend toward smaller groups (or individuals).
A developer can design, but a designer needs to review it. A designer can code, but the owner of the code must review it.
This might feel like gatekeeping, but it's the only way.
Wait...
At one business I was a part of where that experiment was tried, it failed badly. In reality, people were being switched around on projects and the "owner" was changing every few months. The end result was quite messy, both in terms of technical debt and politics (about who is the final decision-maker).
I've said this before, but people gloss over this fact.
>Someone with pride, drive, and a high standard feeling responsible for a particular area or thing.
I've also said this before, but AI-glazers just respond with "I think we may just have to let go of pride & kudos and their connection to our identity."
Most people who vibecode don't give a shit about their work. Any solution is a solution as long as it works.
>This might feel like gatekeeping, but it's the only way.
Gatekeeping is not inherently bad. We want gatekeeping.
If I'm getting surgery, I want an actual doctor with proven credentials to do it.
And to anyone claiming that software doesn't kill, please look up "Therac-25" or the 65 people that died due to Tesla's "Full Self-Driving".
And that’s okay! Much like it’s okay to let other people write the code.
What is important is that the code written by Agentic AI is covered by automated tests adequately, and that you verify that the architectural plan is solid. But then this is also what you do with your colleagues’/juniors’ code.
Now, we all know horrible managers who didn't keep up to date or use their own thinking. This will happen with AI usage too. What's more, we are expecting engineers to have a manager's mindset (managing AI agents, product requirements, etc.). Many engineers are horrible at this and have no desire or ability to become a manager. This is why they went into engineering in the first place.
Bingo. If I wanted to spend my life managing incompetent sycophants, I would've studied for an MBA to try to rise through the ranks at McKinsey.
The funny part is that these are the same people who are upset that these folks up the food chain "do nothing".
So it's no wonder people aren't happy.
That’s the neat trick kiddo, they won’t. Across the industry, the messaging is clear: use AI and be more productive. Management is salivating at the idea of getting rid of people and keeping a higher share of profits for themselves. Most ICs I talk to are increasingly expressing the feeling of burnout, fear of losing jobs and resentment that AI is being pushed the way it is being pushed. I have more than a few conversations where people have clearly expressed that they are mostly focused on keeping their jobs. They don’t care about cognitive debt and some are looking forward to the time when the debt comes due.
It is depressing, but it is the reality.
Across the board, I still see people loving to over-design things that can be much simpler. This hasn't changed much because of LLMs; LLMs just allow them to create the complicated implementations much faster.
In terms of over-engineering, I wouldn't be surprised if the human tendency toward skeuomorphism (combined with a loss of technical skill) creates even weirder code.
Enshittification in this area will be swift. And there will be grand articles here on HN saying “nobody could possibly have seen this coming.” Yes, we did.
Which means the stuff that replaces it will also happen faster.
Overall, the quality of the software is likely similar, since AI does not have purpose, and software still largely reflects human will and thinking.
The vendor was basically right at the end of the "fun" part of cranking out features, and just about to hit the "rubber meets the road" part where you start fixing bugs, finding new edge cases, discovering new hidden requirements, and realizing X% of your design assumptions were completely wrong. Oh yeah, and minor little mop-up tasks that don't wow the client, like integrating with a payment processor, integrating with our internal scheduling system, exporting invitee lists from our CRM into our app, etc.
It's possible we're in a similar cognitive debt situation to having to maintain a large, swiftly-AI-coded app. After about 6 months of stressful development, which started with what I call throwing dye in the water and eventually progressed to understanding one small feature or flow at a time, we have maybe 50% of the mental model we'd have if we'd built the app ourselves. Whole chunks of the app are still a black box to us.
It doesn't help that requirements have evolved so much since the original documentation that it's worse than useless because we can't trust it. So the code, which we don't understand, is the only documentation of the current requirements.
Of course, our internal clients are pissed because the final product is taking so much longer than expected, when they could see all these awesome shiny, happy-path, 80%-done features 6 months ago. We're in a constant fire drill. Everyone on the project is miserable. It's the least fun kind of development.
I think it's great for writing tests and sanity-checking changes, but I wouldn't let it write core driver code (I'm a systems programmer, so YMMV). Maybe in a month I'll think differently.
When all you've got is a hammer...
...you'll eventually stop knowing how to use other tools.
My primary editor is vim, and for a significant amount of time I was using it in almost puritan fashion; this was before LLMs were mainstream.
However, I could not use vim to edit Java, even with a language server. I tried, but each time I went back to IntelliJ. The rest of the codebase, in Python, Ruby, and TypeScript, was typically fine.
The reason was twofold: everyone was using all of the features that IntelliJ had to offer, so the code was structured around IntelliJ and, obviously, the Java design patterns that were popular at the time. Everything went through factories and managers and interfaces, and tracking them through a pure editor was almost impossible. The IDE handled it for you.
But everything else? Things I or others had to build from the ground up were built with this cognitive limitation in mind, which means I can fit everything in my head nicely and edit with vim efficiently, even without a language server.
That cognitive limitation is good for the software. It's easy to explain, easy to debug, easy to add to and subtract from. And I've come to disregard the IntelliJ way, or the vibe-code-till-it-works approach that is common everywhere now. The principle is KISS: keep it simple, stupid. If AI will not do that, then you have to. It is a simple philosophical question that is more important than ever. And sadly most people still don't realize it; they will happily tack on the next "feature" with the scaling they don't need at the time and the design pattern they don't need at the time, and prematurely optimize themselves into cognitive and technical bankruptcy.
In that situation, coming in cold to a library that you haven't worked on before to make a change is the normal case, not "cognitive debt."
If you have common coding standards that all your libraries abide by, then it's much easier to dive into a new one.
Also, being able to ask an AI questions about an unfamiliar library might actually help?
Smaller teams have more agency to move and usually team members with broader responsibility and understanding of the systems. Also possibly closer to stakeholders, so are already involved in specification creation and know where automation can add value. Add an AI agent and they can pick and choose where they can be most effective at a system level.
Bigger teams have clear boundaries that stop agency - blockers due to cross team dependencies, potentially no idea what stakeholders want, just piecemeal incremental change of a bigger system specified by someone else. If all they can do is automate that limited scope it's really just like faster typing.
Not every company is going to see those boundaries and stakeholders as features, and they'll be under pressure to "mitigate those blockers to execution". That's where the cognitive debt skyrockets.
But... as team size grows, LLMs can be more valuable in other ways. Larger teams typically have larger codebases to comprehend, more users, more bug reports to triage, etc. It's SO much easier to get up to speed on a big existing codebase now.
Large teams prioritize service resilience and depth of coverage.
The ability to generate code has seemingly transposed what people think of as a "high-performing team" from one that produces quality to one that produces quantity, with the short-term gains obviously increasing long-term technical debt.
The software is necessarily complex due to legislative requirements, and the corpus of documentation the AI has access to just doesn't seem to capture the complexities and subtleties of the system and its related platforms.
I can churn out ACs quicker, but if I just move on to the next thing as if they're 'done' then quality is going to decline sharply. I'm currently entirely re-writing the first set of ACs it generated because the base premise was off.
This is both a prompt-engineering problem and an availability-of-enough-context documentation problem, but both of those involve fairly long learning curves. Not many places do knowledge management very well, so the requisite base information just may not be complete enough, and one missing 'patch' can very much change a lot of contexts.
I did a live demo in front of the CPAs, using their documentation, and Claude asked clarification questions they hadn't thought of and exposed gaps in the old manual processes.
Ever since LLMs started writing decent code, I started feeling like a part of that joy of code-writing has been taken away.
Using LLMs literally leaves a developer to do (what I find is) the worst part about software development: debugging someone else’s code.
Besides this, everything feels rushed. I am under the impression that I can’t “take my time” to think about a problem anymore. It almost feels wasteful now. I have to “just do it”.
It makes me nostalgic and I feel like I’ve lost something about coding that made me enjoy it.
But it is the reality we live in and I’m adapting to it. What I’m wondering is whether I should adapt or, rather, push back.
That being said: this feels a little like it was written using AI.
I know older devs that reminisce for the days of programming straight to the metal in assembly (e.g. on DOS or Amiga) and “knowing exactly what the computer is doing” which feels somehow familiar!
Even more familiar are senior devs moving to management (I know this isn't an original metaphor).
This, exactly, has been the problem in cultures that have produced broken, lower-quality things in general. Don't think deeply about the problem and don't think about the long-term consequences. Just grab whatever solution gives some immediate result the fastest. "Jugaar."
Many people are slipping into this culture now with the new pressure for immediate production pushed by the AI crowd. It's "jugaar." It's trading short term gain for long term breakage, chaos, and pain. It's also social and economic pressure to not do things properly.
Those who want to take the time to really understand things, or to build things correctly, are mocked or punished for being slow and simple-minded. "Just do it this way, look, everyone is doing it and making more money faster!" This is also part of the culture that drags everyone into jugaar.
The SWEs that go all-in on AI will never understand this, because they have never enjoyed the joy of code-writing. I would even go as far as saying that many of them even hate it.
Of this group, I think the majority are the same people that have joined the industry not because of an innate love for engineering, but because they saw an opportunity to make big bucks in big tech.
Am I the only one that is finding quite the opposite? I feel like a kid again, back when I had no responsibilities and infinite time to play around and build things. Being able to look at my existing tooling and say "there's a rough edge here" and then whip out the equivalent of a Milwaukee Bandfile [1] and smooth it out is making it fun to go to work again.
[1] https://www.milwaukeetool.com/products/details/m12-fuel-1-2-...
That just sounds like everyone is going to be management. Blindly setting goals and demanding features of a black box, formerly the development team, soon to be 'AI' agents.
Carmack once wrote something I’ve been holding dear ever since, and I’m paraphrasing: “even if you copy-paste code, make sure you write it.” And it actually works; the outcome of just having your brain make your fingers type the code is easily differentiated from just pasting it.
"the question becomes how teams will manage cognitive debt" the question is why it is allowed to occur when it is avoidable. Farcical nonsense. Write the code yourself or be silent.
> Cognitive Debt, Like Technical Debt, Must Be Repaid
In quite a few circumstances, cognitive debt doesn't entirely need to be repaid. I personally found with multiple projects that certain directions aren't the one I want to go in. But I only found it out after fully fleshing it out with Claude Code and then by using my own app realizing that certain things that I thought would work, they don't.
For example, I created library.aliceindataland.com (a narrative driven SQL course). After a while, I noticed that the grading scheme was off and it needed to be rewritten. The same goes for how I wanted to implement the cheatsheet, or lessons not following the standard format. Of course, I need to understand the new code but I don't need to understand the old code.
With other small forms of code, I just don't really need to know how things work because it's that simple. For example, every 5 minutes I track which wifi network I'm connected to. It's mostly useful to simply know whether I went to the office that day or not. A Python script retrieves the data, and when I look at it, I can recognize that it's correct. But doing it this way sure is a lot faster than active recall.
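For what it's worth, a minimal sketch of what such a script might look like (purely illustrative: it assumes a Linux machine with NetworkManager's nmcli available and a cron entry that runs it every 5 minutes; the commenter's actual script isn't shown):

```python
# Hypothetical version of the "which wifi am I on" logger described above.
# Assumes `nmcli` (NetworkManager) is installed; run from cron every 5 minutes.
import csv
import subprocess
from datetime import datetime
from pathlib import Path

LOG_FILE = Path.home() / "wifi_log.csv"  # invented location for the example


def current_ssid() -> str:
    """Return the SSID of the active wifi connection, or '' if not connected."""
    out = subprocess.run(
        ["nmcli", "-t", "-f", "ACTIVE,SSID", "dev", "wifi"],
        capture_output=True, text=True, check=False,
    ).stdout
    for line in out.splitlines():
        active, _, ssid = line.partition(":")
        if active == "yes":
            return ssid
    return ""


def main() -> None:
    # Append one row per run: timestamp + SSID (empty SSID means "not at the office").
    with LOG_FILE.open("a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now().isoformat(timespec="seconds"), current_ssid()]
        )


if __name__ == "__main__":
    main()
```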
At work, I've had similar things. At my previous job I created SEO and SEA tools for marketing experts. So I remember creating this whole app that gave experts insights into SEO things that Ahrefs and similar sites don't, as it was tailored to the data of the company I worked at. The feedback I basically got was: the data is great, the insights are necessary, but the way the app works is unusable for us. I was a bit perplexed, as I personally didn't find it that complicated. But I also know that I'm not the one using it. Then I created a second version and that was way more usable. The second version assumed a completely different front-end app and front-end architecture, though. All the cognitive debt of V1? No payback needed.
The reason that this is the case, as it seems to me, falls under a few categories:
1. Experimenting with technologies. If you have certain assumptions about how a technology works but it turns out you're wrong, or you learn through the process that an adjacent technology works way better, then you need to redo it. Back when coding by hand was such a thing, I had this with a collaborative drawing project called Doodledocs (2019). I didn't know if browsers supported pressure sensitivity and to what extent it was easy to implement. It required a few programming experiments.
2. It's a small and simple script, not much more to it.
3. Experimenting with usability. A lot of the time, we don't know how usable our app is. In my experience, this seems to be either because (1) it's a hobby project or (2) the UX people have been fired years ago. In these cases, more often than not, UX becomes an afterthought. But with LLMs, delivering a 95% fully working version is usually done within a week for a greenfield project. This 95% fully working version is an amazing high fidelity interaction prototype (95% no less). Once you do that for a few iterations, you then understand what you really need. Once you understand what you really need, then you can start repaying the cognitive debt.
I've found it's usually category 3, sometimes 2 and rarely 1.
I don't agree that demand for software guys will drop.
What I think is, demand for software people will go up while wages will be suppressed. And more software will be in the market as a whole.
There are so many craftsmen in the market who hardly make a liveable wage, while a select few make bank! The same pattern will repeat in software.
Mass-market software with large-scale adoption will drop, and specialised tools and services will take its place.
Which means thousands of calorie trackers, thousands of image editors, etc. But as scale drops, the income and revenue of companies will also drop.
Software wages are an anomaly in select countries; I've always believed software wages shouldn't be higher than a plumber's or a mechanic's.
Wages in the trades have gone up a lot recently, at least where I'm from. Decades of parents telling their kids the trades are for losers lowering the supply of capable craftsmen...
And not all software will work as specialized tooling.
Calorie tracker apps? Sure.
Operating system kernels? Each with their own schedulers and allocators and ABIs and syscalls? Definitely not.
"Technical" and "cognitive" debt aren't really distinct phenomena; the spirit of the original definition of "technical debt" was that it WAS the delta between the system-as-it-is, and the human understanding of how best to solve whatever problem the system was intended to solve [1].
If we accept collapsing them back down to one term, then "managing cognitive debt" is the same thing as "managing technical debt": work to match the system to the human understanding of the problem the system is meant to address. The article calls out "emerging" techniques to do just this:
- More rigorous review practices
- Writing tests that capture intent (see the sketch after this list)
- Updating design documents continuously
- Treating prototypes as disposable
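As a concrete (and entirely hypothetical) illustration of the second item: a test that captures intent names the business rule and asserts it directly, so a reader, or an agent, can recover the intent even if the implementation is rewritten. The refund rule and function below are invented for the example.

```python
# Hypothetical example of a test that captures intent: the test names state the
# business rule ("refunds allowed within 14 days"), not an implementation detail.
import unittest
from datetime import date, timedelta


def is_refund_eligible(purchase_date: date, today: date) -> bool:
    """Refunds are allowed within 14 days of purchase (assumed rule)."""
    return (today - purchase_date) <= timedelta(days=14)


class RefundPolicyIntent(unittest.TestCase):
    def test_refund_allowed_within_14_days_of_purchase(self):
        self.assertTrue(is_refund_eligible(date(2024, 1, 1), date(2024, 1, 14)))

    def test_refund_rejected_after_14_day_window_has_closed(self):
        self.assertFalse(is_refund_eligible(date(2024, 1, 1), date(2024, 1, 16)))


if __name__ == "__main__":
    unittest.main()
```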
To me these are not "emerging," but rather "well-known industry best practices." Though maybe they're not that well known in fact? [EDIT TO ADD] On the other hand, it would make sense that they ARE well known, and that teams therefore reach for these familiar techniques to try and solve this "new" problem.
Putting in my 2c for the closing questions/thoughts in the article:
> How will they shape socio-technical practices and tools to externalize intent and sustain shared understanding?
Honestly? We'll probably end up doing these things more or less the same ways we always have. AI has not actually changed anything fundamental about how an individual encounters the world; there always was, always will be, and always will have been WAY more going on than we can fully get our heads around, but it's also always been the case that we can partially get our heads around most any problem space.
> How will they use Generative and Agentic AI not only to accelerate code production, but to maintain their collective theory?
I suspect the answer to this one might well be that high-performing teams will have to scrupulously AVOID "accelerating code production" using AI in order to make sure what they are creating actually composes into the system they think they're building. If human understanding is the bottleneck, then the humans will have to produce less crap they need to understand!
[1]: https://wiki.c2.com/?WardExplainsDebtMetaphor, particularly the "Burden" and "Agility" sections.
AI is making people afraid as they run into these things. It's a little sad that they don't have the historical context or perspective to realize these are old problems. I imagine this is what Samurai felt like as flintlock guns came in and completely upended hundreds of years of martial tradition. How will they be able to defend themselves if they don't learn Kenjutsu? What will happen to our Bushido?
And I do think the fear is warranted. But I don't think people are going to act differently once they realize this unfortunate status quo hasn't yet led to the collapse of civilization, or their paychecks. Once the fear has passed, we will move on into the new normal, willfully ignorant and mildly disappointed.
More code is not better. More code, more quickly, is worse. Don't delude yourself into thinking you are more productive; you are just digging a deeper hole.
The LLMs have been trained on soulless corporate speak.
So the logical next step is to focus on Biological Immortality and short of that Digital Immortality. God speed everyone.
Lack of documentation, failed onboarding, poor architectural understanding, missing tests, review fatigue: if all of these are simply grouped together as “cognitive debt,” isn't that just a failure to build a proper workflow?
The scope is too broad. It reminds me of Stepanov, the creator of the STL, saying that if everything is an object, then nothing is.
When an abstraction tries to cover too many things, that abstraction inevitably fails.
The way AI specifically amplifies this problem is through the difference between direct work and indirect work. The core issue is that “it works” can easily create the illusion that “I understand it.”
Another thing I felt while reading this essay is that it almost seems to go against the direction of modern software engineering. Once software grows beyond a certain size, it is already impossible for anyone except perhaps the original designer to understand the entire system. The goal is not for everyone to understand everything.
The real goal is to make local changes safely, and to ensure that the system keeps running without major disruption when one replaceable part — including a person — leaves.
At this point, many things being described in the industry as “cognitive debt” look to me like rhetorical tools for selling essays.
Reading this, I even wondered: if I write about trendy terms like cognitive debt or spec-driven development on my own blog, will people pay more attention?
To be honest, spec driven development has a similar issue. When you go from a specification down into implementation, information loss is inevitable. LLMs cannot fully solve that. In the end, a human supervisor still has to iterate several times and tune the result precisely. The real question should be: how far down should the specification go? In other words, at what local scope does it become faster for a human programmer to modify the code directly than to keep steering the AI-generated code?
But that discussion is often missing.
As people sometimes say, “when you start talking about Agile, it stops being agile.” In the same way, I think the “cognitive debt” frame may be a flawed abstraction of the current phenomenon.
The moment a living practice is nominalized, packaged, and turned into a consulting product, it loses its original dynamism and context-dependence, becoming a dead template.
It puts various discomforts that emerged after AI adoption — review burden, lack of understanding, fatigue — into a single box.
Then it attaches the economic metaphor of “debt” to emphasize the seriousness of the problem, and subtly injects the normative idea that “this must eventually be repaid.”
Thinking back to Parnas’s 1972 work on information hiding, software engineering was built on the principle that local understanding should be sufficient, and global understanding is not the goal.
The cognitive debt framing seems to implicitly reverse that principle by treating “shared understanding” as something that must be preserved as a global unit. I do not understand why the discussion keeps moving toward the idea that everything must be understood.
It reminds me of Bjarne’s onion metaphor for abstraction: if an abstraction works, you do not necessarily need to peel it apart without reason.
My main issue with the current cognitive debt framing is that the layer it tries to cover is too broad.