
Discussion (9 Comments)
Read Original on HackerNews
Sometimes, I'll catch it writing the same logic 2x in the same PR (recent example: conversion of MIME type to extension for images). At our scale, it is still possible to catch this and have these pulled out or use existing ones.
I've been mulling whether microservices make more sense now as isolation boundaries for teams. If a team duplicates a capability internally within that boundary, is it a big deal? Not clear to me.
For example, if AI generates 2x of a utility function that does the same thing, yes, that is not ideal, but it is also fairly minor in terms of tech debt. I think as long as all behaviors introduced by new code are comprehensively tested, it becomes less significant that there is some level of code duplication.
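A hypothetical sketch of the kind of duplication described above: two independently generated helpers that both map an image MIME type to a file extension. The names and mappings here are illustrative, not taken from the commenter's actual PR:

```python
# Hypothetical example of AI-generated duplication: two helpers written
# independently in the same codebase that do the same job.

def mime_to_extension(mime: str) -> str:
    """First helper: maps an image MIME type to a file extension."""
    return {
        "image/jpeg": ".jpg",
        "image/png": ".png",
        "image/gif": ".gif",
    }.get(mime, "")

def get_ext_for_mimetype(mimetype: str) -> str:
    """Second helper, generated later in the same PR: same logic,
    different name, so a reviewer (or a grep) can easily miss it."""
    mapping = {"image/jpeg": ".jpg", "image/png": ".png", "image/gif": ".gif"}
    return mapping.get(mimetype, "")
```

At small scale a reviewer can spot this pair and consolidate them; the comment's point is that this gets harder as the volume of generated code grows.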
It's also something that can be periodically caught and cleaned up fairly easily by an agent tasked to look for it as part of review and/or regular sessions to reduce tech debt.
A lot of this is adapting to new normal for me, and there is some level of discomfort here. But I think: if I were the director of an engineering org and I learned different teams under me had a number of duplicated utility functions (or even competing services in the same niche), would this bother me? Would it be a priority for me to fix? I think I’d prefer it weren’t so, but probably would not rise to the level of needing specific prioritization unless it impacted velocity and/or stability.
We still run into the same issues that this brings about in the first place, AI or no AI. When requirements change, will it update both functions? If it rewrote them because it didn't see they existed in the first place, probably not. And there will likely be slight variations in function / component names, so it wouldn't be a clean grep to make the changes.
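A minimal sketch of the drift risk this comment describes, under the assumption of two duplicated, differently named helpers: a new requirement (say, WebP support) gets applied to one copy, while a grep for the other name never finds the stale twin:

```python
# Hypothetical illustration of drift between duplicated helpers after a
# requirement change is applied to only one of them.

def mime_to_extension(mime: str) -> str:
    """Updated copy: WebP support added when the requirement changed."""
    mapping = {
        "image/jpeg": ".jpg",
        "image/png": ".png",
        "image/webp": ".webp",  # new requirement applied here
    }
    return mapping.get(mime, "")

def get_ext_for_mimetype(mimetype: str) -> str:
    """Stale copy: a grep for 'mime_to_extension' never surfaced this
    helper, so its callers silently lack WebP support."""
    mapping = {"image/jpeg": ".jpg", "image/png": ".png"}
    return mapping.get(mimetype, "")

# The two helpers now disagree on "image/webp": one returns ".webp",
# the other returns the empty-string fallback.
```

Nothing fails loudly here; the divergence only shows up when some caller of the stale copy hits the new input, which is the "6 months or a year" shape of the problem.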
It may not impact velocity or stability in the exact moment, but in 6 months or a year it likely will: the classic trope of tech debt.
I have no solution for this; it's definitely a tricky balance, and one we've been struggling with in human-written code since the dawn of the field.
With the obvious preface of "thoughtlessly adding." Of course it's not a real law, it's a tongue-in-cheek observation about how things have tended to go wrong empirically, and highlights the unique complexity of adding manpower to software vs. physical industry. Regardless, it has been endlessly frustrating for people to push AI/agentic development by highlighting short-term productivity, without making any real attempt to reconcile with these serious long-term technical management problems. Just apply a thick daub of Claude concealer, and ship.
Maybe people are right about the short-term productivity making it worthwhile. I don't know, and you don't either: not enough time has elapsed to falsify anything. But it sure seems like Fred Brooks was correct about the long-term technical management problems: adding Claudes to a late C compiler makes it later.
https://www.anthropic.com/engineering/building-c-compiler