
Discussion (8 Comments)

DonsDiscountGas · 19 minutes ago
It's worth considering whether that's really a problem. With AI, it's easier to reinvent the wheel 1000 times than to get 1000 people together to agree on requirements (which most people would then modify anyway).
cowartc · about 1 hour ago
This is a symptom of the problem. The real issue is that everyone is running off and building their own thing without tying back to a north star and coordinating. I've seen this play out before at a F200. Tooling proliferation resolves itself once everything is driving toward the same goal and someone owns it. Without that, you're just treating symptoms.
CharlieDigital · about 3 hours ago
I have found this at a different scale in our company: agents keep writing the same private static utility methods over and over again without checking for it in existing code.

Sometimes, I'll catch it writing the same logic 2x in the same PR (recent example: conversion of MIME type to extension for images). At our scale, it is still possible to catch this and have these pulled out or use existing ones.
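A minimal sketch of the pattern described above; the function names and mappings are invented for illustration, not taken from any real codebase:

```python
# Hypothetical illustration: two near-duplicate helpers an agent might
# generate independently in the same PR, both mapping MIME types to
# file extensions, with subtly different names and behavior.

def mime_to_extension(mime_type: str) -> str:
    """First copy, written for one code path."""
    mapping = {
        "image/jpeg": ".jpg",
        "image/png": ".png",
        "image/gif": ".gif",
    }
    return mapping.get(mime_type, "")

def get_ext_for_mime(mime: str) -> str:
    """Second copy, written later elsewhere, with a different
    name, a smaller table, and a different fallback."""
    known = {"image/jpeg": ".jpeg", "image/png": ".png"}
    return known.get(mime, ".bin")

# The two copies already disagree on "image/jpeg" and on the fallback:
print(mime_to_extension("image/jpeg"))  # .jpg
print(get_ext_for_mime("image/jpeg"))   # .jpeg
```

The duplication itself is cheap; the drift between the copies is what makes consolidating them later painful.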

I've been mulling whether microservices make more sense now as isolation boundaries for teams. If a team duplicates a capability internally within that boundary, is it a big deal? Not clear to me.

appplication · about 2 hours ago
Let me preface this by saying I have been writing very exacting code to a high standard for most of my career. But with AI-generated code, I'm not sure all the same value props we're used to with traditional hand-written code still apply.

For example, if AI generates 2x of a utility function that does the same thing, yes, that is not ideal, but it is also fairly minor in terms of tech debt. I think as long as all behaviors introduced by new code are comprehensively tested, it becomes less significant that there can be some level of code duplication.

It's also something that can be periodically caught and cleaned up fairly easily by an agent tasked with looking for it as part of review and/or regular tech-debt sessions.
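One way such a cleanup pass could work, sketched here as a small script rather than any specific tool: hash each function's AST with identifiers normalized away, so renamed copies of the same logic collide on the same fingerprint. This assumes Python source and is purely illustrative.

```python
# Sketch of an automated duplicate-finder: fingerprint each function body
# with all variable/function names replaced by "_", so textual renames
# don't hide structural duplicates.
import ast
import hashlib

class Anonymize(ast.NodeTransformer):
    """Replace every identifier with a placeholder before hashing."""
    def visit_Name(self, node: ast.Name) -> ast.Name:
        node.id = "_"
        return node

def normalized_fingerprint(func: ast.FunctionDef) -> str:
    module = ast.Module(body=func.body, type_ignores=[])
    dump = ast.dump(Anonymize().visit(module))
    return hashlib.sha256(dump.encode()).hexdigest()

def find_duplicates(source: str) -> dict[str, list[str]]:
    """Group function names by their normalized fingerprint."""
    groups: dict[str, list[str]] = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            groups.setdefault(normalized_fingerprint(node), []).append(node.name)
    return {h: names for h, names in groups.items() if len(names) > 1}

code = """
def to_ext(m): return TABLE.get(m, "")
def mime_to_extension(mime): return TABLE.get(mime, "")
def unrelated(x): return x + 1
"""
print(find_duplicates(code))  # the two lookup helpers share a fingerprint
```

This only catches structurally identical copies; copies that have already drifted in behavior need fuzzier matching, which is exactly why catching them early matters.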

A lot of this is adapting to a new normal for me, and there is some level of discomfort here. But I think: if I were the director of an engineering org and I learned different teams under me had a number of duplicated utility functions (or even competing services in the same niche), would this bother me? Would it be a priority to fix? I think I'd prefer it weren't so, but it probably would not rise to the level of needing specific prioritization unless it impacted velocity and/or stability.

mikodin · 18 minutes ago
> For example if AI generates 2x of a utility function that does the same thing, yes that is not an ideal, but is also fairly minor in terms of tech debt. I think as long as all behaviors introduced by new code are comprehensively tested, it becomes less significant that there can be some level of code duplication.

We still run into the same issues this brings about in the first place, AI or no AI. When requirements change, will it update both functions? If it rewrote them because it didn't see they existed in the first place, probably not. And there will likely be slight variations in function/component names, so it wouldn't be a clean grep to make the changes.
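A toy illustration of the grep problem, with invented file and function names: a textual search for one helper's name misses its independently written twin, so a search-and-replace requirements change updates only half the copies.

```python
# Hypothetical two-file codebase where the same capability exists under
# two different names; a grep for one name finds only one of them.
import re

codebase = {
    "upload.py": "ext = mime_to_extension(content_type)",
    "thumbs.py": "ext = get_ext_for_mime(content_type)",
}

hits = [f for f, src in codebase.items() if re.search(r"mime_to_extension", src)]
print(hits)  # only upload.py; thumbs.py's variant slips through
```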

It may not impact velocity or stability in the exact moment, but in six months or a year it likely will: the classic trope of tech debt.

I have no solution for this. It's definitely a tricky balance, and one we've been struggling with in human-written code since the dawn of the field.

minimally · about 3 hours ago
LeCompteSftware · about 3 hours ago
Brooks's Law: Adding manpower to a late software project makes it later. https://en.wikipedia.org/wiki/The_Mythical_Man-Month

With the obvious preface of "thoughtlessly adding." Of course it's not a real law; it's a tongue-in-cheek observation about how things have tended to go wrong empirically, and it highlights the unique complexity of adding manpower to software versus physical industry. Regardless, it has been endlessly frustrating to watch people push AI/agentic development by highlighting short-term productivity, without making any real attempt to reckon with these serious long-term technical management problems. Just apply a thick daub of Claude concealer, and ship.

Maybe people are right about the short-term productivity making it worthwhile. I don't know, and you don't either: not enough time has elapsed to falsify anything. But it sure seems like Fred Brooks was correct about the long-term technical management problems: adding Claudes to a late C compiler makes it later.

> The resulting compiler has nearly reached the limits of Opus’s abilities. I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.
https://www.anthropic.com/engineering/building-c-compiler
dannersy · about 1 hour ago
Honestly, it isn't particularly good at greenfield stuff either. It's a mess all around. The claims that AI is better at code than humans haven't matched my direct experience, or that of people I know at larger companies. Incident and regression rates are up sharply, requiring more people to play whack-a-mole.