FR version is available. Content is displayed in original English for accuracy.
Advertisement
Advertisement
⚡ Community Insights
Discussion Sentiment
64% Positive
Analyzed from 3074 words in the discussion.
Trending Topics
#code#don#more#changes#agent#never#coding#doing#every#llms

Discussion (92 Comments)Read Original on HackerNews
1. I have no real understanding of what is actually happening under the hood. The ease of just accepting a prompt to run some script the agent has assembled is too enticing. But, I've already wiped a DB or two just because the agent thought it was the right thing to do. I've also caught it sending my AWS credentials to deployment targets when it should never do that.
2. I've learned nothing. So the cognitive load of doing it myself, even assembling a simple docker command, is just too high. Thus, I repeatedly fallback to the "crutch" of using AI.
It has about doubled my development pace. An absolutely incredible gain in a vacuum, though tiny compared to what people seem to manage without these self-constraints. But in exchange, my understanding of the code is as comprehensive as if I had paired on it, or merged a direct report's branch into a project I was responsible for. A reasonable enough tradeoff, for me.
https://vivekhaldar.com/articles/when-compilers-were-the--ai...
We are completely comfortable now letting the compilers do their thing, and never seem to worry that we "don't know what is actually happening under the hood".
I am not saying these situations are exactly analogous, but I am saying that I don't think we can know yet if this will be one of those things that we stop worrying about or it will be a serious concern for a while.
Day 1: Carefully handles the creds, gives me a lecture (without asking) about why .env should be in .gitignore and why I should edit .env and not hand over the creds to it.
Day 2: I ask for a repeat, has lost track of that skill or setting, frantically searches my entire disk, reads .env including many other files, understands that it is holding a token, manually creates curl commands to test the token and then comes back with some result.
It is like it is a security expert on Day 1 and absolute mediocre intern on Day 2
( This was low-stakes test creds anyway which I was testing with thankfully. )
I never pass creds via env or anything else it can access now.
My approach now is to get it to write me linqpad scripts, which has a utility function to get creds out of a user-encrypted share, or prompts if it's not in the store.
This works well, but requires me to run the scripts and guide it.
Ultimately, fully autotonous isn't compatible with secrets. Otherwise, if it really wanted to inspect it, then it could just redirect the request to an echo service.
The only real way is to deal with it the same way we deal with insider threat.
A proxy layer / secondary auth, which injects the real credentials. Then give claude it's own user within that auth system, so it owns those creds. Now responsibilty can be delegated to it without exposing the original credentials.
That's a lot of work when you're just exploring an API or DB or similar.
Only helps if we listen to it :) which is fun b/c it means staying sharp which is inherently rewarding
I'm not trying to be offense, so with all due respect... this sounds like a "you" problem. (And I've been there, too)
You can ask the LLMs: how do I run this, how do I know this is working, etc etc.
Sure... if you really know nothing or you put close to zero effort into critically thinking about what they give you, you can be fooled by their answers and mistake complete irrelevance or bullshit for evidence that something works is suitably tested to prove that it works, etc.
You can ask 2 or 3 other LLMs: check their work, is this conclusive, can you find any bugs, etc etc.
But you don't sound like you know nothing. You sound like you're rushing to get things done, cutting corners, and you're getting rushed results.
What do you expect?
Their work is cheap. They can pump out $50k+ worth of features in a $200/mo subscription with minimal baby-sitting. Be EAGER to reject their work. Send it back to them over and over again to do it right, for architectural reviews, to check for correctness, performance, etc.
They are not expensive people with feelings you need to consider in review, that might quit and be hard to replace. Don't let them cut corners. For whatever reason, they are EAGER to cut corners no matter how much you tell them not to.
> python <<'EOF'
> ${code the agent wrote on the spot}
> EOF
I mean, yeah, in theory it's just as dangerous as running arbitrary shell commands, which the agent is already doing anyway, but still...
I've spent far more time pitting one AI context against another (reviewing each other's work) than I have using AI to build stuff these days.
The benefit is that since it mostly happens asynchronously, I'm free to do other stuff.
I can't help but read complaints about the capabilities of AI – and I'm certainly not accusing you of complaining about AI, just a general thought – and think "Yet" to myself every time.
I think they're in here, last edited 8 months ago: https://github.com/nreHieW/fyp/blob/5a4023e4d1f287ac73a616b5...
The cynic in me thinks it's done on purpose to burn more tokens. The pragmatist however just wants full control over the harness and system prompts. I'm sure this could be done away with if we had access to all the knobs and levers.
I guess it comes down to how ossified you want your existing code to be.
If it's a big production application that's been running for decades then you probably want the minimum possible change.
If you're just experimenting with stuff and the project didn't exist at all 3 days ago then you want the agent to make it better rather than leave it alone.
Probably they just need to learn to calibrate themselves better to the project context.
The idea being that if you're working in an area, you should refactor and tidy it up and clean up "tech debt" while there.
In practice, it was seldom done, and here we have LLMs actually doing it, and we're realising the drawbacks.
At times even when a function is right there doing exactly what's needed.
Worse, when it modifies a function that exists, supposedly maintaining its behavior, but breaks for other use cases. Good try I guess.
Worst. Changing state across classes not realising the side effect. Deadlock, or plain bugs.
"Refactor-as-you-go" means to refactor right after you add features / fix bugs, not like what the agent does in this article.
If LLMs are doing sensible and necessary refactors as they go then great
I have basically zero confidence that is actually the case though
This is horrible practice, and very typical junior behavior that needs to be corrected against. Unless you wrote it, Chesterton's Fence applies; you need to think deeply for a long time about why that code exists as it does, and that's not part of your current task. Nothing worse than dealing with a 1000 line PR opened for a small UI fix because the code needed to be "cleaned up".
Tech debt needs to be dealt with when it makes sense. Many times it will be right there and then as you're approaching the code to do something else. Other times it should be tackled later with more thought. The latter case is frequently a symptom of the absence of the former.
In Extreme Programming, that's called the Boy Scouting Rule.
https://furqanramzan.github.io/clean-code-guidelines/princip...
I suspect AI's learned to do this in order to game the system. Bailing out with an exception is an obvious failure and will be penalized, but hiding a potential issue can sometimes be regarded as a success.
I wonder how this extrapolates to general Q&A. Do models find ways to sound convincing enough to make the user feels satisfied and the go away? I've noticed models often use "it's not X, it's Y", which is a binary choice designed to keep the user away from thinking about other possibilities. Also they often come up with a plan of action at the end of their answer, a sales technique known as the "assumptive close", which tries to get the user to think about the result after agreeing with the AI, rather than the answer itself.
Codex also has a tendency to apply unwanted styles everywhere.
I see similar tendencies in backend and data work, but I somehow find it easier to control there.
I'm pretty much all in on AI coding, but I still don't know how to give these things large units of work, and I still feel like I have to read everything but throwaway code.
Purely anecdotal.
"Do not modify any code; only describe potential changes."
I often add it to the end when prompting to e.g. review code for potential optimizations or refactor changes.
In this case I would ask for smaller changes and justify every change. Have it look back upon these changes and have it ask itself are they truly justified or can it be simplified.
With LLMs, you glimpse a distant mountain. In the next instant, you're standing on its summit. Blink, and you are halfway down a ridge you never climbed. A moment later, you're flung onto another peak with no trail behind you, no sense of direction, no memory of the ascent. The landscape keeps shifting beneath your feet, but you never quite see the panorama. Before you know it, you're back near the base, disoriented, as if the journey never happened. But confident, you say you were on the top of the mountain.
Manual coding feels entirely different. You spot the mountain, you study its slopes, trace a route, pack your gear. You begin the climb. Each step is earned steadily and deliberately. You feel the strain, adjust your path, learn the terrain. And when you finally reach the summit, the view unfolds with meaning. You know exactly where you are, because you've crossed every meter to get there. The satisfaction isn't just in arriving, nor in saying you were there: it is in having truly climbed.
With LLM-assisted coding, you skip the trek and you instantly know that’s not it.
I am surprised Gemini 3.1 Pro is so high up there. I have never managed to make it work reliably so maybe there's some metric not being covered here.
The solution to this is to use quality gates that loop back and check the work.
I'm currently building a tool with gates and a diff regression check. I haven't seen these problems for a while now.
https://github.com/tim-projects/hammer
The version it puts down into documents is not the thing it was actually doing. It's a little anxiety-inducing. I go back to review the code with big microscopes.
"Reproducibility" is still pretty important for those trapped in the basements of aerospace and defense companies. No one wants the Lying Machine to jump into the cockpit quite yet. Soon, though.
We have managed to convince the Overlords that some teensy non-agentic local models - sourced in good old America and running local - aren't going to All Your Base their Internets. So, baby steps.
I think we should move to semi-autonomous steerable agents, with manual and powerful context management. Our tools should graduate from simple chat threads to something more akin to the way we approach our work naturally. And a big benefit of this is that we won't need expensive locked down SOTA models to do this, the open models are more than powerful enough for pennies on the dollar.
How do you emulate that with llm’s? I suppose the objective is to get variance down to the point it’s barely noticeable. But not sure it’ll get to that place based on accumulating more data and re-training models.
Counterpoint: no it isn't
> makes this job dramatically harder
No it doesn't
Too many people are treating the tools as a complete replacement for a developer. When you are typing a text to someone and Google changes a word you misspelled to a completely different word and changes the whole meaning of the text message do you shrug and send it anyway? If so, maybe LLMs aren't for you.