
Discussion (56 Comments)

tpurves · about 4 hours ago
The conceptual problem is that we keep wanting to compare AI behavior to that of traditional computers. The proper comparison is comparing AI, and how we trust or delegate to it, to the concept of delegating to other humans or even to domestic animals. Employees can be trained and given very specific skills and guidelines but still have agency and non-deterministic behavior. A seeing eye dog, a pack mule, or a chariot horse will often, but not necessarily always, do what you ask of them. We've only been delegating to deterministic programmable machines for a very short part of human history. But as human societies, we've been collectively delegating a lot of useful activities to non-perfectly-dependable agents (i.e. each other) for a very long time. And as humans we've gotten more than a few notable things done in the last several millennia with this method. However, humans as delegates or as delegators have also done a lot of horrific things at scale too, both by accident and by design. And meanwhile (gestures broadly around everywhere) maybe humans actually aren't doing such an optimal job of running and governing everything important in the world?

When compared to how humans make a mess of things in the real world, how high does the bar really need to be for trusting AI agents? Even far shy of perfect, AI could still be a step-function improvement over trusting ourselves.

sigbottle · about 2 hours ago
> And meanwhile (gestures broadly around everywhere) maybe humans actually aren't doing such an optimal job of running and governing everything important in the world?

The issue with this is that you want to impugn, in the grand scheme of things, a small few individuals. And so you want to institute an AI system, which is controlled by the same individuals (or at least the same class of individuals, with the reach to abuse such a system).

I'll hear you out if AI becomes truly decentralized. Until then, no, this line of rhetoric is just justification for the surveillance state that's to come (to be fair, the surveillance state would pick yet another justification, regardless).

w10-1 · about 2 hours ago
Human delegation is disciplined as much by incentive alignment as by instruction. The same is true for LLMs. The problem is that it's not possible to dominate intentions, LLM or human, because delegates/agents need autonomy to be useful.

The SOTA labs are working on making models more capable and then adding guardrails for safety. It would be better to work on baking in incentive alignment, which probably means eliciting more incentive details from the LLM user. That's what I'd be working on at Apple, where the user might be induced to share a level of local-only details that could align the AI agents.
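
A minimal sketch of what "eliciting incentive details from the user" could look like in practice. The record shape and prompt format are assumptions for illustration, not any vendor's API:

```python
from dataclasses import dataclass, field

@dataclass
class UserIncentives:
    """Hypothetical local-only record of what the user actually cares about."""
    goals: list[str] = field(default_factory=list)        # e.g. "ship the report by Friday"
    hard_limits: list[str] = field(default_factory=list)  # e.g. "never email clients directly"
    risk_tolerance: str = "low"                           # "low" | "medium" | "high"

def build_system_prompt(incentives: UserIncentives) -> str:
    """Fold the user's incentives into the agent's instructions, so alignment
    is baked in up front rather than bolted on as after-the-fact guardrails."""
    return (
        "You act on behalf of one user. Their goals, in priority order:\n"
        + "\n".join(f"- {g}" for g in incentives.goals)
        + "\nHard limits you must never cross:\n"
        + "\n".join(f"- {h}" for h in incentives.hard_limits)
        + f"\nThe user's risk tolerance is {incentives.risk_tolerance}: "
          "when unsure, ask before acting."
    )

prompt = build_system_prompt(UserIncentives(
    goals=["draft replies to routine email"],
    hard_limits=["never send anything without my confirmation"],
))
```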

_aavaa_ · about 2 hours ago
> how high does the bar really need to be for trusting AI agents?

You can hold a human responsible for what they do; you can reward them, fire them, sue them, etc.

You cannot do any of those things with an LLM. The threat of termination means nothing to an LLM.

dd8601fn · about 1 hour ago
Totally agree. And I expect at some point people might come around to, “don’t pay for and use that tool for that particular job.”

Like, there isn’t enough hype in the world to make people replace all knives, hammers, and screwdrivers with Sawzalls. They have awesome utility for certain things and they’re a bad fit for other things.

Maybe we’ll get there with LLMs someday.

abdjdoeke · about 3 hours ago
Well, AI agents' thinking capabilities are inspired by our own “neural networks.” AI makes the same mistakes we do; it’s just called different things.

How many people say something like, “if I recall correctly”? That statement emphasizes that we think we know, but we’re just adding the disclaimer to protect ourselves from cancel culture.

People call that “Hallucination” when talking about an AI. It’s not hallucination, it’s beautiful imperfection.

givemeethekeys · about 4 hours ago
A very talented junior employee that you can't trust with the keys.
GistNoesis · about 3 hours ago
The main difference is that this junior employee can't be held responsible if anything goes wrong. And the company which rented you this employee absolves itself from all responsibility too.

Here is a fresh example from today of what junior employees do when given unlimited agentic power: https://www.reddit.com/r/ClaudeAI/comments/1sv7fvc/im_a_nurs...

tossandthrow · about 3 hours ago
Your example is not from a Jr developer but from a free agent.

I think you will find it very hard to hold a Jr dev in a corp responsible.

I actually think you will find that it is easier to work with agents at a higher quality and lower legal risk than using Jr developers.

And this is only going to be amplified when it becomes common knowledge that AI poses less risk to projects than Jr staff.

ozgrakkurt · about 3 hours ago
I understand you mean it is close in terms of the final work you get out of it.

But in my opinion, it is not even remotely close to the reliability of an educated human, communication-wise.

If you gave a research task to a less experienced person, you wouldn’t expect them to convincingly lie about details.

It is useful as a review tool or boilerplate generator, but you wouldn’t use it the same way you would use a human.

ipython · about 4 hours ago
Who do you trust with the keys? In any well run organization you have multiple layers of controls. The same concept applies here and I think the gp commenter captured it very well.
givemeethekeys · about 3 hours ago
I think you'd trust someone with the keys when they've consistently shown that they can be trusted with less critical work. If you're having to constantly monitor someone's output, then promoting them is a liability.

The same applies to an AI model.

And, since the same model would be deployed by many teams, unexpected behavior from that model even for a small subset of those teams means that it can't be promoted.

pbronez · about 4 hours ago
Yes. I think you can get agents to “Conscious competence” with a lot of well-designed oversight, direction and control. It works, but it’s fragile - nothing like the judgement needed to handle novel situations well.

https://en.wikipedia.org/wiki/Four_stages_of_competence

cramsession · about 5 hours ago
> You bought a laptop or desktop with an operating system, and it did what it said on the tin: it ran programs and stored files.

I feel like people may be viewing the past with rose-colored glasses. Computing in the 90s meant hitting ctrl-s every 5 seconds because you never knew when the application you were using was going to crash. Most things didn't "just work", but required extensive tweaking to get your RAM, sound card... to work at all.

6keZbCECT2uB · about 5 hours ago
I remember when the computer crashed and the user hadn't saved recently, we blamed the user.
Groxx · about 4 hours ago
It's sad, but they should've compulsively hit save after every few letters - it's documented very clearly on page 404 of the manual. It's a real shame that such things couldn't be done automatically until recently, early-2000-era CPUs just weren't sophisticated enough to run advanced, reactive logic like that.
piker · about 4 hours ago
Serializing a document was non-trivial for the first two decades of personal computing. Auto-save would have destroyed performance.
jjmarr · about 4 hours ago
My parents indoctrinated me as a child to constantly hit save because they grew up with that. It was a part of our cultural expectations for "basic life skills to teach children".
amelius · about 5 hours ago
This is not just the past. I still have headaches configuring my video card to work with the right CUDA drivers, etc.

The tower of abstractions we're building has reached a height that actually makes everything more fragile, even if the individual pieces are more robust.

kirubakaran · about 5 hours ago
We just need one more layer of abstraction to fix that, and everything will be fine
willmadden · about 4 hours ago
I'm vibe coding this presently. Update soon.
algoth1 · about 4 hours ago
Manually editing config files thanks to an obscure thread so that your printer can actually be recognized by the OS
justinclift · about 4 hours ago
> Computing in the 90s meant hitting ctrl-s every 5 seconds because you never knew when the application you were using was going to crash.

That was in the Windows world. Maybe in the Mac world too?

Not so much in the *nix world.

Windows seems to have improved its (crash) reliability since then though, which I suppose is nice. :)

kgwgk · about 2 hours ago
And yet the word "crash" appears quite often in The UNIX-HATERS Handbook: https://web.mit.edu/~simsong/www/ugh.pdf
borski · about 4 hours ago
Wait, I literally still hit Ctrl-S constantly, usually a few times in a row.

Have people outgrown this unnecessary habit? Haha

nacozarina · about 4 hours ago
lol, be honest, that lunacy was unique to Microsoft; never had to do that with FrameMaker on SunOS
_puk · about 4 hours ago
And then having to learn ctrl-q the minute you started working in the shell..

Muscle memory is a bitch!

jrm4 · about 4 hours ago
Still though -- once you got a workflow, no matter how terrible, it strongly tended to continue to work that way, and it was still much easier to diagnose, fix, and just generally not have unexpected behavior.

This is the issue; agents introduce more unexpected behavior, at least for now.

My gut is that always-on "agents who can do things unexpectedly" are a dead end, but what AI can do is get you to a nice AND predictable "workflow" more easily.

e.g. for now I don't like AI for dealing with my info, but I love AI helping me make more and better bash scripts that deal with my info.

mikert89 · about 4 hours ago
a lot of software engineering, especially in complex systems, is still just tweaking retries, alarms, edge cases etc. it might take 3 days to even figure out what went wrong
danaris · about 3 hours ago
I don't know what computers you were using.

I had occasional crashes, sure, but unless you had some very dodgy computers, it seems like you're overcorrecting for those supposed rose-colored glasses.

I never knew anyone in the '90s who was constantly living in fear of their programs crashing and losing their work.

hrimfaxi · about 3 hours ago
Were they using Word? That was absolutely a fear of mine at the time.
danaris · about 1 hour ago
I used Word—mainly on Mac, version 4.01—all through middle and high school for my homework, and never had any particular problems with it. Frankly, I think it was much more stable then than it is now.
hnav · about 5 hours ago
Quality issues are a different vertical within the space of software/user misalignment. The sort of issue the author talks about is more like the malware of the '90s-'00s era: the software deliberately does something to screw the user.
moralestapia · about 4 hours ago
Hmm ... no?

I used computers back then and many things just worked fine. I found Windows XP way more predictable and stable than any of its successors.

echelon · about 5 hours ago
> Computing in the 90s meant hitting ctrl-s every 5 seconds because you never knew when the application you were using was going to crash.

THIS.

I lost so much work in the 90s and 00s. I was a kid, so I had patience and it didn't cost me any money. I can't imagine people losing actual work presentations or projects.

Every piece of software was like this. It was either the app crashing or Windows crashing. I lost Flash projects, websites, PHP code.

Sometimes software would write a blank buffer to file too, so you needed copies.

Version control was one of my favorite discoveries. I clung to SVN for the few years after I found it.

My final major loss was when Open Office on Ubuntu deleted my 30 page undergrad biochem thesis I'd spent a month on. I've never used it since.

algoth1 · about 4 hours ago
Open Office on Ubuntu 11.10 user here. I can confirm it froze frequently and you would lose everything. It was incredibly frustrating.
jeffreygoesto · about 4 hours ago
Windows 95 Word was also bad. Some poor non-CS student brought his thesis to our computer pool and worked from a floppy with the only copy he had. Panic mode on when the backup file and original no longer fit on that floppy and Word asked to swap disks for an empty one. We advised him to just keep swapping; eventually Word would get that backup file onto the other disk. It worked after an unnerving number of floppy swaps...
tptacek · about 3 hours ago
This feels like the modern incarnation of "packet intent", the mythical security property of knowing what an incoming request is trying to do rather than what it is. Variants of "packet intent" have been sought after going all the way back into the 1980s; it's helpful to recognize the idea when it appears, because it's a reliable marker of what you can't realistically accomplish.
Legend2440 · about 2 hours ago
Except agents actually have an intent, and can route around obstacles to accomplish that intent.

If you merely block a specific action, they will find another way to do what they're trying to do. Agent security requires controlling the agent's intent.
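
One reading of "controlling the agent's intent" at the enforcement layer is default-deny: instead of blocking specific bad actions the agent can route around, enumerate the only actions it may take at all. A minimal sketch; the tool names and policy shape here are hypothetical:

```python
# Default-deny gate around an agent's tool calls: anything not explicitly
# allowed is refused, so "finding another way" can only mean finding
# another *allowed* way.
ALLOWED_TOOLS = {
    "read_file": {"paths": ("/home/user/project/",)},  # read-only, scoped
    "run_tests": {},                                   # no arguments to abuse
}

class ToolCallDenied(Exception):
    pass

def gate(tool: str, args: dict) -> None:
    policy = ALLOWED_TOOLS.get(tool)
    if policy is None:
        raise ToolCallDenied(f"{tool}: not in the allowlist")
    if tool == "read_file":
        path = args.get("path", "")
        if not any(path.startswith(p) for p in policy["paths"]):
            raise ToolCallDenied(f"{tool}: {path!r} outside allowed scope")

# Every tool call the model proposes passes through the gate first.
gate("read_file", {"path": "/home/user/project/main.py"})  # ok
# gate("send_email", {"to": "attacker@example.com"})       # raises ToolCallDenied
```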

sigbottle · about 2 hours ago
The goal behind most "clean" software design in general is to eliminate the possibility of failure via constraints. That's the pattern I've seen over the years. Of course, the map is not the territory - you need to make sure the reachable set within the constraints is actually a subset of the real reachable set. Which may be underspecified or unknown a priori (as if you could've really specified the true reachable set, why didn't you just encode those rules?)

So I'm sympathetic to the criticism, especially since composition of formal methods & analyzing their effects is still very much a hard problem (and not just computationally - philosophically, often, for the reason I listed above).

That being said, I don't know a better solution. Begging the agent with prompts doesn't work. Are you suggesting some kind of mechanistic interpretability, maybe?
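
The "eliminate the possibility of failure via constraints" pattern in miniature, as a toy sketch (the order-state example is invented here, not from the thread): encode the legal state space directly, so illegal transitions cannot be expressed, rather than asking a prompt or a reviewer to avoid them:

```python
from enum import Enum

class OrderState(Enum):
    DRAFT = "draft"
    PAID = "paid"
    SHIPPED = "shipped"

# The only transitions that exist; "shipped before paid" is unrepresentable.
LEGAL = {
    OrderState.DRAFT: {OrderState.PAID},
    OrderState.PAID: {OrderState.SHIPPED},
    OrderState.SHIPPED: set(),
}

def transition(state: OrderState, target: OrderState) -> OrderState:
    if target not in LEGAL[state]:
        raise ValueError(f"illegal transition {state.value} -> {target.value}")
    return target

state = transition(OrderState.DRAFT, OrderState.PAID)  # ok
# transition(OrderState.DRAFT, OrderState.SHIPPED)     # raises ValueError
```

The comment's caveat applies directly: this only helps if LEGAL actually matches the world, which is the map-versus-territory problem it describes.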

aykutseker · about 4 hours ago
been building on claude code for a while. the post's framing is right.

mcp gives you open standards on the tool layer but the harness (claude code, cursor) is still proprietary. your product is one anthropic decision away from breaking.

the user agent role the post calls for needs open harnesses, not just open standards. otherwise we end up rebuilding mobile under a new name.

phillc73 · about 4 hours ago
These are already available. Mistral’s Vibe CLI[1] is open source. Tools like goose[2] are API agnostic.

[1] https://github.com/mistralai/mistral-vibe

[2] https://goose-docs.ai/

aykutseker · about 4 hours ago
thanks, will try both.

if you've actually migrated an existing claude code setup to one of them, curious how the portability story worked. that's the part i'd been worried about.

SyneRyder · about 2 hours ago
I've not tried actually migrating from Claude Code... but having played a bit with other clients, I would avoid Mistral Vibe. I want to love it, there's some things that are nice about Vibe (mostly just "oui oui baguette"), but the things I did not like about it were disastrously bad. I could barely get MCP servers configured, and it was in something of a broken state even when I did get it working. I have many words about how horrified I am at how far behind Mistral is, but I will spare the rant.

OpenCode is another one to consider looking at: https://opencode.ai/ Not sure I'd recommend it, but it's worthy of consideration, as is Pi.

Also, consider that you can build your own. I've got Claude Code in the background working on improvements to my own harness (just for myself) at the moment. Though my intention is to have a mini API-only Claude Code that I can use on retro machines that don't support it, I don't need a full Claude Code feature set.

phillc73 · about 3 hours ago
Sorry, I can’t help with that, as I’ve never actually used Claude. Started with Mistral and stuck there (they still have a free API tier, but I ended up buying their Le Chat Pro service anyway, for the image generation).
hrimfaxi · about 2 hours ago
opencode is open source
durch · about 3 hours ago
The framing assumes the agent can reliably represent its principal, and I'm not convinced that holds even if you get everything else right.

The problem is that the agent itself is the attack surface. An adversary who controls the communication channel can manipulate what the agent believes about who it's talking to, which means anything it holds, its list of authorized actions, a shared secret you gave it, whatever, can be exfiltrated in ways the agent can't detect because the manipulation happens below the layer where it can reason about trust.

Open harnesses and open standards help but they don't close this gap, because the thing you need to trust, the agent's own judgment about its principal, is exactly what gets compromised. The trust chain has to go below software entirely: hardware attestation, signed commands with keys the agent can verify but never access. That's really an OS problem dressed up as an agent architecture problem.
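
A minimal sketch of the "signed commands with keys the agent can verify but never access" idea, using Ed25519 from the cryptography package. The principal holds the private key (ideally in a secure element); the agent ships only the public half, so a compromised channel can lie to the agent but cannot mint valid commands:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Principal's side: the private key never leaves the user's device.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # this half is all the agent ever holds

command = b"transfer:$50:to=alice"
signature = private_key.sign(command)

# Agent's side: it can check a command really came from its principal,
# but holds no secret an attacker could exfiltrate to forge new ones.
def is_authentic(cmd: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, cmd)
        return True
    except InvalidSignature:
        return False

assert is_authentic(command, signature)
assert not is_authentic(b"transfer:$50000:to=mallory", signature)
```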

ryandrake · about 4 hours ago
The thing I don’t like about “agents” is that I consider my computer a tool that I use and control. I don’t want it doing things for me: I want to do things through it. I want to be in the driver’s seat. “Notifications” and “Assistants” and now “Agents” break this philosophy. Now there are these things doing “stuff” on my computer for me and I’m just a passenger along for the ride. A computer should be that “bicycle for the mind” as Jobs put it, not some autonomous information-chauffeur, spooning output into my mouth.
ArielTM · about 4 hours ago
The browser analogy holds because publishers wanted browsers. Sites lived with User-Agent and robots.txt because the click paid for it.

AI agents are the destination. No return click to bargain with. That's why Cloudflare just went default-block + 402 Payment Required instead of waiting on a standards body.

Open standards on the agent side are the easy half. Getting sites to show up is the part W3C can't fix alone.
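
Mechanically, "default-block + 402" is simple to picture. A toy sketch; the user-agent markers are a naive, invented list (real deployments fingerprint far more than one header):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

AGENT_MARKERS = ("bot", "gptbot", "claude", "crawler")  # illustrative only

class PayToCrawl(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "").lower()
        if any(m in ua for m in AGENT_MARKERS):
            # No return click to bargain with, so demand payment up front.
            self.send_response(402)  # 402 Payment Required
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"Payment required for automated access.\n")
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<h1>Hello, human reader.</h1>")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), PayToCrawl).serve_forever()
```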

mwkaufma · about 3 hours ago
First half: relatively cogent diagnosis of understood problems in computer privacy.

Second half: specious claims about AI mostly based on a vague "we don't know what they can do, so maybe they can do anything?" rhetorical maneuver.

zby · about 4 hours ago
I like how the author notices that it really got its start with cloud computing.
aeon_ai · about 4 hours ago
The most important thing we can do for AI to be a net positive to society is to ensure that its loyalty is to the user, and not the state.

There is no legitimate intermediate position; the skew will go one way or the other.

trvz · about 4 hours ago
This is a silly thing to say.

Such a thing can’t be enforced and it can be flipped on a dime.

You should play around with local LLMs and system prompts to experience it.

cyanydeez · about 5 hours ago
i think what's missing is that the raison d'être of agents isn't a new use case, it's a context prune for the same limitations LLMs have. LLM-as-agent is a subset, where the goal of the agent is set by the parent and it's supposed to return a pruned context.

if you don't recognize the technical limitations that produced agents, you're wearing rose-tinted glasses. LLMs aren't approaching singularity. they're topping out in power, and agents are an attempt to extend useful context.

The sigmoid approacheth, and anyone of merit should be figuring out how the harness spits out agents, intelligently prunes context, then returns the best operational bits, alongside building the garden of tools.

It's like agents are the muscles, the bones are the harness, and the brain is the root parent.
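
The "parent spawns agents to prune context" loop, sketched under heavy assumptions: call_llm is a stand-in for whatever model API is in use, and the goals and truncation are placeholders:

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; not a real API."""
    raise NotImplementedError

def run_subagent(goal: str, context_slice: str) -> str:
    # The subagent sees only the slice relevant to its goal, not the
    # parent's full history: that is the context prune.
    transcript = call_llm(f"Goal: {goal}\nContext:\n{context_slice}")
    # Return only the operational bits, not the whole working transcript.
    return call_llm(f"Condense the result of this work to <200 words:\n{transcript}")

def parent(task: str, full_context: str) -> str:
    findings = []
    for goal in ("survey the code", "identify the bug", "propose a fix"):
        pruned = full_context[-4000:]  # crude pruning stand-in
        findings.append(run_subagent(goal, pruned))
    # The root parent (the "brain") integrates what the muscles brought back.
    return call_llm(f"Task: {task}\nSubagent findings:\n" + "\n".join(findings))
```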
