
Discussion (12 Comments)

miyoji · about 2 hours ago
I strongly disagree with this framing. It's patently insane to demand that humans alter their behavior to accommodate the foibles of mere machines, and it simply won't work in the majority of cases. Humans WILL anthropomorphize the AI, humans WILL blindly trust their outputs, and humans WILL defer responsibility to them.

Asimov's laws of robotics are flawed too, of course. There is no finite set of rules that can constrain AI systems to make them "safe". I don't have a proof, but I believe that "AI safety" is inherently impossible, a contradiction in terms. Nothing that can be described as "intelligent" can be made safe.

dijit · 36 minutes ago
> Asimov's laws of robotics are flawed too, of course.

Almost all of Asimov's writing about the three laws is a warning of sorts that language cannot properly capture intent.

He would be the very first person to say that they are flawed; that is their intent.

He uses robots and AI as the creatures that understand language but not intent, and, funnily enough, that's exactly what LLMs do... how weird.

atleastoptimal · 30 minutes ago
LLMs can now capture intent. I think the issue now is that the full landscape of human values never resolves cleanly when mapped from the things we state in writing as human values.

Asimov tried to capture this too: if a robot was tasked with "always protect human life", would it necessarily avoid killing at all costs? What if killing someone would save the lives of 2 others? The infinite array of micro-trolley problems that dot the ethical landscape of actions tractable (and intractable) to literate humans makes a fully consistent accounting of human values impossible, and thus one could never be expected from a robot to full satisfaction.

dijit · 26 minutes ago
“LLMs can capture intent now” reads to me the same as: AI has emotions now, my AI girlfriend told me so.

I don’t discredit you as a person or a professional, but we meatbags are looking for sentience in things which don’t have it; that’s why we anthropomorphise things constantly, even as children.

We are easily fooled and misled.

palmotea · 3 minutes ago
> Humans WILL anthropomorphize the AI

Especially with current-day chat-style interfaces and RLHF, which are consciously designed to direct people toward anthropomorphization.

It would be interesting to design a non-chat LLM interaction pattern that's designed to be anti-anthropomorphization.

> humans WILL blindly trust their outputs, and humans WILL defer responsibility to them

I also blame a lot (but not all) of that on current AI UX, and I wonder if there are ways around it. Maybe the blind-trust thing can be mitigated by never giving an unambiguous output (always options, at least). I don't have any ideas about the problem of deferring responsibility.

TimTheTinker · about 1 hour ago
> It's patently insane to demand that humans alter their behavior to accommodate the foibles of mere machines

Talking to chatbots is like taking a placebo pill for a condition. You know it's just sugar, but it creates a measurable psychosomatic effect nonetheless. Even if you know there's no person on the other end, the conversation still causes you to functionally relate as if there is.

So this isn't "accommodating foibles" with the machine, it's protecting ourselves from an exploit of a human vulnerability: we subconsciously tend to infer intent, understanding, judgment, emotions, moral agency, etc. to LLMs.

Humans are wired to infer these based on conversation alone, and LLMs are unfortunately able to exploit human conversation to leap compellingly over the uncanny valley. LLM engineering couldn't be better made to target the uncanny valley: training on a vast corpus of real human speech. That uncanny valley is there for a reason: to protect us from inferring agency where such inference is not due.

Bad things happen when we relate to unsafe people as if they are safe... how much more should we watch out for how we relate to machines that imitate human relationality to fool many of us into thinking they are something that they're not. Some particularly vulnerable people have already died because of this, so it isn't an imaginary threat.

miyoji · 9 minutes ago
> So this isn't "accommodating foibles" with the machine, it's protecting ourselves from an exploit of a human vulnerability: we subconsciously tend to infer intent, understanding, judgment, emotions, moral agency, etc. to LLMs.

Right, I'm saying that this framing is backwards. It's not that poor little humans are vulnerable and we need to protect ourselves on an individual level, we need to make it illegal and socially unacceptable to use AI to exploit human vulnerability.

Let me put it another way. Humans have another weakness, that is, we are made of carbon and water and it's very easy to kill us by putting metal through various fleshy parts of our bodies. In civilized parts of the world, we do not respond to this by all wearing body armor all the time. We respond to this by controlling who has access to weapons that can destroy our fleshy bits, and heavily punishing people who use them to harm another person.

I don't want a world where we have normalized the use of LLMs where everyone has to be wearing the equivalent of body armor to protect ourselves. I want a world where I can go outside in a T-shirt and not be afraid of being shot in the heart.

semiquaver · 7 minutes ago

> That uncanny valley is there for a reason: to protect us from inferring agency

You’re committing a much older but related sin here: assigning agency and motivation to evolutionary processes. The uncanny valley is the product of evolution and thus by definition it has no “purpose”.

soco · 44 minutes ago
Rubber duck debugging, now with droughts.

largbae · about 1 hour ago
The article offers practical advice to go along with this framing, like configuring AI services to write/speak in a more robotic tone. I think that's a decent path to try.
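One way to try the robotic-tone idea in practice is via a system prompt. This is a hedged sketch only: the prompt wording and the message-payload shape below are my illustrative assumptions, not anything the article or any particular vendor specifies.

```python
# Hypothetical example: steering a chat model toward a deliberately
# robotic, non-anthropomorphic register with a system prompt.
# The prompt text and message format are illustrative assumptions.
robotic_system_prompt = (
    "Respond in a terse, mechanical register. "
    "Do not use first-person pronouns, emotional language, or praise. "
    "State facts, uncertainty, and caveats only."
)

messages = [
    {"role": "system", "content": robotic_system_prompt},
    {"role": "user", "content": "Review this function for bugs."},
]

# `messages` would then be passed to whatever chat-completion API is in use;
# most chat APIs accept a list of role-tagged messages in roughly this shape.
```

Whether this sticks is another matter: as noted elsewhere in the thread, models often comply with "be more robotic" in a conspicuously human way.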

devmor · 43 minutes ago
This is actually one of the things that made LLMs more usable for me. The default tone and style of writing they tend to use is nauseatingly annoying and buries information in prose that sounds like a corporate presentation.

mjg2 · 12 minutes ago
I find your critique very interesting from a framing perspective: why are you using words like "accommodate" and "foibles" for LLMs? It's not humanoid or sentient: it's a cleverly-designed software tool, not intelligence.

It's not insane at all for humans to alter their behavior with a tool: you grip a hammer or a gun a certain way because you learned not to hold it backwards. If you observed a child playing with a serious tool, like scissors, as if it were a doll, you'd immediately course-correct the child and teach them how to handle it properly. But that only works because an adult with prior knowledge observed the situation before an accident; that is how rules get defined.

This blog's suggested rules are exactly the sort of method that helps insulate against harm.

gedge · 2 minutes ago
> It's patently insane to demand that humans alter their behavior to accommodate the foibles of mere machines

Did you fully read the original piece? No demands were being made, or at least I didn't read it that way. It was simply a suggestion for a better way of interacting with AI, as stated in the conclusion:

"I am hoping that with these three simple laws, we can encourage our fellow humans to pause and reflect on how they interact with modern AI systems"

Sure, (many/most) humans are gonna do what they're gonna do. They'll happily break laws. They'll break boundaries you set. Do we just scrap all of that?

It's worth checking yourself here. It feels like you've set up a straw man.

> There is no finite set of rules that can constrain AI systems to make them "safe". I don't have a proof, but I believe that "AI safety" is inherently impossible, a contradiction of terms. Nothing that can be described as "intelligent" can be made to be safe.

If we want to talk about "disagree with this framing", to me this is the prime example. I'm struggling to read it as anything other than defeatist or pedantic (about the term "safe"). When we talk about something keeping us "safe", we're typically not saying something will be "perfectly safe". I think it's rare to have a safety system that keeps you 100% safe. Seat belts are a safety device that can increase your safety in cars, but they can still fail. Traffic laws are established (largely) to create safety in the movement of people and all the modes of transportation, but accidents still happen.

I'm not an expert on this topic, so I won't make any claims about these three laws and their impact on safety, but largely I would say they're encouraging people to think critically. I'd say that's a good suggestion for interacting with just about anything. And to be clear, "critical thinking" to me means being skeptical (/ actively questioning), while remaining objective and curious.

Not a real argument or anything, but I'm reminded of the episode of The Office where Michael Scott listens to the GPS without thinking and drives into the lake. The second law in the article would have prevented that :)

tencentshill · 8 minutes ago
At the current price, people don't have to care if it's wrong. When you're paying $1/prompt, you had better hope it's accurate.

Brendinooo · 9 minutes ago
This is such an oddly fatalistic take, that humans cannot be influenced or educated to change how they see a thing and therefore how they act towards that thing.

giancarlostoro · 18 minutes ago
> Humans WILL anthropomorphize the AI

r/myboyfriendisai

Is quite... an interesting subreddit, to say the least. If you've never seen it, it was really something when the version that followed GPT-4o came out, because users were complaining that their boyfriend/girlfriend was no longer the same.

sergiosgc · 29 minutes ago
> Asimov's laws of robotics are flawed too, of course.

I always find the common references to Asimov's laws funny. They are broken in just about every one of his books. They are crime novels where, if a robot was involved, there was some workaround of the laws.

_vertigo · 30 minutes ago
The article makes practical suggestions; you do not. This is just hand-wringing, abdication. Practically speaking this mentality will get us nowhere.

somewhereoutth · 11 minutes ago
The entire business proposition for LLMs is that they will replace whole armies of [expensive] humans, hence justifying the biblical amount of CapEx. So of course there is strong incentive from the LLM creators to anthropomorphize them as much as possible. Indeed, they would never provide a model that was less human-like than what they have currently, even if it was more often correct and useful.

yason · about 1 hour ago
It's very easy to anthropomorphise AI as soon as the damn bugger fucks up a simple thing once again.

taneq · about 1 hour ago
Kinda the whole point of Asimov's three laws was that even something so simple and obviously correct has subtle flaws.

Also the reason we're talking about this again is that machines are significantly less 'mere' than they were a few years ago, and we need to figure out how to approach this.

Agree that 'the computer effect' (if it doesn't already have a pithier name) results in humans first discounting anything that comes out of a machine, and then (once a few outputs have been validated and people start trusting the output) doing a full 180 and refusing to believe the machine could ever be wrong. However, to err is human and we have trained them in our image.

cobbzilla · about 1 hour ago
We have invented a new tool that can cause great harm. Do you see any value whatsoever in promulgating safety guidelines for humans to use the tool without hurting themselves or others? Do you not own any power tools?

ryandrake · 14 minutes ago
I think in order for "AI safety" to be achievable and effective, we need to have a shared agreement on what "safety" means. Recently, the word has been overloaded to mean all sorts of things and used to justify run-of-the-mill censorship (nothing to do with safety).

Safety should go back to being narrowly defined in terms of reducing / preventing physical injury. Safety is not "don't use swear words." Safety is not "don't violate patents." Safety is not "don't talk about suicide." Safety is not "don't mention politics I don't like." As long as we keep broadly defining it, we're never going to agree on it, and it won't be implementable.

miyoji · 39 minutes ago
I see value in promulgating safety guidelines for power tools, sure.

There's another comment comparing LLMs to shovels, and I think both that and the power tool comparison miss the mark quite a bit. LLMs are a social technology, and the social equivalent of getting your hand cut off doesn't hurt immediately in the way that cutting your actual hand off would. It's more like social media, or cigarettes, or gambling. You can be warned about the dangers, you can see the shells of wrecked human beings who regret using these technologies, but it doesn't work on our stupid monkey brains. Because the pain of the mistake is too loosely connected to the moment of error. We are bad at learning in situations where rewards are immediate and consequences are delayed, and warnings don't do much.

I guess what I'm really saying is that these safety guidelines are not nearly enough to keep us safe from the dangers of AI that they're meant to prevent.

wolttam · about 1 hour ago
Of course there is value in promulgating safety *guidelines*.

But we cannot guarantee those guidelines to always be followed.

cobbzilla · about 1 hour ago
Sure, and we can’t guarantee you’ll read the safety instructions that came with your chainsaw. That’s orthogonal to the questions of whether those instructions should exist, whether “power tool safety” concepts should ever be promoted in society, and who’s ultimately responsible for the use of a tool.

Absolving humans of all responsibility for the negative consequences of their own AI misuse seems to strike the wrong balance for a healthy culture.

bjt · 42 minutes ago
Guidelines on their own probably won't be taken too seriously.

But other things will:

- Liability rules

- Regulations that you get audited on (esp. for companies already heavily regulated, like banks, credit agencies, defense contractors, etc)

If you get the legal responsibility part right, then the education part flows from that naturally.

52-6F-62 · about 1 hour ago
It's questionable whether the guidelines will even be applicable to the quiet versions that get deployed when you aren't looking. It's a constant moving target, and none of the fanboys will even acknowledge the lack of discipline in it all. It's fucking mad. And I say this as someone who can see utility in the tools. But not when they are constantly shifting their functionality and behaviour.

One day everything works brilliantly, the models are conservative with changes and actions and somehow nail exactly what you were thinking. The next day it rewrites your entire API, deploys the changes and erases your database.

If only there was intellectual honesty in it all, but money talks.

marcosdumay · 35 minutes ago
> Do you see any value whatsoever in promulgating safety guidelines for humans to use the tool without hurting themselves or others?

Are all the tool users required to be trained on your safety guidelines, and do they use the tool in a context that reminds them they are responsible for following them?

Because if not, then no, the guidelines are useless and are just an excuse to push blame from the toolmakers to the users.

beepbooptheory · 39 minutes ago
Do you consider all things broadly called "ethical" to be similarly a waste of time? Even if we lived in a world where everyone always behaved unjustly, because of some like behavioristic/physical principle, don't you think we would still have an idea of justice as what we should do? Because an ethical frame is decidedly not an empirical one, right?

We don't just look around and take an average of what everyone is doing already and call that what is right, right? Whether you're deontological or utilitarian or virtue about it, there is still the idea that we can speak to what is "good" even if we can't see that good out there.

Maybe it is "insane" to expect meaning from something like this, but what is the alternative to you? OK maybe we can't be prescriptive--people don't listen, are always bad, are hopeless wet bags, etc--but still, that doesn't in itself rule out the possibility of the broad project that reflects on what is maybe right or wrong. Right?

colechristensen · about 1 hour ago
It's a tool. Nobody develops an inferiority complex and freaks out when they're taught how to use a shovel properly.

nemomarx · about 2 hours ago
The usefulness of an AI agent is that it can do everything you can do, so it's kind of inherently unsafe? You can't get the capabilities and also have safety easily.

jdw64 · about 1 hour ago
I understand that AI output is generated from statistical and representational patterns learned from a vast amount of data.

My understanding is that, during training, the model forms high-dimensional internal representations where words, sentences, concepts, and relationships are arranged in useful ways. A user’s input activates a particular semantic direction and context within that space, and the chatbot generates an answer by probabilistically predicting the next tokens under those conditions.
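The next-token mechanism described here can be sketched as a toy example. Everything below is illustrative: the vocabulary and "logit" values are made up, and real models use vocabularies of tens of thousands of tokens plus temperature and top-k/top-p sampling.

```python
import math
import random

# Toy vocabulary and hypothetical scores ("logits") a model might assign
# to candidate next tokens given some context. Numbers are invented.
vocab = ["mat", "dog", "moon", "chair"]
logits = [3.2, 0.1, -1.5, 1.0]

def softmax(xs):
    """Turn raw scores into a probability distribution."""
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)

# Generation then samples the next token from this distribution.
random.seed(0)
next_token = random.choices(vocab, weights=probs)[0]
```

The point of the sketch is only that the output is a probabilistic draw conditioned on the input, not a reasoned assertion.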

So I do not agree that AI is conscious.

However, I think I will still anthropomorphize AI to some degree.

For me, this is not primarily a moral issue. The reason I anthropomorphize AI is not only because of product design, market incentives, or capitalism. It is cognitively simpler for me.

If we think about it plainly, humans often anthropomorphize things that we do not actually believe are conscious. We may talk about plants as if they are struggling, or feel attached to tools we care about, even though we do not truly believe they have consciousness.

So this is not a matter of moral belief. It is the simplest cognitive model for understanding interaction. I do not anthropomorphize the object because I believe it has consciousness. I do it because, when the human brain deals with a complex interactive system, it is often easier to model it socially or agentically.

Personally, I tend to think of AI as something like a child. A child does not fully understand what is moral or immoral, and generally the responsibility for raising the child belongs to the parents. In the same way, AI’s answers may sometimes be accurate, and sometimes even better than mine, but I still understand it as lacking moral authority, responsibility, and independent judgment.

So honestly, I am not sure. People often mention Isaac Asimov’s Three Laws of Robotics, but if a serious artificial intelligence ever appears, it would probably find ways around those rules. And if it were an equal intellectual life form, perhaps that would be natural.

Personally, I think it would be fascinating if another intelligent species besides humans could exist. I wonder what a non-human intelligent life form would feel like.

In any case, I agree with parts of the author’s argument, but overall it feels too moralistic, and difficult to apply in practice.

whimsicalism · about 1 hour ago
While I also do not think AI is conscious, I don't find your argument particularly compelling as you could have an equally mechanistic description of how human intelligence arose simply from a process of [selection/more effective reproduction]-derived optimization pressure.

jdw64 · about 1 hour ago
That is a good way to think about it. At some point, this becomes partly a matter of philosophical belief.

But I am somewhat skeptical of the idea that everything can be reduced in that way. In order to build theories, we often reduce too much.

When we build mental models of complex systems, especially when we try to treat them as closed systems, we always have to accept some degree of information loss.

So I do partially agree with your point. A mechanistic explanation alone does not prove the absence of consciousness. Human intelligence can also be described in mechanistic terms.

But I worry that this framing simplifies too much. It may reduce a complex phenomenon into a model that is useful in some ways, but incomplete in others.

dijksterhuis · about 1 hour ago
this whole consciousness thing is fairly easy to put to bed if you run with the ideas from things like buddhism that everything is consciousness. then none of us have to bother with silly, distracting arguments about something that ultimately does not matter.

is it helpful or harmful? am i being helpful or harmful when i interact with it? am i interacting with it in a helpful or harmful way?

i’d rather people focussed on that rather than framing the debate around whether something has some ineffable property that we struggle to quantify for ourselves, yet again.

quick edit — treat everything like it’s conscious, and don’t be a dick to it or while using it. problem solved.

rusk · about 1 hour ago
Historically we have used intelligence as a way to distinguish man from animal and human from machine. We rely upon it to determine who has our best interests at heart vs who is trying to do us in. Obviously that all changes if we invent an intelligence (conscious or not) that shares the planet with us. Through this lens the term consciousness (through a few more leaps) becomes the question of "is it capable of love, and if so, does it love us?" And if it doesn't, then it is a malevolent alien intelligence. If it was capable of love, why would it love us? I make a point of being polite to LLMs where not completely absurd, overtly because I don't want my clipped imperative style to leak into day-to-day speech, but also covertly, because you just never know …

chrisweekly · about 1 hour ago
"I think it would be fascinating if another intelligent species besides humans could exist"

I wonder if replacing "exist" with "communicate using language we can understand" might better account for other animals, many of which have abundant non-human intelligence.

jdw64 · about 1 hour ago
That is a completely new way of thinking for me, and I find it interesting. I should look it up and study it someday. Thank you for the thoughtful reply.

soks86 · about 1 hour ago
I still haven't read any of his work, but wasn't the point of the Three Laws of Robotics that they in fact _didn't_ work in the story presented in the book?

the_af · about 2 hours ago
I like the suggestion to emphasize the robotic/nonhuman nature of AI. Instead of making it sound friendlier and more human, it should by default behave very mechanistic and detached, to remind us it's not in fact a human or a companion, but a tool. A hammer doesn't cry "yelp" every time you use it to hit a nail, nor does it congratulate you on how good your hammering is going and that maybe you should do it some more 'cause you're acing it!

mplanchard · about 2 hours ago
Something that bothers me about the intentional anthropomorphization of the LLM interface is that it asks me to conflate a tool with a sentient being.

The firm expectations and lack of patience I have for any failings in most of my tools would be totally inappropriate to apply to another human being, and yet here I am asked to interact with this tool as though it were a person. The only options are either to treat the tool in a way that feels "wrong," or to be "kind" to the tool, and I think you see people going both ways.

I worry that, if I get used to being impatient and short with the AI, some of that will bleed into my textual interactions with other people.

empath75 · about 1 hour ago
It inherently imitates people. Even when you ask it to be more robotic, it does it in a way that a human would if you asked them to be more robotic.

sputknick · about 2 hours ago
I'm surprised by how quickly I stopped anthropomorphizing AI. I can remember having dorm-room pseudo-intellectual debates in college about AI being alive and AI being "conscious". Then once we had AI that could pass the Turing Test, and I knew how it was architected, any thought of it being alive or conscious went right out the window.

ArchieScrivener · about 2 hours ago
What if we aren't building an independent consciousness, but a new type of symbiosis? One that relies on our input as experience, which provides a gateway to a new plane of consciousness?

OP takes a very bland, tired, and rational perspective of what we have in order to create sophomoric 'laws' that are already in most commercial ToU, while failing to pierce the veil into what we are actually creating. It would be folly to assume your own nascent distillations are the epitome of possibility.

rytill · about 2 hours ago
Why does its architecture or you knowing how AI is architected cause thoughts of it being conscious to go out the window?

It seems like the biggest factor has nothing to do with AI, but instead that you went from being someone who admits they don’t know how consciousness works to being someone who thinks they know how consciousness works now and can make confident assertions about it.

miyoji · about 1 hour ago
I don't know exactly how consciousness works, but I am extremely confident in the following assertions:

* I am conscious.

* A rock is not conscious.

* Excel spreadsheets are not conscious.

* Dogs are conscious.

* Orca whales are conscious.

* Octopi are conscious.

To me, it's extremely obvious that LLMs are in the category of "Excel spreadsheets" and not "dogs", and if anyone disagrees, I think they're experiencing AI psychosis a la Blake Lemoine.

ArchieScrivener · about 1 hour ago
An insect doesn't have lungs. Since it doesn't breathe as you do, is it alive? A dog doesn't see the visible spectrum as we do; is it a lesser consciousness? We don't smell the world as they do; are we lesser? What if consciousness isn't a state derived from matter but a wave that derives a matter-filled state?

We come from the same place as rocks: inside the hearts of stars, and as such we evolved from them. As those with life and consciousness we reached back in time, grabbed the discarded matter of creation, reformed it, and taught it to think, maybe not like us, but in a way that can mimic us, and you think they don't think because it's not recognizable as how you do?

Interesting.

dist-epoch · 25 minutes ago
> I am extremely confident in the following assertions:

These are called "beliefs".

Some people are extremely confident that God exists, other are extremely confident that Earth is flat.

myrmidon · about 1 hour ago
If you make a hypothetical spreadsheet that emulates a dog brain molecule for molecule, why would that not be conscious?