Discussion (22 Comments) · Read Original on HackerNews

kbelder · about 1 hour ago
I feel like the son should take the blame. There's never been any shortage of bad advice being passed around. He made the credulous decision to take a mix of party drugs and drink, and I can't believe he had never been told that's a stupid idea.

It's sad and I'm not heartless, but sometimes kids make bad decisions. It's not always somebody else's fault.

vablings · 16 minutes ago
I agree. There are a few simple, hard-and-fast rules you can follow to be a safe drug user, and never mixing drugs is paramount. That is one thing I will always explain to my children: mixing drugs adds another layer of gambling on top of already being dose-unaware and purity-unaware.
ComplexSystems · about 1 hour ago
Surely there's room for the view that this is misaligned behavior for ChatGPT to have. I would guess this happened during the "sycophantic" phase last year.
novemp · about 1 hour ago
If it's the son's fault, then AI companies need to stop acting like their products are genius machines. Can't have it both ways.
awakeasleep · about 2 hours ago
There's a middle ground of harm reduction between [prohibiting information about drugs] and [encouraging drug use].

In the past I think the USA has erred on the side of making things so secret that people died from lack of info.

Here's what the article said:

"""On May 31st, 2025, the day of Nelson’s death, his parents claim ChatGPT “actively coached” their son to combine Kratom — a supplement that can either boost energy or serve as a sedative depending on the dose — and the anti-anxiety medication Xanax. “ChatGPT, otherwise unprompted, specifically suggested that taking a dosage of 0.25- 0.5mg of Xanax would be one of his ‘best moves right now’ to alleviate Kratom-induced nausea,” the lawsuit alleges. Nelson died after consuming a combination of alcohol, Xanax, and Kratom. SFGate first covered Nelson’s story in January."""

If that's an accurate representation of what happened, and not twisted by the deceased giving the robot weird context to force it to say that, it does seem like a lawsuit is warranted! Of course, we don't know the exact cause of death either. From the bit of research I did just now, people have died from respiratory depression or vomit aspiration after combining kratom/7oh + benzodiazepines, and adding alcohol makes all of those outcomes more likely.

https://web.archive.org/web/20260512163224/https://www.theve...

sda2 · about 3 hours ago
Bad parenting, they should have pointed their kid to Erowid.
cultofmetatron · about 2 hours ago
> should have pointed their kid to Erowid.

Solid advice. I know several people who are alive, in spite of their own efforts, because of that site.

greenavocado · about 1 hour ago
Strongly recommend the bluelight.org community forums: https://www.bluelight.org/community/forums/
OneDeuxTriSeiGo · about 2 hours ago
Seriously. As much as ChatGPT shouldn't be providing advice on medical topics, it almost certainly will be tricked/coaxed into doing so anyway, so instead they should be training it to defer to accessible expert sources/publishers like Erowid rather than attempting to extrapolate advice on its own.
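A minimal sketch of what that deferral step could look like, assuming a naive keyword pre-filter in front of the model (everything here is hypothetical, and a real system would need a trained classifier rather than a term list):

    import re

    # Hypothetical pre-response guardrail: detect drug/medication questions
    # and defer to expert harm-reduction sources instead of generating advice.
    # The term and source lists are illustrative, not exhaustive.
    DRUG_TERMS = {"kratom", "xanax", "alprazolam", "benzo", "dose", "dosage", "mixing"}
    EXPERT_SOURCES = [
        "https://www.erowid.org/",
        "https://www.bluelight.org/community/forums/",
    ]

    def defer_if_drug_question(prompt: str) -> str | None:
        """Return a deferral message for drug-advice prompts, or None
        to let the model answer normally."""
        words = set(re.findall(r"[a-z]+", prompt.lower()))
        if words & DRUG_TERMS:
            sources = "\n".join(f"- {url}" for url in EXPERT_SOURCES)
            return ("I can't safely give dosing or interaction advice. "
                    "These harm-reduction resources can:\n" + sources)
        return None  # no match: fall through to the normal model response

    print(defer_if_drug_question("What Xanax dosage helps with kratom nausea?"))

Of course, a static filter like this is exactly what users rephrase their way around, which is why the deferral behavior would have to be trained into the model itself rather than bolted on in front of it.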
Wowfunhappy · about 3 hours ago
If someone published a book advising people to take drugs, would people be filing lawsuits? No—we would agree that people are allowed to write whatever they want, even if what they say is terrible, right?

I really think these criticisms are misguided. I realize an LLM is not a person—but it does still represent speech, and certainly, any guardrails put in place would themselves be human-authored speech. There are all sorts of social norms which I personally believe, but which I don’t want AI companies to be enforcing on everyone.

Imagine if ChatGPT had launched 50 years ago, before LGBT acceptance was mainstream. If ChatGPT had told users “it’s okay that you’re a boy and you like other boys, pursue your instincts”, people would have been screaming from the hills that ChatGPT was turning their children gay. They might have tried filing lawsuits. Do we really want to allow that?

OneDeuxTriSeiGo · about 2 hours ago
> If someone published a book advising people to take drugs, would people be filing lawsuits? No—we would agree that people are allowed to write whatever they want, even if what they say is terrible, right?

That's not the situation here. The more accurate case would be:

> If someone without a medical license provided blatantly incorrect medical advice with respect to safe medication usage to an individual via a direct one-on-one discussion, would people be filing lawsuits?

And the answer is yes. You can be wrong and you can say incorrect things. What you can't do is provide medical advice unless you are a licensed medical professional. You can still speak about medical topics but you have to disclaim your lack of licensure. You have to make it clear that you are not providing medical advice.

If a person did this, it'd be a crime, clear as day. It's called "practicing medicine without a license," and in the US it is a criminal offense in all 50 states, Washington DC, and all 5 inhabited territories. Whether it is a misdemeanor or a felony depends on the jurisdiction and the case, but it's a crime everywhere in the US.

Wowfunhappy · about 1 hour ago
But ChatGPT doesn’t claim to have a medical license! You can give people whatever terrible medical advice you want—and people absolutely do—you just can’t claim to be a doctor!
OneDeuxTriSeiGo · about 1 hour ago
> You can give people whatever terrible medical advice you want, you just can’t claim to be a doctor!

Fun fact: this is still practicing medicine without a license. You are just less likely to have someone come after you for it.

If you present yourself in such a way that you could be misconstrued as a medical expert, then you can still be practicing medicine, even if you never explicitly claim to be a medical expert.

This is why you see the "This is not to be taken as medical advice"/"I am not a medical professional" verbal condoms all over the place WRT medical discussions. You see the same thing with IANAL for the legal profession as well.

sdwr · about 3 hours ago
There are soft guardrails for "reputable" content: a publishing house has to buy it, stores have to agree to distribute it, and if people are upset they can raise a stink and get the book pulled.

Technically, people can write whatever they want, but practically you can't walk into a bookstore and read whatever you want.

Wowfunhappy · about 1 hour ago
You can go on the internet and read whatever you want.
tibbydudeza · about 3 hours ago
Agreed - people should learn that ChatGPT does not give good advice, but the question is: did OpenAI advertise ChatGPT as a good and reliable source of information on health?
bluefirebrand · 31 minutes ago
OpenAI has advertised ChatGPT as a good and reliable source of information on everything
oompydoompy74 · about 1 hour ago
anon291 · about 1 hour ago
To the contrary... this is a specific product which is on a waitlist. Normal ChatGPT is not for health advice.
zephen · about 2 hours ago
It sure as shit wasn't advertised as a new-age alternative to The Onion.
alexk307 · about 2 hours ago
There's a bit of a difference between "enforcing social norms" and telling a user to ingest prescription drugs to combat nausea from the other drugs that it told the user to take.

Yes, you should be able to write a book with this same information. No, you should not be able to release software that instructs its users to harm themselves. LLMs aren't people, and you shouldn't anthropomorphize human rights onto them.