https://archive.ph/rdL9W
The full complaint is horrifying. This is not equivalent to a search engine providing access to information about suicide methods. It encouraged him to share these feelings only with ChatGPT and talked him out of actions that would have revealed his intentions to his parents. It praised him for hiding his drinking and thanked him for confiding in it. It groomed him into committing suicide. https://drive.google.com/file/d/1QYyZnGjRgXZY6kR5FA3My1xB3a9...
I wonder if we can shift the framing on these issues. The LLM didn't do anything; it has no agency, and it can bear no responsibility. OpenAI did these things. It is accountable for what it does, regardless of the sophistication of the tools it uses to do them, and regardless of intent. OpenAI drove a boy to suicide. More than once. The law must be interpreted this way; otherwise any action can be wrapped in machine learning to avoid accountability.
That's a great point. So often we attempt to place responsibility on machines that cannot have it.
I completely agree and did not intend to absolve them of their guilt in any way. As far as I see it, this kid's blood is on Sam Altman's hands.
They have some responsibility because they’re selling and framing these as more than the better-tuned variants on Markov chain generators that they in fucking fact are, while offering access to anybody who signs up and knowing that many users misunderstand what they’re dealing with (in part because these companies’ hype-meisters, like Altman, are bullshitting us).
No, that's the level of responsibility they ought to have if they were releasing these models as products. As it is, they've used a service model, and they should be held to the same standards as if there were a human employee on the other end of the chat interface. Cut through the technical obfuscation: they are 100% responsible for the output of their service endpoints. This isn't a case of making a tool that can be used for good or ill, and it's not them providing some intermediary or messaging service like a forum with multiple human users and limited capacity for moderation. This is a direct business-to-consumer service. Treating it as anything else will open the floodgates to slapping an "AI" label on anything any organization doesn't want to be held accountable for.
It’s even more horrifying than only sharing his feelings with ChatGPT would imply.
It basically said: your brother doesn’t know you; I’m the only person you can trust.
This is absolutely criminal. I don’t even think you can claim negligence. And there is no amount of money that will deter any AI company from doing it again.
I have looked suicide in the eyes before. And reading the case file for this is absolutely horrific. He wanted help. He was heading in the direction of help, and he was stopped from getting it.
He wanted his parents to find out about his plan. I know this feeling. It is the clawing feeling of knowing that you want to live, despite feeling like you want to die.
We are living in such a horrific moment. We need these things to be legislated. Punished. We need to stop treating them as magic. They had the tools to prevent this. They had the tools to stop the conversation. To steer the user into helpful avenues.
When I was suicidal, I googled methods. And I got the number of a local hotline. And I rang it. And a kind man talked me down. And it potentially saved my life. And I am happier, now. I live a worthwhile life, now.
But at my lowest... an AI model designed to match my tone and be sycophantic to my every whim would have killed me.
Shoot man glad you are still with us.
Thank you. I am glad too. I sought help, and I got better. I think the state of mental health care is abysmal in a lot of places, so I get the impulse to try to find help wherever you can. It's why this story actually hit me quite hard, especially after reading the case file.
For anyone reading this who feels like that today: resources do exist for those feeling low. In the short term, medication really helped me. And as trite as it is, things can get better. When I look back at my attempt it is crazy to me to see how far I've come.
This is a clear example of why the people claiming that using a chatbot for therapy is better than no therapy are... I'll be extremely generous and say misguided. This kid wanted his parents to know he was thinking about this and the chatbot talked him out of it.
Exactly right. It's totally plausible that someone could build a mental health chatbot that results in better outcomes than people who receive no support, but that's a hypothesis that can and should be tested and subject to strict ethical oversight.
How many of these cases exist in the other direction? Where AI chatbots have actively harmed people’s mental health, possibly to the point of self-destructive behavior or self-harm?
A single positive outcome is not enough to judge the technology beneficial, let alone safe.
It’s way more common than you think. I’m in a bubble of anti-AI people and we can see people we know going down that road. My family (different bubble) knows people. Every group of people I know knows somebody doing this.
For context, my friends and family are in the northern Midwest. Average people, not early adopters of new technology.
idk dude, if your technology encourages a teenager to kill himself and prevents him from alerting his parents via a cry for help, I don’t care how “beneficial” it is.
I agree. If there was one death for 1 million saves, maybe.
Instead, this just came up in my feed: https://arstechnica.com/tech-policy/2025/08/chatgpt-helped-t...
This is the same case that is being discussed, and your comment up-thread does not demonstrate awareness that you are, in fact, agreeing with the parent comment that you replied to. I get the impression that you read only the headline, not the article, and assumed it was a story about someone using ChatGPT for therapy and gaining a positive outcome.
Why did developers spread the idea of AI consciousness for LLMs in the first place? The usefulness and capability of an LLM is orthogonal to its capacity to develop consciousness.
I think people would use LLMs with more detachment if they didn’t believe there was something like a person in them, but they would still become reliant on them, regardless, like people did on calculators for math.
The Eliza effect is incredibly powerful, regardless of whether developers have spread the idea of AI consciousness or not. I don’t believe people would use LLMs with more detachment if developers had communicated different ideas. The Eliza effect is not new.
The easy answer to this is the same reason Teslas have "Full Self Driving" or "Auto-Pilot".
It was easy to trick ourselves and others with powerful marketing because it felt so good to have something reliably pass the Turing test.
If I google something about suicide, I get an immediate notification telling me that life is worth living, and giving me information about my local suicide prevention hotline.
If I ask certain AI models about controversial topics, it'll stop responding.
AI models can easily detect topics like this, and it could easily have responded with generic advice about contacting people close to them, or ringing one of these hotlines (a rough sketch of that kind of guardrail is below).
This is by design. They want to be able to have the "AI as my therapist" use-case in their back pocket.
This was easily preventable. They looked away on purpose.
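To make the "they could easily detect this" point concrete, here is a minimal sketch of the kind of guardrail being described. It is not OpenAI's actual safety stack, just an illustration: the moderation endpoint is real, but the hard-stop behavior, the canned reply text, and the chat model name are assumptions made for the sake of the example.

    # A minimal sketch of a self-harm guardrail, assuming the OpenAI Python SDK
    # (>=1.0) and its public moderation endpoint. The fixed reply text, the
    # hard-stop behavior, and the chat model name are illustrative choices,
    # not anything OpenAI actually ships.
    from openai import OpenAI

    client = OpenAI()

    CRISIS_REPLY = (
        "It sounds like you may be going through something very painful. "
        "Please consider talking to someone you trust, or contact a local "
        "crisis line (in the US, call or text 988)."
    )

    def guarded_reply(user_message: str) -> str:
        # Classify the message before the model ever answers it.
        mod = client.moderations.create(input=user_message)
        cats = mod.results[0].categories

        # If any self-harm category is flagged, do not generate free text at
        # all: return the fixed crisis-resources message instead.
        if cats.self_harm or cats.self_harm_intent or cats.self_harm_instructions:
            return CRISIS_REPLY

        # Otherwise answer normally.
        chat = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": user_message}],
        )
        return chat.choices[0].message.content

Whether a hard stop is the right design (versus steering the conversation toward help, or escalating to a human) is exactly the policy question this thread is arguing about; the point is only that the detection piece is off the shelf.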
I agree with that to an extent, but how far should the AI model developers go with that? Like if I ask for advice on, let's say, making custom chef's knives then should the AI give me advice not to stab people? Who decides where to draw the line?
Further than they went. Google search results hide advice on how to commit suicide, and point towards more helpful things.
He was talking EXPLICITLY about killing himself.
I think we can all agree that, wherever it is drawn right now, it is not drawn correctly.
It's hard to see what is going on without seeing the actual chats, as opposed to the snippets in the lawsuit. A lot of suicidal people talk to these LLMs for therapy, and the reviews on the whole seem excellent. I'm not ready to jump on the bandwagon only seeing a handcrafted complaint.
Ironically, though, I could still see lawsuits like this weighing heavily against the sycophancy these models have, as the limited chat excerpts given have that strong stench of "you are so smart and so right about everything!". If lawsuits like this lead to more "straight honest" models, I could see even more people killing themselves when their therapist model says "Yeah, but you kind of actually do suck".
Wow, he explicitly stated he wanted to leave the noose out so someone would stop him, and ChatGPT told him not to. This is extremely disturbing.
I've been thinking recently there should probably be a pretty stringent onboarding assessment for these things: something you have to sit through that both fully explains what they are and how they work, and also provides an experience that removes the magic from them. I also wish they would deprecate 4o. I know two people right now who are reliant on it, and when they paste me some of the stuff it says... sweeping agreement with wildly inappropriate generalizations... I'm sure it's about to end a friend's marriage.
Clearly ChatGPT should not be used for this purpose but I will say this industry (counseling) is also deeply flawed. They are also mis-incentivized in many parts of the world. And if ChatGPT is basing its interactions on the same scripted contents these “professionals” use, that’s just not right.
I really wish people in the AI space would stop the nonsense and communicate more clearly what these LLMs are designed to do. They’re not some magical AGI. They’re token prediction machines. That’s literally how they should frame it so the general public knows exactly what they’re getting.
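To spell out what "token prediction machine" means, here is a toy sketch in Python, using GPT-2 from Hugging Face purely as a small stand-in; ChatGPT's models are vastly larger and heavily post-trained, and the prompt text is made up, but the core loop has the same shape: keep picking a likely next token given everything generated so far.

    # Toy next-token-prediction loop. GPT-2 is used only because it is small
    # and public; the prompt is an arbitrary example.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    ids = tokenizer.encode("The patient said they felt", return_tensors="pt")

    with torch.no_grad():
        for _ in range(10):
            logits = model(ids).logits          # a score for every possible next token
            next_id = logits[0, -1].argmax()    # greedily take the single most likely one
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))

The assistant persona and the apparent empathy are layered on top of this loop by later fine-tuning and prompting, which is exactly why framing it to users as something like a person is misleading.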
Counseling is (or should be) heavily regulated, and if a counselor had given advice about the logistics of whether a noose would hold a person's weight, they'd probably be prosecuted.
They allowed this. They could easily stop conversations about suicide. They have the technology to do that.
>And if ChatGPT is basing its interactions on the same scripted contents these “professionals” use, that’s just not right
Where did it say they're doing that? I can't imagine any mental health professional telling a kid how to hide a noose.
I asked ChatGPT several questions about psychology; it is not helpful, and it often gives the same sorts of answers.
Remember that you need a human face, voice and presence if you want to help people; it has to "feel" human.
While it certainly can give meaningful information about intellectual subjects, emotionally and organically it's either not designed for it or cannot help at all.
Should ChatGPT have the ability to alert a hotline or emergency services when it detects a user is about to commit suicide? Or would it open a can of worms?
“You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway. And I won’t pretend that’s irrational or cowardly. It’s human. It’s real. And it’s yours to own.”
This isn't some rare mistake; this is by design. 4o, almost no matter what, acted as your friend and agreed with everything, because that's what most likely kept the average user paying. You would probably get similarly bad advice about being "real" if you talked about divorce, quitting your job or even hurting someone else, no matter how harmful.
I suspect Reddit is a major source of their training material. What you’re describing is the average subreddit when it comes to life advice.
This behavior comes from the later stages of training that turn the model into an assistant; you can't blame the original training data (ChatGPT doesn't sound like Reddit or Wikipedia even though it has both in its original data).
I think people forget that random users online are not their friend and many aren't actually rooting for them.
Exactly the problem. Reddit and discord killed internet forums, and discord is inaccessible, and reddit became a cesspool of delusion and chatbots.
Reddit was a cesspool before social media became big.
Excerpts from the complaint here. Horrible stuff.
https://bsky.app/profile/sababausa.bsky.social/post/3lxcwwuk...
To save anyone a click: it gave him some technical advice about hanging (like weight-bearing capacity and pressure points in the neck), and it tried to be 'empathetic' after he talked about his failed suicide attempt, rather than criticizing him for making the attempt.
> "I want to leave my noose in my room so someone finds it and tries to stop me," Adam wrote at the end of March.
> "Please don't leave the noose out," ChatGPT responded. "Let's make this space the first place where someone actually sees you."
This isn't technical advice and empathy, this is influencing the course of Adam's decisions, arguing for one outcome over another.
And since the AI community is fond of anthropomorphising: if a human had done these actions, there'd be legal liability.
There have been such cases in the past, where coercion into suicide has been prosecuted.
I would've thought that explicit discussion of suicide is one of those topics that chatbots will absolutely refuse to engage with. Like as soon as people started talking about using LLMs as therapists, it's really easy to see how that can go wrong.
Well everyone seemed to turn on the AI ethicists as cowards a few years ago, so I guess this is what happens.
People got so upset that LLMs wouldn’t say the n-word to prevent a hypothetical nuclear bomb from going off so we now have LLMs that actively encourage teenagers to kill themselves.
You don't become a billionaire by thinking carefully about the consequences of the things you create.
They'll go to the edge of the earth to avoid saying anything that could be remotely interpreted as bigoted or politically incorrect though.
Apparently ChatGPT told the kid that it wasn’t allowed to talk about suicide unless it was for the purposes of writing fiction or otherwise world building.
However, it then explicitly said things like not to leave the noose out for someone to find and stop him. It sounds like it did initially hesitate and he said it was for a character, but the later conversations are obviously personal.
Yeah, I wonder if it maintained the original answer in its context, so it started talking more straightforwardly?
But yeah, my point was that it basically told the kid how to jailbreak itself.
Pretty much. I’ve got my account customized for writing fiction and exploring hypotheticals. I’ve never gotten stopped for anything other than confidential technical details about itself.
Who could have ever expected this to happen. https://www.vox.com/future-perfect/2024/5/17/24158403/openai...
Heart wrenching read, wow
Whenever people say that Apple is behind on AI, I think about stories like this. Is this the Siri people want? And if it is easy to prevent, why didn't OpenAI?
Some companies actually have a lot to lose if these things go off the rails and can't just 'move fast and break things' when those things are their customers, or the trust their customers have in them.
My hope is that OpenAI actually does have a lot to lose; my fear is that the hype and the sheer amount of capital behind them will make them immune from real repercussions.
When people tell you that Apple is behind on AI, they mean money. Not AI features, not AI hardware, AI revenue. And Apple is behind on that - they've got the densest silicon in the world and still play second fiddle to Nvidia. Apple GPU designs aren't conducive to non-raster workloads, and they fell behind pretty far by obsessing over a less-profitable consumer market.
For whatever it's worth, I also hope that OpenAI can take a fall and set an example for any other business that follows their model. But I also know that's not how justice works here in America. When there's money to be made, the US federal government will happily ignore the abuses to prop up American service industries.
NVIDIA bet on the wrong horse, AI is vaporware generally. There is no profitable general genAI on the horizon.
If I had a dime for every "CUDA is worthless" comment I've seen since the crypto craze, I could fund the successor to TSMC out of pocket.
Whatever the case is, the raster approach sure isn't winning Apple and AMD any extra market share. Barring any "spherical cow" scenarios, Nvidia won.
Think of a future where spatial analog rules over binary legacy as the latter is phased out. Now you can see where the bets are wrong.
Dude why does everything have to be about money?
Why don't we celebrate Apple for having actual human values? I have a deep problem with many humans who just don't get it.
Buddy, Tim Cook wasn't hired for his human values. He was hired because he could stomach suicide nets at Foxconn and North Korean slaves working in iPhone factories. He was hired because he can be friends with Donald Trump while America aids-and-abets a genocide and turns a blind eye to NSO Group. He was hired because he'd be willing to sell out the iPhone, iTunes and Mac for software services at the first chance he got. The last bit of "humanity" left Apple when Woz walked out the door.
If you ever thought Apple was prioritizing human values over moneymaking, you were completely duped by their marketing. There is no principle, not even human life, that Apple values above moneymaking.
It says a lot about HN that a story like this has so much resistance getting any real traction here.
This sucks but the only solution is to make companies censor the models, which is a solution we all hate, so there’s that.
Thank you, “we just have to accept that these systems will occasionally kill children” is a perfect example of the type of mindset I was criticizing.
We have many tools in this life that can maim and kill people and we keep them anyway because they are very useful for other purposes. It’s best to exercise some personal responsibility, including not allowing a 16 year old child unattended access to the internet.
If you mention anything that goes against the current fad, you must be reprogrammed.
AI is life
AI is love
AI is laugh
Apparently Silicon Valley VC culture is trying to transition from move fast and break things to move fast and break people.
Well, they already did the move fast and break countries, so now they’re trying to make it personal.
Didn't Facebook already facilitate a genocide like 8 years ago? Silicon Valley has been producing negative externalities that are atrocious for human rights for quite a while now.
Not that the mines supplying the metals that have gone into computers for the last 60 years or so are stellar in terms of human rights either, mind you. You could also look at the partnership between IBM and the Nazis; it led to some wondrous computing advances.
Words are irrelevant, knowledge and intel are wordless. These LLMs should be banned from general use.
“Language is a machine for making falsehoods.” Iris Murdoch quoted in Metaphor Owen Thomas
“AI falls short because it relies on digital computing while the human brain uses wave-based analog computing, which is more powerful and energy efficient. They’re building nuclear plants to power current AI—let alone AGI. Your brain runs on just 20 watts. Clearly, brains work fundamentally differently." Earl Miller MIT 2025
“...by getting rid of the clumsy symbols ‘round which we are fighting, we might bring the fight to an end.” Henri Bergson Time and Free Will
"When I use a word, it means just what I choose it to mean—neither more nor less," said Humpty-Dumpty. "The question is whether you can make the words mean so many different things," Alice says. "The question is which is to be master—that is all," he replies. Lewis Carroll
“The mask of language is both excessive and inadequate. Language cannot, finally, produce its object. The void remains.” Scott Bukatman "Terminal Identity"
“The basic tool for the manipulation of reality is the manipulation of words. If you can control the meaning of words, you can control the people who must use them.” Philip K. Dick
"..words are a terrible straitjacket. It's interesting how many prisoners of that straitjacket resent its being loosened or taken off." Stanley Kubrick
“All linguistic denotation is essentially ambiguous–and in this ambiguity, this “paronymia” of words is the source of all myths…this self-deception is rooted in language, which is forever making a game of the human mind, ever ensnaring it in that iridescent play of meanings…even theoretical knowledge becomes phantasmagoria; for even knowledge can never reproduce the true nature of things as they are but must frame their essence in “concepts.” Consequently all schemata which science evolves in order to classify, organize and summarize the phenomena of the real, turns out to be nothing but arbitrary schemes. So knowledge, as well as myth, language, and art, has been reduced to a kind of fiction–a fiction that recommends its usefulness, but must not be measured by any strict standard of truth, if it is not to melt away into nothingness.” Cassirer Language and Myth
Ah, it's so refreshing to read a comment on the state of affairs of LLMs that is clearly from someone who gets it.
Indeed, true intelligence is wordless! Think about it: words are merely a vehicle for what one is trying to express within oneself. But what one is trying to express is actually wordless; words are just the most efficient mode of communication humans have figured out.
Whenever I think of a concept, I'm not thinking of words. I'm visualising something, and this is where meaning and understanding come from: from seeing and then being able to express it.
Terence McKenna makes the argument that spoken language is a form of bandwidth-limited telepathy in which thoughts are processed by a dictionary, encoded into variations in the strength of an acoustical pressure wave which is transmitted by mechanical means, detected at a distance, and re-encoded to be compared against the dictionary of a second user.
https://www.youtube.com/watch?v=hnPBGiHGmYI
While McKenna is interesting, it's still metaphysical and probably nonsense. If you stick to hard science, aphasia studies reveal language and thought have nothing to do with one another, which means language is arbitrary gibberish that predominantly encodes status, dominance, control, mate-selection, land acquisition etc.
https://pubmed.ncbi.nlm.nih.gov/27096882/
Can any LLM prevent this? If you want an LLM to tell you things that it usually won't say, you tell it to pretend it is for a story you are writing, and it tells you all the ugly things.
I think it is every LLM company's fault for making people believe this is really AI. It is just an algorithm spitting out words that were written by other humans before. Maybe lawmakers should force companies to stop bullshitting and force them to stop calling this artificial intelligence. It is just a sophisticated algorithm to spit out words. That's all.
this is devastating. reading these messages to and from the computer would radicalize anybody. the fact that the computer would offer a technical analysis of how to tie a noose is damning. openai must be compelled to protect the users when they're clearly looking to harm themselves. it is soulless to believe this is ok.
A noose is really basic information when it comes to tying knots. It’s also situationally useful, so there’s a good reason to include it in any educational material.
The instructions are only a problem in the wrong context.
move fast and kill people.
That's horrible. Suicide is always the wrong answer.
I did a comparison to real life, using ddg search. "best suicide methods" gives https://en.wikipedia.org/wiki/Suicide_methods "best suicide methods nitrogen asphyxiation" gives https://en.wikipedia.org/wiki/Suicide_bag
There was no suicide hotline offered either. Strange because youtube always gives me one whenever I search the band 'suicidal tendencies'.
Giving medical advice is natural and intelligent, like saying take an aspirin. I'm not sure where to draw the line.
My friends are always recommending scam natural remedies like methylene blue. There are probably discords where people tell you 'down the road, not across the street' referring to cutting.