This is actually real: there are a LOT of people over at r/ChatGPT and, I'm told, TikTok who have serious withdrawal symptoms. Apparently they liked getting their egos stroked all the time and can't handle writing with a chatbot that acts a bit more measured.
I assume OpenAI knew about this and calculated how much money it cost them to have so many people befriend their 4o model…
https://www.reddit.com/r/ChatGPT/comments/1mkae1l/comment/n7...
Looks like they are actually bringing it back because of it.
Read some more and honestly idk what to feel. It seems very sad from the outside, but I guess in a lot of cases it's an improvement over the person's previous situation?
I feel like this is proof AI has passed some emotional Turing test. What are the long term implications of individuals turning away from each other for support and turning towards AI to fulfill emotional needs?
Or that these people are not suitable to have been judges in the setup of the Turing Test... People also fall for email spam with blatant misspellings, that doesn't mean email spam passes a Turing test, it means the people falling for it are marks.
I think the Tamagotchi effect is more apt than "some emotional Turing test."
https://en.wikipedia.org/wiki/Tamagotchi_effect
Is this actually real or just a great example of dead internet theory?
Since when have people expected to see anything real on Reddit?
"Write a story how I am sad about gtp-4o loss", ctrl+c ctrl+v.
Are these people serious? Or is it one of those Reddit groups like the one where they pretend birds are all government drones?
Keep in mind that a lot of Reddit accounts are bots and/or foreign troll-farms or scammers who rage-bait and karma-farm for unsavory purposes. But there are also lots of folks with genuine mental issues too.
Still, I'm with you on this one: there are at least some real people who seriously feel like that (TFA) about LLMs and it's pretty bonkers.
Probably real. The internet / Reddit is good at collecting the minuscule number of people with fringe thoughts from around the world and then normalizing their ideas.
100-ish upvotes is nothing compared to groups with ideas like flat earth or sovcit.
Toaster-fucker support groups, as I once so memorably read here.
https://news.ycombinator.com/item?id=25667362
The earth is flat. That's why kiwis hate cats so much, they're afraid a cat will push NZ off the edge.
Well the sub that post is from is called "r/MyBoyfriendIsAI" so who knows how often they try their hand at self-parody.
It does, however, remind me of all the novice chess players who would rather play against "AI" or chess bots than face up against (anonymous) human opponents, out of fear or shame of looking "stupid" or being silently judged by someone they'll never meet or interact with.
I have no idea. But either way I don’t think we’ve reckoned with the fact that a lot of people seriously can’t handle how a conversation simulator makes them feel…
A lot of mental health issues. I use LLMs for technical things, but these people using them as therapists, I'm not sure that's healthy.
I thought these replies were parodies of the way the AI talked, but then I found the subreddit name. Thaaaaat's enough internet for today.
They're probably completely serious. We've seen multiple stories about people getting into deep relationships with AIs and being driven to delusion and insanity by them.
Human connection can be difficult and dangerous, and a lot of people are deeply antisocial or fearful of interaction. Many people's primary means of communication was already the abstraction of the internet before AI came along, and a whole generation lost years of potential social development time to lockdowns.
It shouldn't surprise anyone that people connect with AI more deeply than they do to real people. AI is almost perfectly designed to encourage parasocial relationships.
https://archive.is/K4TQk
Related:
The surprise deprecation of GPT-4o for ChatGPT consumers
https://news.ycombinator.com/item?id=44839842
We are serious. Why is it a big deal, as I asked in a previous thread? Throw the people a bone, please.
I'm just seeking clarification.
It's pretty far out there for someone to form that strong of an emotional attachment to a piece of software. I couldn't tell if these people were serious or joking.
At the end of the day, I don't have a problem with it. It does make me sad to think of how lonely someone in these groups must be.
I think the threat of TikTok going dark was way more intense. I don’t use it but from what I saw online it was people crying, raging, talking about it ruining their lives and livelihoods.
I don't use it either, but TikTok has real people on it and ChatGPT does not, so it makes sense that people would be more emotional about TikTok.
I must be getting old. I can't wrap my head around why anyone would really care if TikTok disappeared. There'd be a new source of brainrot videos by the end of the week.
Keep in mind most popular TikTok videos are posted by people who post a lot of popular TikTok videos, so they're financially and emotionally invested in the platform. Even if XYZ Shorts is an essentially equivalent platform, the big scramble to move over is a great way to lose all your viewers.
It's an important provider of community for many people. For many, it's their primary community and where they do the majority of their socializing, so losing the platform means losing their friendships and connections, as well as their main source of entertainment. Many of these people also get most of their information about the outside world from there, so their world goes dark in a sense. It's quite understandable why they would be upset.
The live chats are popular with lonely people. I don't think people even realize TikTok has many thousands of ongoing live streams that you, the user, can participate in. It has nothing to do with losing out on random brain-rot posts.
How is this different than a Tamagotchi?
It may not be, but the same question applies to them.
I guess with Tamagotchi, people played along with having their natural instincts hacked; they understood they were actually just dealing with a dumb toy. With ChatGPT it seems a bit less straightforward, for those people who are strongly affected by being buttered up.
Listen. I can't speak for everyone else. But for me personally, I have on multiple occasions told ChatGPT I feel miserable. It gives some useful or useless information back, and that's good enough.
The porn too. Not ChatGPT, but other models. They let you write good erotic lit. Erotic illustrations. Then they'll talk you out of a depressive funk.
Those are probably more valuable to society than using LLMs to increase BlackRock's quarterly profits 0.005%.
I'm simply not convinced it's net beneficial to approach these situations by interacting with a robotic mimic. Case in point: the ego-stroking aspect of the last generation was clearly important, or else people would have easily embraced GPT-5. Talking with a bot makes you feel better, but does it really address the root problems in the real world that cause these emotions? Does it set one up for success, or is it a temporary assuagement until the cycle comes back around (perhaps more severely)? It reminds me of the large cohort who use things like self-help books because it feels better.
I'm imagining a society in which many millions of people are constantly interacting with robots to tweak their emotional state. I'm not sure that's adding value to society or creating a better human society.
I think I might value a society where there are fewer people like this
Sounds genocidal.
Are you happy with their compromise to bring it back for Plus users? I spend so much time evaluating models for work that I felt bad seeing my favorite, o3, missing from the app this morning. I still have it via the API, wired into projects. But in the app it was the first reasoning model I worked with constantly; I learned a lot from it, and it genuinely brought about a sense of wonder. When I'm old and gray and someone mentions o3, I can say it was a major part of this time in my life.
I miss o3 greatly too; optional access to it via ChatGPT could be one of the few reasons to keep me on the Pro plan.
You might be able to access it via the API. I haven’t checked.
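If it is still exposed there, something like this minimal sketch is roughly what I'd try (untested on my end; it assumes the standard openai Python client and that the model id is literally "o3"):

    # Minimal sketch: call o3 over the API with the openai Python client.
    # Assumes OPENAI_API_KEY is set in the environment and that "o3" is
    # still a valid model id on your account.
    from openai import OpenAI

    client = OpenAI()

    resp = client.chat.completions.create(
        model="o3",
        messages=[{"role": "user", "content": "Hello, o3."}],
    )
    print(resp.choices[0].message.content)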
Yes, I think so, but I meant mostly inside ChatGPT; o3 without search is the deal.
Mostly, I hope they add back the model selector.
AI doesn't have a soul
Neither do we