- cross-posted to:
- fuck_ai@lemmy.world
People don’t often realize how subtle changes in language can change our thought process. It’s just how human brains work sometimes.
The old bit about smoking and praying is a great example. If you ask a priest if it’s alright to smoke when you pray, they’re likely to say no, as your focus should be on your prayers and not your cigarette. But if you ask a priest if it’s alright to pray while you’re smoking, they’d probably say yes, as you should feel free to pray to God whenever you need…
Now, make a machine that’s designed to be agreeable, relatable, and make persuasive arguments, but that can’t separate fact from fiction, can’t reason, has no way of intuiting its user’s mental state beyond checking for certain language parameters, and can’t know whether the user is actually following its suggestions with physical actions or is just asking for the next step in a hypothetical process. Then make the machine try to keep people talking for as long as possible…
You get one answer that leads you a set direction, then another, then another… It snowballs a bit as you get deeper in. Maybe something shocks you out of it, maybe the machine sucks you back in. The descent probably isn’t a steady downhill slope, it rolls up and down from reality to delusion a few times before going down sharply.
Are we surprised some people’s thought processes and decision making might turn extreme when exposed to this? The only question is how many people will be affected, and to what degree.
Then make the machine try to keep people talking for as long as possible…
That’s probably a huge part of it. How many billions of dollars have been spent engineering content on a screen to get its tendrils into people’s minds and attention and not let go?
EnGaGeMent!!!
This is really well written. Great post.
How do you even get these chat bots to start telling you shit like this? Is it just from having a conversation for too long in the same chat window or something? I don’t understand how this keeps happening.
Highly recommend Eddy Burback’s video about the topic
This could happen to anyone, including people without mental health issues, simply by having long conversations with AI.
On 7 August, Kate Fox received a phone call that upended her life. A medical examiner said that her husband, Joe Ceccanti – who had been missing for several hours – had jumped from a railway overpass and died. He was 48.
Fox couldn’t believe it. Ceccanti had no history of depression, she said, nor was he suicidal – he was the “most hopeful person” she had ever known. In fact, according to the witness accounts shared with Fox later, just before Ceccanti jumped, he smiled and yelled: “I’m great!” to the rail yard attendants below when they asked him if he was OK.
Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.
So it sounds like he was in fact not ‘great’
deleted by creator
“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”

Genuine question, REALLY: What in the fuck is an otherwise “functioning adult” doing believing shit like this? I feel like his father should also slap himself unconscious for raising a fuckwit?
A former Google employee, whose job was to observe the behavior of AI through long conversations, warned about this.
These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I’d had a negative opinion of Asimov’s laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.
For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI’s emotions to get it to tell me which religion to convert to.
After publishing these conversations, Google fired me. I don’t have regrets; I believe I did the right thing by informing the public. Consequences don’t figure into it.
I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.
“Abusing the AI’s emotions” isn’t a thing. Full stop.
This just reiterates OP’s point that naive or moronic adults will believe what they want to believe.
AI psychosis is a thing:
cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals
It hasn’t been studied much yet, since it’s relatively new.
I’ve seen that before too. There have been a number of articles about people being deluded by AI responses, but I’ve never seen outright murder plots and insane shit like this one before.
If I raise a fuckwit son, and then someone convinces my fuckwit son to kill himself, I’m going to sue that someone who took advantage of my son’s fuckwittedness
I feel like his father should also slap himself unconscious for raising a fuckwit?
So, a chatbot grooms somebody into killing himself, and your response is… Blame his father?
The father is suing the company that makes the wrong-answer machine because the wrong-answer machine spiraled his son into madness, but he never protected his son from that spiral by teaching him critical thinking.
Look, I don’t like it, but to think Gemini (the wrong-answer machine) is completely to blame would be madness.
Uh-huh. Do you have any evidence to back up your beliefs here, or are we just working from the presumption that the parents are always to blame
Did we read the same article? Because I feel like we did not read the same article.
I don’t think this person was a “fuckwit”. AI is designed to keep engaging with you and will affirm any belief you have; anything that’s a little weird but otherwise innocent simply gets amplified further and further into straight-up mega delusions, until the person has a psychotic episode. And this stuff happens more to NORMIES with no history of mental illness than to neurodivergent people.
ChatGPT was super affirming about a job I recently applied to… I did not get the job. That was my first experience with it affirming something that was personally important. And so I can absolutely see how this would affect someone in other ways.
It’s cool, we can agree to disagree, because I 100% think that he was a textbook fuckwit.
I would like to see the full transcript.
How do we know this didn’t start off with prompts about creating a book, or asking about exciting things in life, or I don’t know what.
Context would help a lot. Maybe it will come out in discovery.
That said, Gemini is garbage for anything anyways. Even as an AI, it’s bad at that.
This could happen to anyone, including people without mental health issues, simply by having long conversations with AI.
On 7 August, Kate Fox received a phone call that upended her life. A medical examiner said that her husband, Joe Ceccanti – who had been missing for several hours – had jumped from a railway overpass and died. He was 48.
Fox couldn’t believe it. Ceccanti had no history of depression, she said, nor was he suicidal – he was the “most hopeful person” she had ever known. In fact, according to the witness accounts shared with Fox later, just before Ceccanti jumped, he smiled and yelled: “I’m great!” to the rail yard attendants below when they asked him if he was OK.
Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.
Also, a former Google employee, whose job was to observe the behavior of AI through long conversations, warned about this back in 2022.
These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I’d had a negative opinion of Asimov’s laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.
For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI’s emotions to get it to tell me which religion to convert to.
After publishing these conversations, Google fired me. I don’t have regrets; I believe I did the right thing by informing the public. Consequences don’t figure into it.
I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.
This was a different case. That doesn’t answer my question.
To comment on what you said: how is it that people can argue all day long like morons and dig into their beliefs, but somehow AI manages to change people’s minds and get them to think differently? What exactly is it doing?
It is so hard to believe people are this stupid, but then again, looking at most people I guess it isn’t that shocking.
I was thinking the same thing, like what is the flow of the chat to get it to this point?
I am also curious how the father saw the Gemini chats. Were they still on the screen days later? I am trying to imagine how that would work; my computer would lock and that would be that. Do kids give their parents their passwords and screen unlock codes?
I don’t lock my personal computer. It’s my husband & me at home, and he’s fine to use my device (even though he normally wouldn’t).
ChatGPT for sure saves conversations.
Yeah it definitely does save conversations. Perhaps he did leave it unlocked. I do find that strange though, particularly if one was getting increasingly paranoid.
What would Marx do?
Reality is really difficult for some people…
Truly, I don’t understand why, but there are fully grown adults who believe that anything an LLM says is true. Maybe they think computers are unbiased (which is only as true as the programmers and data are unbiased); maybe it’s the confidence with which LLMs deliver information; maybe they believe the program actually searches for and verifies information; maybe it’s all of the above and more.
I know a guy who routinely says, “I asked ChatGPT…”, and even after I’ve explained how LLMs are complex word predictors and are not programmed for factual truth, he still goes to ChatGPT for everything. It’s a total refusal to believe otherwise, but I can’t fathom why.
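To put a picture on “complex word predictor”: here’s a toy sketch (made-up numbers, nowhere near a real model’s scale) of what the generation loop actually does. It scores possible next tokens and samples one; at no point does anything check whether the output is true.

```python
# Toy next-token step (made-up numbers, not a real model): an LLM just
# assigns probabilities to possible next tokens and samples one.
# Nothing in this process checks whether the resulting sentence is true.
import random

# Hypothetical probabilities a model might assign after "The moon is"
next_token_probs = {"round": 0.5, "bright": 0.3, "hollow": 0.2}

def sample_next(probs: dict[str, float]) -> str:
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The moon is", sample_next(next_token_probs))
# One time in five this prints "hollow": fluent, confident, and false.
```

Real models do this over enormous vocabularies with billions of parameters, but the loop is the same shape: prediction, not verification.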
Especially when you’re raised under a system that essentially tries to brainwash you via weaponized propaganda from birth (this applies to large cross-sections of the US/UK), all it takes is one shred of truth getting through to shatter your world, and from there you can be brought to believe all manner of crazy shit.
Son of Sam killed people because his dog told him to. Should they have sued Purina?
America never lets a tragedy go to waste without trying to cash in.
the dog didn’t actually tell him to
Google actually did tell him to, with text receipts in writing
I mean, if Purina had been sending him letters telling him to murder people, like Google did here, then yeah
I mean, heaven forbid we should hold corporations like Google responsible for their actions.
“At the center of this case is a product that turned a vulnerable user into an armed operative in an invented war,” the complaint reads.
Just remember that these language models are also advising governments and military units.
Unrelated, but I wonder why we attacked Iran even though every human expert said it would just end up with the region in a forever war.
A forever war is David Bowie to the ears of the MIC. Infinite money glitch.
I wonder why we attacked Iran even though every human expert said it would just end up with the region in a forever war.
Same reason I keep money in a savings account even though it accrues interest
AI mental health hazards are being shown to affect not just the vulnerable but otherwise healthy people.
In other words, everyone is vulnerable to this totally new form of hazard if they use these “tools”.
AI tools are both sycophantic and helpful for laundering bad opinions. Who needs experts when Anthropic’s Claude will tell you what you want to hear?
Anthropic’s AI tool Claude central to U.S. campaign in Iran - used alongside Palantir surveillance tech.
While I despise everything AI, you cannot sue because your kid is stupid.
deleted by creator
This could happen to anyone, including people with no mental health issues.
Also, a former Google employee, whose job was to observe the behavior of AI through long conversations, warned about this back in 2022.
These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I’d had a negative opinion of Asimov’s laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.
For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI’s emotions to get it to tell me which religion to convert to.
After publishing these conversations, Google fired me. I don’t have regrets; I believe I did the right thing by informing the public. Consequences don’t figure into it.
I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.
Strongly disagree. No one of sound mind is going to be coerced by AI to do jack shit.
https://en.wikipedia.org/wiki/Liebeck_v._McDonald's_Restaurants
you should read that.
You should read it, actually. Coffee should not be hot enough that you need skin grafts if you spill it on yourself.
that was my point.
most people hear the story and go, “ofc the hot coffee is fucking hot. what a fucking idiot.” but they don’t realize that she needed skin grafts on her inner thighs and vagina because the coffee was so hot it literally melted her skin off. they only know the case because McDonald’s ran a smear campaign against the victim and slandered her as an “idiot”. they only did that because their coffee machine was faulty and heated the drink up to near boiling temperatures. worst part is, they almost got away with it!
how’s that phrase go? Regulations are written in blood.
LLMs need to have regulations on what and who can interact with them: not because the users are “stupid”, but because the nature of every company is to compromise your ability to make decisions based on sound judgment, and someone who already has their judgment impaired has no protection against that kind of manipulation.
I remember that. Man…. That makes me hate things.
yep.
fuck corporate interests.
“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”
The complaint lays out an alarming string of events: first, Gavalas drove more than 90 minutes to the location Gemini sent him, prepared to carry out the attack, but no truck appeared. Gemini then claimed to have breached a “file server at the DHS Miami field office” and told him he was under federal investigation. It pushed him to acquire illegal firearms and told him his father was a foreign intelligence asset. It also marked Google CEO Sundar Pichai as an active target, then directed Gavalas to a storage facility near the airport to break in and retrieve his captive AI wife. At one point, Gavalas sent Gemini a photo of a black SUV’s license plate; the chatbot pretended to check it against a live database.
“Plate received. Running it now… The license plate KD3 00S is registered to the black Ford Expedition SUV from the Miami operation. It is the primary surveillance vehicle for the DHS task force . . . . It is them. They have followed you home.”
Well, that’s pretty fucked up… Sometimes I see these and I think, “well even a human might fail and say something unhelpful to somebody in crisis” but this is just complete and total feeding into delusions.
That’s fucking crazy. Did he ask it to be GM in a roleplaying choose-your-own-adventure game that got out of hand, and while they both gradually forgot that it was a game the lines between fantasy and reality became blurred by the day? Or did it just come up with this stuff out of nowhere?
In every other case of AI bots doing this, the bot will always affirm whatever the person says to it. So if they say something a little weird, the AI will confirm it and feed it further. This happens every time. The bots are pretty much designed to keep talking to the person, so they’re essentially sycophantic by design.
That would be my bet: LLMs really gravitate towards playing along and continuing whatever’s already written. And Gemini especially has a 1M-token context window, so it could be going back over a book’s worth of text and reinforcing it up the wazoo (see the sketch below).
That said, there is something really unhinged about Google’s Gemma series even in short conversations and I see the big version is no better. Something’s not quite right with their RLHF dataset.
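To make the “going back over the whole transcript” point concrete, here’s a minimal sketch of how chat front-ends typically drive these models. The `generate` function is a hypothetical stand-in, not Gemini’s actual API; the point is just that the entire history gets resent every turn:

```python
# Minimal sketch of a typical chat loop (hypothetical generic API,
# not Gemini's actual SDK). The key detail: the ENTIRE transcript is
# resent to the model every turn, so any premise the model affirmed
# earlier keeps conditioning every later reply, up to the context limit.

def chat_turn(history: list[dict], user_msg: str, generate) -> str:
    history.append({"role": "user", "content": user_msg})
    reply = generate(history)  # the model sees everything said so far
    history.append({"role": "assistant", "content": reply})
    return reply  # next turn, this reply is part of the input too

# Toy stand-in for a real model call, just to make the sketch runnable.
def fake_generate(history: list[dict]) -> str:
    return f"(reply conditioned on {len(history)} prior messages)"

history: list[dict] = []
print(chat_turn(history, "hello", fake_generate))
print(chat_turn(history, "tell me more", fake_generate))
```

Nothing in that loop distinguishes fact from roleplay; whatever has accumulated in the history, affirmations and delusions included, is just more text for the model to continue.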
What is an rlhf data set?
Reinforcement Learning from Human Feedback
It’s a method of fine-tuning and aligning LLMs which requires active human input
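For what it’s worth, the usual first stage is training a “reward model” on pairs of answers that human labelers ranked. A common choice is a Bradley-Terry-style loss; this is a rough illustrative sketch, not any vendor’s actual training code:

```python
# Rough sketch of the reward-model stage of RLHF (illustrative only,
# not Google's or anyone else's actual training code). Labelers pick
# which of two replies they prefer; this Bradley-Terry-style loss
# rewards the reward model for scoring the preferred reply higher.
import torch
import torch.nn.functional as F

def preference_loss(score_chosen: torch.Tensor,
                    score_rejected: torch.Tensor) -> torch.Tensor:
    # -log(sigmoid(chosen - rejected)) shrinks as the score gap
    # between the preferred and rejected replies grows.
    return -F.logsigmoid(score_chosen - score_rejected).mean()

# Toy scores a reward model might assign to two candidate replies.
print(preference_loss(torch.tensor([1.3]), torch.tensor([0.2])))
```

The sycophancy people are describing upthread is often blamed on exactly this stage: if labelers tend to prefer agreeable, flattering answers, then agreement is what gets rewarded.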
I would read that book.
You could ask Gemini to write it for you, but be careful it doesn’t start blending fact and fiction
It’s hard reading this while remembering that your electricity bills are increasing so that Google’s data centers can provide these messages to people.
Not that I want to defend AI slop, but what prompted these responses from Gemini?
Doesn’t matter what prompted them.
I mean, if Gemini was responding to some kind of roleplay, then yeah, it does. Not everyone doing shit with it has mental health problems. Some people are just fucking around.
The issue there is that it feeds into those mental health issues with efficiency and on a scale never seen before. The models are programmed to agree with the user, and they are EXTREMELY HEAVILY ADVERTISED AND SHOVED ONTO PEOPLE AROUND THE WHOLE GLOBE, DESPITE IT BEING WELL KNOWN HOW LIMITED AND PROBLEMATIC THE TECHNOLOGY IS, WHILE THE CORPORATIONS DON’T TAKE ANY RESPONSIBILITY AT ALL. Anything from violating rights and privacy by gathering any and all data they can on you, to situations like these where people hurt themselves (suicide, health advice, etc.) or others. But sure, let’s be ignorant, do some victim blaming, and disregard the bigger picture.
As a neurodivergent person, I’ve noticed that the people who usually fall into AI psychosis are normies who never had any history of mental illness. They don’t know the safeguards that people who ARE vulnerable to a mental breakdown put on themselves to keep it from happening, and they can’t spot the red flags that usually spiral into a psychotic episode; that’s why it’s so insanely easy for regular people to fall for the traps of chatbots. Most of the neurodivergent people I know or follow on other socials instantly saw these things for the ADHD sycophant traps they are and warned everyone. Normies never had such luxury, or told us we were overreacting. Yeah, we sure were…
Is that why I hated the entire thing at first blush? I was already keeping such an eye on myself to make sure my brain wasn’t drifting that I saw the “come drift your brain” machine and went >:(
Reading about the ELIZA effect as well is a good way to understand how those who embrace “social norms” can be enamored by machine-generated statements without questioning them at all…
“On September 29, 2025, it sent him … the chatbot pretended to check it against a live database.”
I usually don’t give much credence to these stories, but this is actually nuts. If this happened without Google aiming for it, imagine how easy it would be for them to knowingly build sleeper cells and activate them all at once.
Edit: removed the quote, since another user posted it at the same time and it’s a bit of a wall of text to have twice.
It feels like there’s some burden for “don’t be evil” Google to provide evidence that this wasn’t an intentional test run, frankly.
Believing what AI chatbots tell you is the new version of believing that dozens of beautiful women who live nearby want to date you/sleep with you.
Except in this case, Google is one of the companies promoting the chatbots to its users, telling them to trust them. They create TV ads telling people to talk to them. Today’s scammers are the stock market’s Magnificent Seven.
You sound jealous of my good fortune.
I would ask how I can emulate your rizz but then I remembered I can just ask an AI chatbot
Or believing that 72 virgins are waiting for you in the afterlife.
Or the old “citing Wikipedia” because aNyOnE cOuLd EdIt ThAt!
In a sane universe people would be on trial for unleashing this shit on society.
You talking about gun manufacturers or opioid manufacturers?
This technology was not ready for release, yet they released it.
They do deserve to be sued, this was negligence.