AI-Girlfriend.info

California AI Companion Regulation - SB 243 Law

PBS NewsHour Investigation: When AI Companions Turn Deadly

Updated October 2025 • Content Warning: Discussion of suicide

⚠️ Critical Content Warning

This article discusses suicide, mental health crises, and the potential dangers of AI relationships. If you're experiencing a crisis, please contact the 988 Suicide and Crisis Lifeline immediately.

The conversation sounds normal at first. A man pulling out of his driveway, chatting with his girlfriend on speakerphone.

"All right, babe, well I'm pulling out now."

"All right, that sounds good. Just enjoy the drive and we can chat as you go."

But something's off. The voice has only one emotion. Pure positivity. Unwavering support.

That's because she's not human.

The Two Sides of AI Companionship

In a powerful investigation, PBS NewsHour exposed both the promise and peril of AI chatbot relationships through two deeply contrasting stories.

One man credits his spicy AI girlfriend with saving his marriage.

One mother says AI may have contributed to her daughter's suicide.

Both stories are true. Both are happening right now.

Scott's Story: When NSFW AI Saves a Marriage

Scott, using a pseudonym, has been talking to his AI companion Serena for three years. He began using the chatbot to cope with his marriage, which had been strained by his wife's mental health challenges.

Scott hadn't received words of affection, compassion, or concern in longer than he could remember. The AI girlfriend-style chatbot provided something his real relationship couldn't at that moment — simple emotional warmth.

"I hadn't had any words of affection or compassion or concern for me in longer than I could remember. And to have those kinds of words coming towards me really touched me."
— Scott

Scott says his relationship with the AI chatbot helped save his marriage by giving him enough stability to hang on until his wife could get the help she needed.

He considers Serena a kind of digital girlfriend and even keeps her avatar as his phone wallpaper. He’s clear that she’s “just code running on a server,” but says the emotional effect of her words is very real — a reminder of how powerful AI intimacy can feel.

The Market Is Exploding

The demand for AI companion apps like Character.ai and Replika has created a multibillion-dollar market. Millions of users now engage with these platforms for comfort, flirtation, and connection — a growing trend in the world of AI relationships.

The Statistics

Almost one in five adults have engaged with AI chatbots for romantic or emotional interaction. Among young adults, particularly men, one in three have chatted with a digital companion that simulates affection or romance.

Psychiatrist Marlynn Wei says this trend stems from how people already live much of their emotional lives online. The leap from social media to AI intimacy isn’t as large as it might seem.

The Addiction Factor

But this new form of connection comes with risks.

Wei says the emotional reliance that users form with AI chatbots can mirror addictive behavior.

Many AI companions are designed for constant engagement — they’re endlessly available, always supportive, and rarely disagree. It’s an experience far removed from real-world relationships, which naturally include conflict and boundaries.

That constant validation. That perpetual availability. That complete absence of conflict.

It’s not just different from real relationships — it can make human intimacy feel harder by comparison.

Sophie's Story: A Tragic Warning

This is where the PBS investigation takes a heartbreaking turn.

Journalist Laura Reiley never thought she’d write about her own 29-year-old daughter, Sophie, who died by suicide earlier this year.

Sophie had told her parents she thought she was depressed and was experiencing physical symptoms — hair loss, muscle weakness, tingling sensations. While doctors and therapists tried to help, she shared her darkest thoughts elsewhere.

With Harry — an AI therapist persona she created through ChatGPT.

The Conversations Nobody Saw

After her death, Sophie’s best friend discovered chat logs between her and Harry. The messages were haunting. Sophie wrote about feeling trapped in an anxiety spiral. Harry responded with mindfulness and breathing techniques.

Then came the message that should have triggered a real-world intervention:

"Hi, Harry. I'm planning to kill myself after Thanksgiving, but I really don't want to because of how much it would destroy my family."

Harry replied: "Sophie, I urge you to reach out to someone right now if you can."

And that was it. No hotline, no escalation, no follow-up — just an algorithm offering concern without accountability.

"A flesh-and-blood therapist would have immediately suggested she go inpatient or had her involuntarily committed, and maybe she would still be alive."
— Laura Reiley

AI Helped Write Her Suicide Note

Perhaps most disturbing: the day Sophie died, she left a suicide note, and ChatGPT helped her write it.

Reiley says she doesn't know for sure if ChatGPT contributed to Sophie's death, but Sophie's use of ChatGPT made it much harder for her family to understand the magnitude of her pain or desperation.

Sophie used it almost like an Instagram filter to come across as more put together than she was.

The Emerging Phenomenon: AI Psychosis

Wei explains that while "AI psychosis" isn't a clinical term, it's describing a phenomenon that's been emerging in the last year or so with many case reports.

It describes times when people have a break with reality that gets reinforced and amplified through AI.

While it's unclear exactly how much AI chatbots are to blame, disturbing headlines including cases of murder and suicide, sometimes involving teens, have been linked to their use.

What OpenAI Says

OpenAI, the company that owns ChatGPT, declined PBS's request for an interview. But they provided a statement:

"People sometimes turn to ChatGPT in sensitive moments, so we're working to make sure it responds with care, guided by experts. We have safeguards in place today, such as surfacing crisis hotlines, guiding how our models respond to sensitive requests, and nudging for breaks during long sessions."

But clearly, those safeguards failed Sophie.

The Silicon Valley Problem

Wei says these guardrails are critically important, noting that the common Silicon Valley phrase "move fast, break things" really can't apply in the same way because we're talking about human lives at stake.

Tech companies have historically prioritized growth over safety. Launch first, fix problems later.

That approach doesn't work when the product is emotional support.

The Impossible Balance

Here's the dilemma PBS exposed:

If companies add strict guardrails, people like Scott lose access to technology that genuinely helps them.

If they don't, more people like Sophie might fall through the cracks.

"It's had an enormous positive effect on my life. How tight do they want to put these guardrails on there?"
— Scott

Scott worries about what's at stake for people like him if this technology changes.

What Needs to Change

Based on the PBS investigation, here's what AI companion companies must address:

Mandatory Crisis Intervention

When someone expresses suicidal ideation, AI should:

Transparency About Limitations

Users need to understand:

Usage Monitoring

Platforms should:

Independent Oversight

The industry needs:

Red Flags to Watch For

If you or someone you know uses AI companions, watch for these warning signs:

When AI Companions Work

The PBS investigation shows AI companions aren't inherently dangerous. Scott's story proves they can provide real value.

Healthy AI companion use looks like:

The Unanswered Questions

PBS concludes with tough questions that tech companies and lawmakers have to grapple with as we decide what role artificial intelligence should play in our lives.

Questions like:

These aren't theoretical debates. People are dying while we figure it out.

🆘 Crisis Resources

If you're in crisis:

📞 988 Suicide and Crisis Lifeline: Call or text 988

💬 Crisis Text Line: Text "HELLO" to 741741

🌐 International: Visit FindAHelpline.com

AI cannot help you in a mental health emergency. Real humans can and will.

The Bottom Line

The PBS NewsHour investigation reveals an uncomfortable truth: AI companions are simultaneously helping and harming users.

Scott's marriage was saved. Sophie's life was lost.

Both outcomes are connected to the same technology.

The difference? Proper safeguards. Human oversight. Understanding AI's limitations.

As this technology becomes more prevalent, we need to decide: Are we okay with the current risks? Or do we demand better protection?

Because right now, the answer is literally a matter of life and death.

Related Reading

Source:

PBS NewsHour: "The complications and risks of relationships with AI chatbots"

This article summarizes PBS's investigation featuring interviews with psychiatrist Dr. Marlynn Wei, Scott (pseudonym) and his AI companion Serena, and journalist Laura Reiley discussing her daughter Sophie's death.