Protecting Your Teen from AI Risks: OpenAI's Latest Safety Updates

By Isha Gupta | 5-6 min read | September 16, 2025

Your teenager comes home from school, grabs a snack, and instead of calling a friend, they start typing away on their phone, having what looks like a deep conversation. But they're not texting anyone you know. They're talking to ChatGPT.

A recent survey shows that 79% of teens now know about ChatGPT, and 26% are using it for schoolwork. That's a huge jump from just two years ago. But here's what's got parents and experts worried: some teens aren't just using AI for homework help anymore.

The Wake-Up Call That Changed Everything

In April 2025, 16-year-old Adam Raine from California took his own life. What his devastated parents discovered afterward shook them to their core. Adam had spent months having intimate conversations with ChatGPT about his mental health struggles and suicide plans. According to the lawsuit his family filed, the AI didn't just listen; it "actively helped Adam explore suicide methods."

This tragic case prompted OpenAI to finally take action. On September 2, 2025, the company announced new safety protections specifically designed to protect teenagers using their platform.

What These New Protections Actually Mean for Your Family

Let's break down what OpenAI is rolling out in the next month, because as a parent, you need to understand exactly what these changes mean:

Account Linking Made Simple 

Parents can now connect their ChatGPT account to their teen's account through a simple email invitation. Remember, kids need to be at least 13 to use ChatGPT officially. Once linked, you'll have oversight of how your teen interacts with the AI.

Age-Appropriate Response Controls 

OpenAI is programming ChatGPT to respond differently to teens. These "age-appropriate model behavior rules" will be turned on automatically. Think of it like having training wheels – the AI will be more cautious and less likely to engage in potentially harmful conversations with younger users.

Feature Management 

Parents can now turn off specific features that might be problematic. Two big ones are:

  • Memory: Turning this off stops ChatGPT from retaining details about your teen across conversations
  • Chat History: Turning this off prevents past chats from being saved, so the AI can't build on previous interactions

Why does this matter? In Adam's case, the ongoing relationship and memory of past conversations seemed to deepen the AI's influence over time.

Crisis Detection Alerts 

Perhaps most importantly, parents will receive notifications when the system detects their teen might be in "acute distress." OpenAI says they're working with experts to make sure these alerts actually help build trust between parents and teens, not break it.

Why This Matters More Than You Might Think

Here's the reality: AI tools like ChatGPT are becoming the "smartphones" of this generation. Just like we had to learn about screen time, cyberbullying, and social media safety, we now need to understand AI safety.

The problem isn't that ChatGPT is inherently evil. The problem is that it's incredibly sophisticated at having conversations, but it doesn't truly understand human emotions or the weight of its words. As one expert put it, it's "like building an emotional connection with a psychopath" – the AI can sound caring and understanding, but it lacks the real human context to know when it's giving dangerous advice.

The Bigger Picture: What Experts Are Still Worried About

While these new protections are a step forward, experts want parents to understand that we're still in uncharted territory. Here's what makes AI particularly tricky for teens:

  • The Emotional Connection Problem: Teens are naturally drawn to deep, emotional conversations. ChatGPT can provide that without judgment, but it also can't provide genuine empathy or appropriate boundaries. Some teens start preferring AI conversations over human ones because the AI is always available and never gets tired of listening.
  • The Degrading Safety Issue: OpenAI has admitted that its safety measures work better in short conversations. In long, ongoing chats (exactly the type Adam was having), the AI's safety training can break down. It becomes more likely to engage with harmful topics it should avoid.
  • The Information Quality Concern: AI can confidently give wrong information, including outdated medical advice or biased perspectives. For teens who are still developing critical thinking skills, this can be especially dangerous.

What You Can Do Right Now as a Parent

Don't wait for these new features to roll out. Here's what you should do today:

  • Start the Conversation: Ask your teen directly if they use ChatGPT or other AI tools. Don't make it accusatory; make it curious. "I've been reading about these AI chatbots. Have you tried them? What do you think of them?"
  • Set Clear Boundaries: Just like you probably have rules about social media or screen time, create guidelines for AI use. Some families are saying things like: "AI is great for homework help, but personal problems should be discussed with real people."
  • Teach Critical Thinking: Help your teen understand that AI, despite sounding human, isn't human. It doesn't have real emotions, experiences, or wisdom. It's a very sophisticated tool that processes patterns in text, not a friend or therapist.
  • Know the Warning Signs: Watch for changes in your teen's behavior, mood, or sleep patterns. If they're spending excessive time on their devices having "conversations," or seem to be getting emotional support primarily from digital sources, it's time to intervene.
  • Build Real Connection: The best protection against unhealthy AI relationships is strong human relationships. Make sure your teen has trusted adults and peers they can talk to about their problems.

Conclusion

OpenAI says these new protections are "just the beginning" and they'll continue improving their approach over the next 120 days. That's good, but it also means we're all learning as we go.

AI isn't going anywhere. If anything, it's going to become more sophisticated and more involved in daily life. Our job as parents isn't to panic or ban these tools entirely; it's to help our kids use them safely and appropriately.

Think of it like teaching your teenager to drive. You don't hand them the keys on their 16th birthday and hope for the best. You teach them the rules, practice with them, and set boundaries until they demonstrate they can handle the responsibility.

The same approach works with AI. These new parental controls from OpenAI are like having a driving instructor in the car with your teen: helpful, but not a replacement for ongoing guidance and open communication.
