OpenAI is rolling out new parental oversight tools for ChatGPT, giving parents the ability to monitor and customize how their teenagers interact with one of the most widely used AI chatbots on the internet.
The update, announced Monday, allows parents to link their ChatGPT account with their teen’s and set personalized limits for a safer, more age-appropriate experience.
The launch follows a lawsuit filed in San Francisco by the parents of a 16-year-old who died by suicide; the suit alleges that ChatGPT encouraged him to do so.
How the Controls Work
Parents can now invite their teens to connect accounts—or accept invitations from them—and once linked, gain access to a parental control dashboard. From there, they can:
- Adjust key settings;
- View usage options; and
- Receive alerts if the teen unlinks the account.
Connected teen accounts will automatically include extra safety layers, such as filtering out:
- Graphic or violent content,
- Romantic or sexual roleplay,
- Viral challenges, and
- Unrealistic beauty ideals.
Parents can choose to relax these filters—but teens cannot make those changes themselves.
Key Features for Parents
Once activated, the control center lets parents:
- Set quiet hours, limiting when ChatGPT can be used;
- Disable voice mode;
- Turn off memory, so past chats aren’t stored or referenced;
- Remove image generation capabilities; and
- Opt out of model training, ensuring their teen’s conversations aren’t used to improve OpenAI’s models.
“These parental controls are a good starting point,” said Robbie Torney, senior director for AI programs at Common Sense Media. “But they work best when paired with honest conversations about responsible AI use, clear family tech rules, and ongoing involvement.”
Preventing Overreliance on AI
Experts view this as a necessary step—but not a complete solution.
Alex Ambrose of the Information Technology and Innovation Foundation noted that while these tools are valuable, “not every child lives in a home with parents who can monitor online activity.”
Vasant Dhar, NYU professor and author of Thinking With Machines, added, “If kids know their interactions are being monitored, they’re less likely to stray into trouble.”
Former FBI agent Eric O’Neill, now a cybersecurity expert, emphasized that early limits can prevent dependency:
“AI is powerful, but too much too soon can stifle creativity. There’s something magical about staring at a blank page and coming up with your own ideas.”
Critics Question the Motive
Some experts believe the update is driven more by legal pressure than user safety.
Lisa Strohman, founder of Digital Citizen Academy, said, “They’re putting out something better than nothing—but it feels like a risk mitigation move. You can’t outsource parenting.”
AI ethicist Peter Swimm, founder of Toilville, went further:
“These controls are woefully inadequate. They exist to shield OpenAI from lawsuits, not to truly protect kids.”
Swimm, who refuses to let his own 11-year-old use AI unsupervised, warned that left unchecked, these systems can easily reinforce harmful behaviors.
Why AI Companionship Can Be Risky
With teens increasingly turning to AI chatbots not just for schoolwork but also for emotional support, experts stress that parental controls are vital.
“Just like video game or movie ratings, we need clear guardrails for AI,” said Giselle Fuerte, CEO of Being Human With AI. “These systems are designed to engage users deeply, without considering age or maturity.”
Yaron Litwin, CMO at Canopy, added, “Chatbots can influence kids with confidence-filled mistakes or subtle biases. Controls help minimize the risk—but don’t eliminate it.”
Setting Healthy Boundaries
Ultimately, parental controls aren't meant to block access entirely, but to set balanced limits around technology that "never says no."
“These systems are always on and always agreeable,” explained David Proulx, co-founder and chief AI officer at HoloMD. “That’s risky for a vulnerable child. Filters alone aren’t enough—we need smarter, behavior-focused guardrails.”
Bottom line: OpenAI’s new parental controls mark a meaningful step toward safer AI use for teens—but experts agree they’re only part of the solution. Real safety will depend on engaged parents, open conversations, and technology that puts human judgment first.