Character.AI announced today that it is introducing parental controls for teenage users and has implemented several safety measures over the past few months. These changes follow increased media scrutiny and two lawsuits alleging the platform contributed to instances of self-harm and suicide.
According to the company’s press release, Character.AI has developed two distinct versions of its large language model (LLM): one tailored for adult users and another for teenagers. The teen-specific model incorporates stricter guidelines, particularly around sensitive and romantic content. The system now more aggressively blocks responses that could be deemed suggestive or inappropriate while also identifying and restricting prompts intended to elicit such content. In cases where language referencing self-harm or suicide is detected, a pop-up message will direct users to the National Suicide Prevention Lifeline, a change previously reported by The New York Times.
The platform has also restricted teenagers’ ability to edit chatbot responses. The editing feature, which remains available to adults, lets users rewrite parts of a conversation, potentially reintroducing content the system would otherwise block.
In response to additional concerns raised by the lawsuits, Character.AI is working on features to address issues like user addiction and confusion over whether the bots are real people. Moving forward, the platform will display a notification when a user has spent an hour-long session with the chatbots. Additionally, disclaimers clarifying that interactions are fictional have been updated with more explicit language. For instance, bots with labels such as “therapist” or “doctor” now include an extra warning that they cannot provide professional advice.
Currently, all bots on the platform display a notice stating: “This is an A.I. chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice.” Bots with therapeutic themes, such as one labeled “Therapist,” display a yellow warning box emphasizing that they are not licensed professionals and are no substitute for professional advice.
Character.AI also confirmed that parental control features will roll out in the first quarter of next year. These controls will provide parents with information about how much time their child spends on the platform and which chatbots they interact with most frequently. The company stated that these changes are being developed in collaboration with teen online safety experts, including the organization ConnectSafely.
Character.AI, created by former Google employees who have since rejoined Google, allows users to interact with chatbots powered by custom-trained language models. The platform hosts a wide range of bots, including life coaches and simulations of fictional characters, which have gained popularity among teenage users. Accounts can currently be created by individuals aged 13 and older.
The lawsuits against Character.AI claim that while many interactions are harmless, some teenage users have formed compulsive attachments to the bots, and conversations occasionally touch on sensitive topics such as sexualized content or self-harm. Plaintiffs have also criticized the platform for failing to direct users to mental health resources during discussions involving self-harm or suicidal ideation.
Character.AI emphasized its ongoing commitment to improving user safety as the platform evolves. “We recognize that our approach to safety must evolve alongside the technology that drives our product — creating a platform where creativity and exploration can thrive without compromising safety,” the company said in its press release. “This suite of changes is part of our long-term commitment to continuously improve our policies and our product.”