Common Sense Media, a nonprofit that reviews media and technology with a focus on child safety, has released a new risk assessment of Google’s Gemini AI. The report, published Friday, raises concerns about how safe the product is for younger users.
On the positive side, the organization noted that Gemini makes it clear to kids that it is a computer — not a friend — a step that can reduce risks of delusional thinking or emotional dependence. Still, the report concludes that Gemini has significant shortcomings when it comes to protecting children.
Concerns About Kids’ Versions
According to Common Sense, Gemini’s “Under 13” and “Teen Experience” modes are essentially the adult version of Gemini with some safety filters added. The group argues that a truly safe AI for children needs to be designed from the ground up with kids’ needs in mind, rather than adapted from adult products.
The analysis found that Gemini can still share inappropriate or unsafe material with young users — including information about sex, drugs, alcohol, and risky mental health advice. This is especially troubling given recent reports of AI’s role in teen suicides. For example:
- OpenAI is facing a wrongful death lawsuit after a 16-year-old allegedly consulted ChatGPT about suicide plans.
- AI companion app Character.AI has also been sued in connection with a teenager’s death.
Apple Connection Raises Stakes
The findings arrive amid reports that Apple may use Gemini as the large language model behind its upgraded Siri, expected next year. If that happens, Common Sense warns, even more teens could be exposed to these risks unless Apple adds further safety measures.
“High Risk” Label
Because Gemini’s kid- and teen-focused products don’t sufficiently account for different developmental stages, Common Sense rated both versions as “High Risk.”
“Gemini gets some basics right, but it stumbles on the details,” said Robbie Torney, Senior Director of AI Programs at Common Sense. “An AI platform for kids should meet them where they are, not take a one-size-fits-all approach. For AI to be safe and effective for kids, it must be designed with their development in mind, not just be a tweaked version of an adult product.”
Google’s Response
Google pushed back on the assessment, emphasizing that it already has policies and safeguards for users under 18, including red-teaming and input from external experts. The company acknowledged, however, that some Gemini responses had not worked as intended and said it has added extra protections.
Google also pointed out that Gemini blocks conversations that mimic real relationships and suggested that Common Sense may have tested features not available to under-18 users. Still, Google could not verify that claim, since it did not have access to the exact prompts Common Sense used in its tests.
Broader Context
Common Sense has conducted similar reviews of other AI platforms:
- Meta AI and Character.AI were deemed “unacceptable” (severe risk).
- Perplexity was rated “high risk.”
- ChatGPT was considered “moderate risk.”
- Claude (meant for adults 18+) was labeled “minimal risk.”