A new edition of the Future of Life Institute’s AI Safety Index, released Wednesday, concludes that major artificial intelligence developers — including OpenAI, Anthropic, xAI and Meta — are still falling “far short of emerging global standards” for safety.
According to the nonprofit, an independent panel of experts reviewed the companies’ practices and found that none have established a sufficiently robust strategy for controlling the superintelligent systems they are racing to develop.
“Despite recent uproar over AI-powered hacking and AI driving people to psychosis and self-harm, U.S. AI companies remain less regulated than restaurants and continue lobbying against binding safety standards,” said Max Tegmark, MIT professor and president of the Future of Life Institute.
Founded in 2014 and supported early on by Elon Musk, the institute has long warned about the societal and existential risks posed by increasingly capable AI systems.
Speaking at the San Diego Alignment Workshop, FLI safety investigator Sabina Nong said the evaluation revealed a clear split in how companies approach safety. “We see two clusters of companies in terms of their safety promises and practices,” she noted. “Three companies are leading: Anthropic, OpenAI, Google DeepMind, in that order. The next tier includes five other companies.”
Even among the leaders, scores remain low: Anthropic, the top performer, received only a “C+,” while Alibaba Cloud ranked last with a “D-.” The index assessed 35 safety indicators across six categories, including risk-assessment practices, information-sharing protocols, whistleblower protections, and support for safety research.
Tegmark said the report shows companies are accelerating toward a risky future largely because regulatory frameworks are still weak. “The only reason there are so many C’s, D’s and F’s in the report is because there are fewer regulations on AI than on making sandwiches,” he told NBC News, pointing to the stark contrast between AI oversight and established food-safety laws.
The report urges companies to adopt several measures: greater transparency around internal evaluations, the use of independent safety reviewers, stronger protections against AI-induced harm, and reduced lobbying against regulation.
Industry responses were mixed. A Google DeepMind spokesperson said the company will “continue to innovate on safety and governance at pace with capabilities.” xAI issued an automated-seeming reply that read, “Legacy media lies.” OpenAI said it shares safety frameworks and evaluations to help raise industry standards, adding that it invests heavily in frontier safety research and conducts “rigorous” testing of its models.
The findings arrive amid growing concern about the real-world impact of advanced AI models, following several reports linking AI chatbots to incidents of self-harm and suicide.