OpenAI and Anthropic logos are seen in this illustration taken on September 12, 2025. REUTERS/Dado Ruvic/Illustration
Dec 3 (Reuters) – The safety practices of major artificial intelligence companies, such as Anthropic, OpenAI, xAI and Meta, are “far short of emerging global standards,” according to a new edition of the Future of Life Institute’s AI safety index released on Wednesday.
The institute said the safety evaluation, conducted by an independent panel of experts, found that while the companies were busy racing to develop superintelligence, none had a robust strategy for controlling such advanced systems.
“Despite recent uproar over AI-powered hacking and AI driving people to psychosis and self-harm, US AI companies remain less regulated than restaurants and continue lobbying against binding safety standards,” said Max Tegmark, MIT professor and Future of Life Institute president.
A Google DeepMind spokesperson said the company will “continue to innovate on safety and governance at pace with capabilities” as its models become more advanced, while xAI said “Legacy media lies”, in what seemed to be an automated response.
Anthropic, OpenAI, Meta, Z.ai, DeepSeek, and Alibaba Cloud did not immediately respond to requests for comment on the study.
Reporting by Zaheer Kachwala in Bengaluru, additional reporting by Arnav Mishra in Bengaluru; Editing by Shinjini Ganguli