Elon Musk’s AI company, xAI, has quietly missed its own deadline to publish a promised AI safety framework, raising fresh concerns about the company’s commitment to responsible development. Watchdog group The Midas Project flagged the lapse after xAI failed to follow through on a public pledge made at the AI Seoul Summit earlier this year.
Despite Musk’s frequent public warnings about the risks of advanced AI systems, xAI’s track record on safety has been far from reassuring. Its flagship chatbot, Grok, has made headlines for the wrong reasons, including undressing photos of women on request. Grok is also known to use explicit language far more freely than rivals such as ChatGPT and Gemini.
At the Seoul Summit in February, xAI released a draft version of its AI safety framework. The eight-page document outlined basic safety principles and benchmarking plans for future AI models. Even then, the draft had major gaps: it applied only to future models rather than anything xAI currently offers, and it lacked key details on how the company would actually identify and mitigate AI risks, a crucial component of the voluntary safety agreement xAI signed at the summit.
The company had committed to releasing a final, updated framework within three months, a deadline that fell on May 10. That date has now passed with no new safety documentation published and no explanation on xAI’s website or social media.
Critics say this silence is part of a bigger pattern. A recent study by SaferAI, a nonprofit focused on holding AI labs accountable, gave xAI one of the lowest scores among its peers. The group cited “very weak” risk management and transparency practices.
Still, xAI isn’t alone in its shortcomings. Other leading AI firms, including Google and OpenAI, have also faced criticism for rushing safety testing and for publishing model safety reports slowly or not at all. Experts warn that the industry’s drift away from thorough safety processes comes at a risky time, with AI capabilities advancing at breakneck speed.
The growing gap between rhetoric and action in AI safety isn’t just a reputational issue. It could soon become a regulatory one, as governments around the world begin drafting rules to enforce transparency and safety standards in the AI race.