News related to this project and an associated event.
The White House has recently opposed Anthropic's proposal to expand the use of its AI model, Mythos, to approximately 120 companies, citing security and computing power concerns. Anthropic had originally planned to add 70 new companies to the roughly 50 enterprises currently using Mythos, but the White House has raised doubts, worrying that insufficient computing power could affect the government's own usage of the model.

Launched in early April, Mythos is designed to detect and exploit critical software vulnerabilities. It is currently limited to testing by enterprises managing key infrastructure, with no plans for public release. The White House fears that expanding access to more commercial users could create a computing power bottleneck for government use of the model. This is particularly concerning given the state of Anthropic's computing power procurement agreements with Amazon, Google, and Broadcom: although contracts have been signed, the new capacity has not yet come online.

On the political front, relations between the White House and Anthropic have not eased. The Trump administration has publicly criticized Anthropic for hiring multiple former officials from the Biden administration and has expressed dissatisfaction with its ties to liberal organizations. One example highlights the trust issues between the two sides: Collin Burns, a former Anthropic researcher originally assigned to a government AI model evaluation role, was replaced by senior White House officials once they learned of his background, to avoid having AI company personnel directly involved in dealings with other AI companies.

Additionally, last week Anthropic disclosed an unauthorized access incident involving the Mythos model, further intensifying regulatory scrutiny of the company.
Anthropic and OpenAI have recently experienced security incidents in succession, drawing market attention to the security of AI models themselves. Anthropic is currently investigating a possible case of unauthorized user access to its Claude Mythos model. Almost simultaneously, OpenAI was reported to have accidentally opened access to several unreleased models within its Codex application.

Analysts believe such incidents highlight that even AI model providers focused on cybersecurity capabilities still face significant security challenges. While AI is increasingly used for cyber defense, platform security and access control are becoming critical risk points.

Industry insiders point out that these incidents have intensified scrutiny of AI companies' security governance capabilities, and also reflect that the security practices of current AI technology still need improvement amid rapid development. (The Information)
According to Decrypt, Mozilla recently revealed that Anthropic’s latest AI model, Claude Mythos, identified 271 security vulnerabilities during internal testing of the Firefox browser; all related vulnerabilities were patched this week. For comparison, a previous Anthropic model had detected only 22 security-sensitive vulnerabilities. Mozilla stated that all discovered vulnerabilities fell within the scope of what top human researchers could identify. Claude Mythos was officially launched in March 2026 and is Anthropic’s most powerful model to date for reasoning, coding, and cybersecurity. It is currently available exclusively to vetted partners—including Amazon, Apple, and Microsoft—under Anthropic’s “Project Glasswing” initiative.
Odaily News: Anthropic has decided to restrict the public release of its Mythos model because of its highly automated cyber attack capabilities. Reports indicate that during internal testing, the model was already capable of independently completing vulnerability discovery and exploitation workflows and of generating multi-step attack plans.

Informed sources stated that in early testing, Mythos could autonomously build intrusion tools targeting Linux systems and, with guidance, execute complex vulnerability chain attacks. These capabilities were assessed as potentially posing risks to global infrastructure.

Anthropic's management ultimately positioned Mythos as a cyber defense tool and opened it to select institutions for testing on a restricted basis. Industry insiders pointed out that similar models could significantly improve the efficiency of both cyber offense and defense, while also potentially introducing new security challenges. (Bloomberg)