GetChain News

Regulation/Compliance


The White House Opposes Anthropic’s Expansion of Mythos Usage to 120 Companies, Citing Concerns Over Insufficient Computing Power

The White House has recently opposed Anthropic's proposal to expand the use of its AI model, Mythos, to approximately 120 companies, citing security and computing power concerns. Anthropic had planned to add 70 new companies to the roughly 50 enterprises currently using Mythos, but the White House objected, worrying that insufficient computing power might affect the government's own use of the model.

Launched in early April, Mythos is designed to detect and exploit critical software vulnerabilities. It is currently limited to testing by enterprises managing key infrastructure, with no plans for public release. The White House fears that expanding access to more commercial users could create a computing power bottleneck for government use of the model. This concern is heightened by the status of Anthropic's computing power procurement agreements with Amazon, Google, and Broadcom: although the contracts have been signed, the new capacity has not yet come online.

On the political front, relations between the White House and Anthropic have not eased. The Trump administration has publicly criticized Anthropic for hiring multiple former Biden administration officials and expressed dissatisfaction with the company's ties to liberal organizations. One episode highlights the trust issues between the two sides: Collin Burns, a former Anthropic researcher originally assigned to a government AI model evaluation role, was removed by senior White House officials once they learned of his background, to avoid having AI company personnel directly involved in dealings with other AI companies.

Additionally, last week Anthropic disclosed an unauthorized access incident involving the Mythos model, further intensifying regulatory scrutiny of the company.

Analysis: Anthropic and OpenAI Exposed Security Vulnerabilities in Succession, Raising Concerns Over AI Model Safety

Anthropic and OpenAI have experienced security incidents in quick succession, drawing market attention to the security of AI models themselves. Anthropic is currently investigating a possible case of unauthorized user access to its Claude Mythos model. Almost simultaneously, OpenAI was reported to have accidentally opened access to several unreleased models within its Codex application.

Analysts believe such incidents show that even AI model providers focused on cybersecurity capabilities still face significant security challenges. As AI is increasingly used for cyber defense, platform security and access control are becoming critical risk points.

Industry insiders note that these incidents have intensified scrutiny of AI companies' security governance capabilities and reflect that, amid rapid development, the security systems of current AI technology still need improvement. (The Information)

MAS Warns Banks to Strengthen Cybersecurity Defenses Against Risks Posed by the Proliferation of Anthropic’s Mythos AI Model

According to Cointelegraph, the Monetary Authority of Singapore (MAS) has urged banks to strengthen their cybersecurity defenses amid heightened regulatory attention triggered by the spread of Anthropic’s Mythos AI model across Asia.