Anthropic’s weapon-grade cybersecurity model Mythos was accessed without authorization: how did they do it?

ChainNewsAbmedia

Bloomberg reports that a private forum group publicly announced that same day that it had broken through the restrictions on Mythos, one of Anthropic’s security models, by using access permissions held by a third-party contractor to enter the system and use the model, raising outside concerns about the safety governance of top-tier AI models.

(Anthropic launched its global cybersecurity initiative Glasswing, so why isn’t the new model Mythos open to the public?)

Mythos was hit by unauthorized access on its first day online

On April 7, Anthropic announced a new cybersecurity AI model, Claude Mythos; however, a private online forum group, whose identity has not yet been made public, reportedly obtained quiet access to the model that same day.

According to reports, the group did not break in using traditional hacking methods. Instead, it leveraged knowledge of Anthropic’s past model URL formats to infer where Mythos would sit within the system. The key loophole was a staff member at one of Anthropic’s third-party contractors: he already held legitimate authorization to access Anthropic’s AI models, and the forum group entered the system through this otherwise compliant entry point.
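
The report does not spell out the technical details, but the general idea it describes, guessing where a new model lives from a vendor’s past URL naming patterns, can be illustrated with a minimal sketch. Everything in the example below is hypothetical: the host, the path template, and the name fragments are invented for illustration and do not come from the article or from Anthropic.

```python
# Hypothetical sketch of pattern-based endpoint guessing.
# The host, path template, and name fragments are invented placeholders;
# the article does not disclose Anthropic's actual URL formats.
from itertools import product

BASE_URL = "https://api.example-ai-vendor.com"        # placeholder host
PATH_TEMPLATE = "/v{version}/models/{name}-{suffix}"  # pattern inferred from earlier releases

versions = [1, 2]
names = ["mythos"]
suffixes = ["preview", "latest", "2025-04-07"]

# Expand the template into a list of plausible candidate URLs.
candidates = [
    BASE_URL + PATH_TEMPLATE.format(version=v, name=n, suffix=s)
    for v, n, s in product(versions, names, suffixes)
]

for url in candidates:
    print(url)
```

The point of the sketch is that nothing is “exploited” in the traditional sense: the approach only needs a predictable naming convention plus a credential that is already authorized, which is why the incident reads as abuse of access rather than a break-in.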

The group later provided Bloomberg with screenshots and a live demonstration of its actions as proof, and revealed that it has continued using Mythos to this day. However, it stressed that its purpose was only “to tinker with a new model” and that, not wanting to be discovered, it had no intention of carrying out any destructive activity.

What is Mythos? Why has it raised concerns from the outside world?

Claude Mythos is an AI model built by Anthropic specifically for enterprise cybersecurity defense. The team defines it as a tool that is “too powerful to be suitable for public release.” Its core capability is proactively identifying security vulnerabilities in digital systems so that enterprises can patch them before they are attacked.

However, this defensive tool is also a double-edged sword. Anthropic acknowledged that if Mythos fell into the hands of malicious actors, its capabilities could just as easily be used to launch attacks. The company therefore opens Mythos, through a cybersecurity initiative called “Project Glasswing,” only to a small number of major institutions and technology companies that have passed strict review.

The core assumption behind this closed governance mechanism is that trusted partners can keep their access permissions from leaking.

(Anthropic Mythos raises regulatory concerns, and executives at Bestent and Powell’s banks hold an emergency meeting)

Anthropic’s response: We’re investigating; there’s no impact

In response, Anthropic said: “We are investigating a report claiming that Claude Mythos Preview was accessed without authorization through a third-party provider environment.” The company emphasized that, at present, it has not found that its own systems have been affected, and the incident is initially believed to be “more likely abuse of access permissions than an external hacking attack.”

Even if users who got early access to Mythos have not engaged in malicious behavior, the incident itself still has cybersecurity experts on high alert. Raluca Saceanu, CEO of the cybersecurity company Smarttech247, pointed out:

Once powerful AI tools are accessed or used outside established governance mechanisms, the risk is not limited to a cybersecurity incident; it could also raise concerns about fraud, cyber abuse, or other malicious uses.

What impact will this have? Weak points in AI security controls

What truly concerns people about this incident is not that anyone attempted sabotage, but the systemic weakness it reveals: when an AI company hands access to highly sensitive models to third-party vendors, a lapse at any link in the control chain can become a loophole and trigger a crisis.

Now, the Mythos incident serves as a reminder to the entire industry that, as AI capabilities advance rapidly, security architecture cannot rely on trust alone; it also needs institutional resilience that can withstand the failure of trust. For Anthropic, rebuilding public confidence in its partner control mechanisms will be a longer-term challenge than the investigation itself.
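
One concrete form such resilience can take is deny-by-default, per-model authorization tied to individual identities, with every decision logged for review, so that an over-broad or leaked partner credential cannot reach a restricted model unnoticed. The sketch below is purely illustrative: the principal structure, scope names, and partner identifiers are assumptions, not a description of Anthropic’s actual controls.

```python
# Hypothetical sketch of a deny-by-default access check for a restricted model endpoint.
# The principal structure, scope names, and partner identifiers are invented for
# illustration; the article does not describe Anthropic's actual access-control design.
from dataclasses import dataclass, field
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("model-access-audit")

@dataclass(frozen=True)
class Principal:
    org: str                                         # contracting organization
    user: str                                        # individual identity, not a shared org key
    scopes: frozenset = field(default_factory=frozenset)

ALLOWED_ORGS = {"reviewed-partner-a"}                # partners vetted under the access program
REQUIRED_SCOPE = "mythos:invoke"                     # per-model scope, not a blanket "models:*"

def authorize(principal: Principal, model: str) -> bool:
    """Deny by default; log every decision for later audit."""
    allowed = (
        principal.org in ALLOWED_ORGS
        and REQUIRED_SCOPE in principal.scopes
    )
    audit.info("model=%s org=%s user=%s allowed=%s",
               model, principal.org, principal.user, allowed)
    return allowed

# A contractor credential from a trusted org but without the model-specific scope
# is still rejected, because organizational trust alone is not sufficient.
contractor = Principal(org="reviewed-partner-a", user="contractor-123",
                       scopes=frozenset({"models:list"}))
print(authorize(contractor, "mythos"))   # False
```

The design choice worth noting is that the check keys on an individual identity and a model-specific scope rather than on organizational trust alone, which is precisely the gap the Mythos incident exposes.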

This article, “Anthropic’s weapon-grade cybersecurity model Mythos was accessed without authorization: how did they do it?”, first appeared on Chain News ABMedia.
