Claude Mythos finds 271 vulnerabilities in Firefox, defenders may have a decisive advantage

MarketWhisper

Firefox security vulnerabilities

Mozilla announced on Tuesday that an early version of Anthropic’s Claude Mythos AI model identified 271 security vulnerabilities in the Firefox browser during internal testing, all of which were patched within the week. Mozilla said it was surprised by the findings, but noted that they suggest a fundamental shift may be underway in the cybersecurity landscape: defenders may be about to shrink the advantage attackers have held for years.

From 22 to 271: Claude Mythos’s security capability leap

Mozilla previously tested another Anthropic model that, in an earlier version of Firefox, identified 22 security-sensitive vulnerabilities. The discovery of 271 vulnerabilities this time represents a major jump in scale.

Mozilla emphasized that all of the vulnerabilities the system found are of a kind that “top human researchers” could also find, and that AI tools have not yet revealed entirely new categories of vulnerability that humans cannot understand. Their core advantage is speed: they greatly accelerate the process, enabling developers to identify issues before attackers can exploit them.

Claude Mythos was released in March 2026. It is Anthropic’s most advanced model to date, and internal company materials describe it as a new model that goes beyond the earlier Opus series. In pre-release testing, it found thousands of previously unknown vulnerabilities across major operating systems and web browsers.

Project Glasswing: Why access is tightly controlled

Anthropic provides limited access to Claude Mythos through its “Glasswing Program” (Project Glasswing). Access is currently limited to specific vetted technology companies such as Amazon, Apple, and Microsoft, and use cases are restricted to software vulnerability scanning.

The rationale behind this strict control is as follows: testing by a UK AI safety research institute found that Claude Mythos can autonomously carry out complex network operations, including multi-stage simulated attacks on enterprise networks without any human intervention. According to people familiar with the matter, the U.S. National Security Agency (NSA) has deployed and is running a preview version of Claude Mythos on classified networks, even though the Trump administration had called for a halt to the use of Anthropic’s technology.

A double-edged sword: The same capabilities can accelerate cyberattacks

Mozilla’s results have far-reaching implications on both sides. Security researchers warn that AI systems able to analyze code at scale can automatically identify exploitable vulnerabilities in widely used software. In the hands of bad actors, such capabilities could pose an unprecedented cybersecurity threat to software companies and users, and may even give rise to a new generation of automated cyberattacks.

Frequently Asked Questions

What kinds of issues are the 271 vulnerabilities Claude Mythos found in Firefox?

According to Mozilla, they are real security-sensitive vulnerabilities of a kind that “top human researchers” could also find; AI tools have not yet revealed entirely new categories of vulnerability that humans cannot understand. Their advantage lies in conducting large-scale systematic scanning far faster than manual review, and all of the issues were fully fixed within the week.

What is the purpose of the Glasswing Program, and which organizations can use Claude Mythos?

The Glasswing Program is Anthropic’s controlled-access program. Currently, only a limited number of vetted technology companies such as Amazon, Apple, and Microsoft are allowed to use Claude Mythos, with use restricted to software security vulnerability scanning. This restriction reflects Anthropic’s high level of caution about the model’s dual-use risks.

What are the broader, far-reaching implications of this discovery for the cybersecurity industry as a whole?

Mozilla said the emergence of AI tools may give defenders, for the first time, an opportunity to shrink the advantage attackers have long held and achieve a “decisive victory.” However, researchers also warn that the same capabilities can be used by attackers, accelerating the scale and efficiency of automated cyberattacks. Controlling access to AI security tools is therefore crucial.


Related Articles

DeepSeek V4 Pro with Ollama Cloud: One-click integration with Claude Code

According to an Ollama tweet, DeepSeek V4 Pro was released on 4/24, has been added to the Ollama catalog in cloud mode, and can call tools such as Claude Code, Hermes, OpenClaw, OpenCode, and Codex with a single command. V4 Pro: 1.6T parameters, 1M context, Mixture-of-Experts; cloud inference does not download local weights. To run it locally, you need to obtain the weights yourself and run them with INT4/GGUF on multiple GPUs. Early speed tests were affected by cloud load; typical performance is about 30 tok/s, with a peak of 1.1 tok/s. The recommendation is to prototype in the cloud first and, for production, run inference yourself or use a commercial API.

ChainNewsAbmedia · 14m ago

DeepSeek Cuts V4-Pro Prices by 75%, Slashes API Cache Costs to One-Tenth

Gate News report, April 27 — DeepSeek announced a 75% discount on its new V4-Pro model for developers and reduced input cache-hit prices across its API lineup to one-tenth of previous levels. The V4 model, released on April 25 in Pro and Flash versions, has been optimized for Huawei's Ascend

GateNews · 16m ago

Coachella turns to Google’s DeepMind AI to reimagine concerts beyond the stage

Coachella has partnered with Google DeepMind to test new AI tools that reshape how live music performances are created and experienced. Summary Coachella has tested AI tools with Google DeepMind to turn live performances into interactive digital environments. Three prototypes were built,

Cryptonews · 23m ago

Guo Ming-chi: OpenAI wants to build an AI Agent phone; MediaTek, Qualcomm, and Luxshare Precision are key in the supply chain

Guo Ming-chi claims that OpenAI is working with MediaTek, Qualcomm, and Luxshare Precision to develop an AI Agent phone, with mass production expected in 2028. The new phone will be centered on task completion: an AI agent will understand and execute requests, combining cloud and on-device computing, with a focus on sensing and contextual understanding. The specifications and supply chain list are expected to be finalized in 2026–2027; if it takes shape, it could bring a new upgrade cycle to the high-end market, and Luxshare may become a major beneficiary.

ChainNewsAbmedia · 32m ago

IEA: AI infrastructure spending has already surpassed investment in oil and gas production, and is expected to increase another 75% in 2026

According to analysis and market data published by the International Energy Agency (IEA) on April 26, the combined capital expenditures of the world’s top five technology companies in 2025 exceeded $400 billion, with most of the spending going toward building AI infrastructure. That scale has already surpassed the annual investment level of global oil and natural gas production. The IEA estimates that related capital expenditures may increase by a further 75% in 2026.

MarketWhisper · 1h ago

Senator Bernie Sanders Issues Warning on AI's Existential Threat

Sanders stressed that even as most AI scientists acknowledge the possibility of AI escaping control and becoming a danger to our existence, no major measures have been taken to avoid it. “We must make certain that AI benefits humanity, not hurts us,” he stated. Key Takeaways: Bernie Sanders

Coinpedia · 1h ago