
22 Firefox Security Vulnerabilities Found Through AI Research


On Friday, Anthropic revealed that it discovered 22 new security vulnerabilities in the Firefox web browser through a security partnership with Mozilla.

Researchers classified 14 of the vulnerabilities as high severity, seven as moderate, and one as low. Mozilla addressed the issues in Firefox 148, released late last month. Anthropic identified the vulnerabilities during a two-week testing period in January 2026.

Claude Opus 4.6 Identifies High-Severity Bugs

The artificial intelligence (AI) company noted that the number of high-severity bugs discovered by its Claude Opus 4.6 large language model (LLM) represents “almost a fifth” of all high-severity vulnerabilities patched in Firefox throughout 2025.

During the analysis, the Claude Opus 4.6 model detected a use-after-free bug in Firefox’s JavaScript engine after “just” 20 minutes of exploration. Subsequently, a human researcher validated the finding in a virtualized environment to rule out the possibility of a false positive.
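The validation step described above — replaying a candidate crash in an isolated environment and checking the sanitizer output — can be sketched roughly as follows. Everything here is hypothetical: the `triage` harness and its classification labels are illustrative stand-ins, not Mozilla tooling; real triage would run the testcase against an AddressSanitizer-instrumented build of the Firefox JS shell inside a VM:

```python
import subprocess

def triage(js_shell: str, testcase: str, timeout: int = 30) -> str:
    """Run a candidate crash testcase in a sanitizer-instrumented JS shell
    and classify the result. Hypothetical sketch, not Mozilla's workflow."""
    try:
        proc = subprocess.run(
            [js_shell, testcase],
            capture_output=True, text=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return "hang"
    report = proc.stderr
    if "heap-use-after-free" in report:
        return "use-after-free"       # matches the reported bug class
    if "AddressSanitizer" in report:
        return "other-memory-error"   # real crash, different class
    return "no-crash"                 # likely a false positive
```

A finding would only be considered confirmed when the triage result matches the bug class the model reported; anything else is treated as a potential false positive.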

“By the end of this effort, we had scanned nearly 6,000 C++ files and submitted a total of 112 unique reports, including the high- and moderate-severity vulnerabilities mentioned above,” the company said. “Most issues have been fixed in Firefox 148, with the remainder to be fixed in upcoming releases.”

In addition, the AI startup provided its Claude model with access to the complete list of vulnerabilities submitted to Mozilla. Researchers then tasked the AI system with developing practical exploits for those flaws.

Despite running the test several hundred times and spending about $4,000 in API credits, the company reported that Claude Opus 4.6 successfully converted the security flaw into a working exploit in only two cases.

According to Anthropic, this outcome highlights two important insights. First, identifying vulnerabilities costs significantly less than developing exploits. Second, the AI model demonstrates stronger performance in vulnerability discovery than in exploitation.

“However, the fact that Claude could succeed at automatically developing a crude browser exploit, even if only in a few cases, is concerning,” Anthropic emphasized.

The company clarified that the exploits worked only within its controlled testing environment, where researchers had intentionally removed certain security protections, such as sandboxing.

Task Verifier Improves Exploit Development

To strengthen the testing process, Anthropic integrated a task verifier into the workflow. This component determines whether a generated exploit actually works, providing real-time feedback as the AI explores the target codebase.

As a result, the system can iterate and refine its results until it produces a functional exploit.
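The generate-verify-iterate loop described above can be sketched as a simple feedback cycle. This is a minimal illustration under stated assumptions — Anthropic has not published its harness, so the `generate` and `verify` callables and the attempt budget are hypothetical stand-ins:

```python
from typing import Callable, Optional, Tuple

def exploit_loop(
    generate: Callable[[str], str],            # model call: feedback -> candidate exploit
    verify: Callable[[str], Tuple[bool, str]], # task verifier: candidate -> (works?, feedback)
    max_attempts: int = 10,
) -> Optional[str]:
    """Iterate candidate exploits against a task verifier until one works
    or the attempt budget is exhausted. Hypothetical sketch."""
    feedback = "initial attempt"
    for _ in range(max_attempts):
        candidate = generate(feedback)
        works, feedback = verify(candidate)
        if works:
            return candidate  # verifier confirmed a functional exploit
    return None               # budget exhausted without a working exploit
```

The key design point from the article is that the verifier's pass/fail signal feeds back into the next generation attempt, letting the model refine its approach instead of guessing blindly — which is also why most runs can still end without a working exploit.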

One example involved CVE-2026-2796, a vulnerability with a CVSS score of 9.8, which researchers described as a just-in-time (JIT) miscompilation affecting the WebAssembly component of the JavaScript engine.

The disclosure arrived weeks after Anthropic introduced Claude Code Security, a new tool currently available as a limited research preview. The system aims to help developers fix vulnerabilities using an AI agent.

“We can’t guarantee that all agent-generated patches that pass these tests are good enough to merge immediately,” Anthropic said. “But task verifiers give us increased confidence that the produced patch will fix the specific vulnerability while preserving program functionality—and therefore achieve what’s considered to be the minimum requirement for a plausible patch.”

Mozilla Highlights Power of AI-Assisted Security Analysis

In a coordinated announcement, Mozilla also confirmed that the AI-assisted approach uncovered 90 additional bugs, most of which engineers have already fixed.

These issues included assertion failures, which often overlap with vulnerabilities traditionally discovered through fuzzing, as well as distinct logic errors that fuzzing tools failed to detect.

“The scale of findings reflects the power of combining rigorous engineering with new analysis tools for continuous improvement,” the browser maker said. “We view this as clear evidence that large-scale, AI-assisted analysis is a powerful new addition to security engineers’ toolbox.”

Source: TheHackerNews
