# Anthropic Announces Claude Mythos Preview — Its Most Powerful AI Model Yet

## Key Points
- Anthropic announced Claude Mythos Preview on April 7, 2026, describing it as a "step-change" in AI capability
- The model will not be released to the public due to its advanced cybersecurity capabilities
- Access is restricted to 12 founding partners through Project Glasswing, a controlled, defensive-only initiative
- Mythos Preview sits in a new "Capybara" tier above the existing Claude Opus product line
- Anthropic simultaneously published a detailed system card outlining the model's capabilities and safety evaluations

## What Was Announced
On April 7, 2026, Anthropic publicly confirmed what leaked documents had hinted at weeks earlier: the company has developed an AI model that represents a generational leap in capability, particularly in autonomous cybersecurity tasks.
Claude Mythos Preview — the official name for a model internally codenamed "Capybara" — breaks from Anthropic's established pattern of progressively releasing more capable Claude models to the public. Instead, Anthropic announced that the model would be made available only through a new controlled-access program called Project Glasswing.

## The Capybara Tier
According to Anthropic's announcement, Mythos Preview occupies a new tier in the Claude model hierarchy, above the existing Opus line. The leaked internal documents that surfaced in late March 2026 referred to this tier by the codename "Capybara," distinguishing it from the "Sonnet" and "Opus" tiers that Claude users are familiar with.
The naming signals Anthropic's assessment that this isn't simply a better version of an existing model — it's a fundamentally different class of capability. The benchmark data supports this framing: Mythos Preview's scores are not incrementally better than its predecessors' but categorically different.

## Why Anthropic Chose Not to Release It
In an unusual move for a company that has steadily expanded public access to its AI models, Anthropic explicitly stated that it has no plans to make Mythos Preview generally available. The company's reasoning centers on the model's cybersecurity capabilities:
- The model can autonomously identify zero-day vulnerabilities in critical software
- It can develop working proof-of-concept exploits from those discoveries
- The speed and scale at which it operates could outpace current defensive measures if broadly available
- Responsible deployment requires limiting access to organizations that can use it defensively
The decision sets a significant precedent in the AI industry — a major lab voluntarily restricting access to a model not because it falls short for users, but because it is too capable in specific domains.

## Project Glasswing
The other half of the announcement was Project Glasswing, the initiative through which Anthropic provides Mythos Preview to selected organizations. The 12 founding partners include major technology companies (AWS, Apple, Google, Microsoft, NVIDIA), cybersecurity firms (CrowdStrike, Palo Alto Networks), financial institutions (JPMorgan Chase), and infrastructure organizations (Cisco, Broadcom, The Linux Foundation).
Anthropic reports that over 40 additional organizations with responsibility for critical software infrastructure have also been granted access, though this list has not been publicly disclosed.

## What This Means for the AI Industry
The Mythos Preview announcement raises several important questions for the broader AI landscape:
For competitors: The benchmark results suggest that Anthropic has achieved a meaningful capability lead, at least in cybersecurity-relevant domains. How OpenAI, Google, and others respond — particularly regarding their own models' cybersecurity capabilities — remains to be seen.
For AI governance: The decision to voluntarily restrict a model's release sets an interesting precedent. Whether this will influence how regulators and policymakers think about AI deployment controls is an open question.
For users: The publicly released Claude Opus 4.7 (April 16, 2026) offers the improvements in reasoning and software engineering that many users are looking for, even without the specialized cybersecurity capabilities. Anthropic appears to be pursuing a bifurcated strategy: broad public models for general use, restricted models for high-stakes applications.
We will continue tracking all developments. Follow our timeline for the latest updates.