Anthropic is testing the most powerful AI model it has ever built, and the world wasn’t supposed to know about it yet.
A data leak reported by Fortune on Thursday revealed that the AI lab behind Claude has been training a new model called “Mythos,” which it describes internally as “by far the most powerful AI model we’ve ever developed.”
The model’s existence came to light through a draft blog post left in an unsecured, publicly searchable data cache, alongside nearly 3,000 other unpublished assets, according to cybersecurity researchers who reviewed the material.
Anthropic confirmed the model’s existence after Fortune’s inquiry, calling it “a step change” in AI performance and “the most capable we’ve built to date.” The company said it is being tested by “early access customers” and acknowledged that a “human error” in its content management system caused the leak.
The draft blog post introduced a new model tier called “Capybara,” described as larger and more capable than Anthropic’s existing Opus models, previously the company’s most powerful.
“Compared to our previous best model, Claude Opus 4.6, Capybara scores dramatically higher in tests of software coding, academic reasoning, and cybersecurity, among others,” the draft said.
It is the cybersecurity dimension that matters most to the crypto industry. The draft blog post said the model “poses unprecedented cybersecurity risks,” a framing with direct implications for blockchain security, smart contract auditing, and the escalating arms race between attackers and defenders in DeFi.
This week alone, Ripple announced an AI-powered security overhaul for the XRP Ledger after an AI-assisted red team uncovered more than 10 vulnerabilities in its 13-year-old codebase. Ethereum launched a dedicated post-quantum security hub backed by eight years of research.
And the Resolv stablecoin lost its peg after an attacker exploited a minting contract that lacked oracle checks and relied on single-key access control, the kind of infrastructure flaw that more capable AI tools could identify before an attacker does, or exploit faster than defenders can react.
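The flaw pattern described above is simple enough to sketch. The following Python model is a hypothetical illustration of that general pattern, not Resolv’s actual contract code; all names and logic here are assumptions made for clarity. It shows why the combination is dangerous: with no independent price oracle, the contract trusts a caller-supplied price, and with a single authorized key, one compromised credential grants unlimited minting.

```python
# Hypothetical sketch of the flaw pattern -- NOT Resolv's actual code.
# Weakness 1: mint price comes from the caller, not an oracle.
# Weakness 2: authorization rests on a single key.

class VulnerableMinter:
    def __init__(self, owner: str):
        self.owner = owner  # single-key access control: one leaked key = total control
        self.balances: dict[str, int] = {}

    def mint(self, caller: str, to: str, collateral: int, claimed_price: int) -> int:
        if caller != self.owner:  # the ONLY check performed
            raise PermissionError("not owner")
        # No oracle check: the contract trusts the caller-supplied price,
        # so whoever holds the key can mint arbitrarily many tokens.
        minted = collateral * claimed_price
        self.balances[to] = self.balances.get(to, 0) + minted
        return minted


class SaferMinter(VulnerableMinter):
    """Same interface, but the price comes from an oracle and minting is capped."""

    def __init__(self, owner: str, oracle, max_mint: int):
        super().__init__(owner)
        self.oracle = oracle      # independent price feed (e.g. a median of sources)
        self.max_mint = max_mint  # hard per-call supply cap

    def mint(self, caller: str, to: str, collateral: int, claimed_price: int) -> int:
        if caller != self.owner:
            raise PermissionError("not owner")
        price = self.oracle()  # caller-supplied price is ignored entirely
        minted = min(collateral * price, self.max_mint)
        self.balances[to] = self.balances.get(to, 0) + minted
        return minted
```

In the vulnerable version, an attacker holding the owner key can pass any `claimed_price` and inflate supply at will; the safer version ignores the caller’s price and caps each mint, which is roughly the class of defect an AI-assisted audit would be expected to flag.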
For the AI token market, the leak raises another question. Bittensor’s decentralized network recently released Covenant-72B, a model that competes with Meta’s Llama 2 70B, sparking a 90% rally in TAO and driving subnet tokens to a total market capitalization of $1.47 billion.
A “step change” from a centralized lab like Anthropic resets the benchmark that decentralized AI projects must match. The competitive gap between what a well-funded corporate lab can build and what a permissionless network can produce just widened.
Anthropic said it is “being deliberate” about the model’s release given its capabilities. The draft blog noted that the model is expensive to operate and not yet ready for general availability. The company removed public access to the data cache after Fortune contacted it.
The leak itself is its own cautionary tale. A company building what it describes as an AI model with unprecedented cybersecurity capabilities left the announcement of that model in an unsecured, publicly searchable data store due to human error. The irony needs no elaboration.