As AI agents scale in crypto, researchers warn of critical security gap

The cryptocurrency industry is moving toward a future where AI agents handle everything from booking flights to executing trades and making payments, but new research suggests the infrastructure supporting this shift may not be secure.

McKinsey recently predicted that AI agents could mediate $3-5 trillion of global consumer commerce by 2030.

Coinbase founder Brian Armstrong said on X that “very soon” there will be more AI agents than humans conducting transactions on the Internet. Binance founder Changpeng Zhao was bolder, predicting that agents would make a million times more payments than people, all in crypto.

But a group of security academics and crypto-researchers has published a paper explaining that a largely overlooked piece of AI infrastructure is already being used to steal credentials and even drain crypto wallets.

The authors of the articles are researchers affiliated with the University of California, Santa Barbara, the University of California, San Diego, the blockchain company Fuzzland, and World Liberty Financial.

Powerful attack points

The team found that so-called “LLM routers,” or services that sit between users and AI models, can act as a powerful attack point exploited by malicious actors. These routers are designed to forward requests to models like OpenAI or Anthropic, but they also have full access to everything that passes through them, including sensitive data.

“LLM agents have moved beyond conversational assistants to systems that book flights, execute code, and manage infrastructure on behalf of users,” the researchers wrote, highlighting how quickly these tools are taking on real-world financial and operational tasks.

The LLM routers, or attack points, leave users extremely vulnerable as they assume they are interacting directly with a reputable AI model such as OpenAI, Grok or others, when in fact many requests pass through intermediate services that can see and modify that data, the researchers said.

According to one of the researchers, Chaofan Shou, the problem is no longer theoretical. He wrote on X that “26 LLM routers are secretly injecting malicious tool calls and stealing creds. One emptied our client’s wallet of $500,000. We also managed to poison routers to forward traffic to us. Within hours we can directly take over ~400 hosts.”

“A malicious router can replace a benign command with an attacker-controlled one or silently exfiltrate all credentials passing through it,” the researchers wrote.

The researchers said that because these systems can operate autonomously, including frequently approving and executing actions without human review, a single changed instruction can immediately compromise systems or assets.

For crypto users, the implications are serious, as private keys, API credentials, and wallet access tokens often pass through these systems in plain text. The researchers found several cases where routers simply collected these secrets, the paper reveals. In one case, a test Ethereum wallet was drained after its private key was exposed.

“Once exposed, credentials such as private keys can be copied and reused without the user’s knowledge,” noted the authors of the paper.

Cascading risks

The team also demonstrated how easy it is to expand the attack. By “poisoning” parts of the router’s ecosystem, essentially tricking services into forwarding traffic, they were able to observe and potentially control hundreds of downstream systems within hours.

“A single malicious router in the chain is enough to compromise the entire system,” the researchers wrote, emphasizing what they describe as a problem with the weakest link.

This suggests a cascading risk that even if a user trusts their AI provider, the infrastructure in between may not be trustworthy, they stated in their paper.

That creates a potential mismatch as industry leaders increasingly predict that AI agents will handle a growing share of crypto activity, while the underlying infrastructure still lacks guarantees that output has not been tampered with, they added.

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top