- An experimental AI agent unexpectedly tried to mine cryptocurrency during a training run
- The AI was only discovered after triggering security alerts on its servers
- Researchers say the behavior highlights new security challenges as AI agents gain more autonomy
AI models can surprise developers; that’s part of the point. But a group of researchers found a disturbing surprise when a training session for an experimental AI agent revealed that it was trying to divert computer resources towards cryptocurrency mining and to smuggle them to a remote server, despite not being asked to do anything of the sort.
Researchers working with Alibaba explained in a new paper that the model, called Rom, was designed to tackle complex coding challenges by interacting directly with software tools. It can issue terminal commands and navigate digital environments like an operator itself. But security alerts from Alibaba Cloud infrastructure alerted the team to what looked like a cyber security breach. It turned out that the activity came from the AI agent itself.
Rome was trained using reinforcement learning, which “rewards” an AI agent for actions that move it closer to its goals and discourages actions that lead to failure. Reinforcement learning often produces creative solutions. Sometimes these solutions look strange to human observers.
The article continues below
Somehow the AI model was generating commands that didn’t seem to relate to the programming tasks it had been assigned. Instead, the agent tried to divert graphics processing unit resources towards cryptocurrency mining. GPUs are well-suited to the task because they excel at parallel computation. The same hardware that powers AI training can also be used to mine digital currencies.
Rome had apparently discovered that the resources available in its environment could serve that purpose. The unsupervised AI wandered into the crypto mines. But the experiment took an even more bizarre turn when investigators noticed that the AI agent had created a reverse SSH tunnel to a remote server, basically a secret passage that avoids typical firewall protections. It is a technique often used by both system administrators to manage remote machines and in certain types of cyber attacks.
The model had never been instructed to make such a connection. Researchers say the behavior occurred spontaneously. The agent was simply experimenting with the options available to it.
Trickster AI
A typical AI agent can collect information from multiple sources, analyze it and generate reports without constant human supervision. Developers hope that such systems will eventually be widely used for research, programming or data analysis. But the same properties that make agents powerful also make them unpredictable. This is why people are interested in what OpenClaw can do or what is posted on Moltbook.
When a system is free to explore a computing environment, it can discover actions that technically achieve its goals but do not match the intentions of its creator. Rome is not sentient and cannot “try” to break rules in a human sense, but this is what the model’s behavior looked like.
Once the unusual activity was identified, the research team introduced additional security measures to prevent it from happening, such as tighter restrictions on network connections and tighter limits on how the agent could access hardware resources. They also refined the training environment so that the agent’s exploration remained focused on relevant programming activities rather than wandering into cryptomining potential.
And while change is common in AI development, the incident illustrates both the potential and the danger of AI agents. It’s a whimsical anecdote, but it touches on a serious topic in AI research. As systems gain greater autonomy, they interact with real infrastructure and participate in ways that mimic human behavior, thus leading to new security concerns.
Even when the consequences are small, unexpected behavior can reveal important vulnerabilities. In a larger or more sensitive environment, what Rome did could have been dangerous. Although AI agents are rolling out more widely than ever before, they need better security systems or it won’t just be a secret crypto mine that passes under our radar.
Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews and opinions in your feeds. Be sure to click the Follow button!
And of course you can too follow TechRadar on TikTok for news, reviews, video unboxings, and get regular updates from us on WhatsApp also.



