OpenAI CEO Sam Altman said U.S. policymakers must act now to prepare for advanced artificial intelligence, warning that the technology is moving from theory to everyday economic use.
In an interview with Axios, Altman said AI systems are already handling coding and research tasks that once required teams of programmers. Newer models will go further, he said, helping scientists make big discoveries and allowing individuals to do the work of entire groups.
That shift is already visible in cybersecurity, where some industry leaders say artificial intelligence is tipping the balance toward attackers.
For example, Charles Guillemet, chief technology officer at hardware wallet maker Ledger, told CoinDesk that AI tools are lowering the cost and skill needed to find and exploit software bugs. Tasks that once took months, such as reverse-engineering code or chaining multiple vulnerabilities together, can now be done in seconds with the right prompts.
The crypto industry saw more than $1.4 billion in assets stolen or lost in attacks last year, and that number could keep growing, Guillemet suggested. Developers also increasingly rely on AI-generated code, which can introduce new bugs at scale.
The answer, he said, will require stronger defenses such as mathematically verified code, hardware devices that keep private keys offline and a broader recognition that systems can fail.
AI in cyber, biosecurity
While Altman noted that artificial intelligence could accelerate drug discovery or materials science, he warned that it could also enable more powerful cyberattacks and lower the barrier to harmful biological research. Such threats could emerge within a year, he said, making coordination among government, technology companies and security groups urgent.
“We’re not that far off from a world where there are incredibly capable open source models that are very good for biology,” he said. “The need for society to be resilient against terrorist groups using these models to try to create new pathogens is no longer a theoretical thing.”
Another risk he cited was a “world-shaking cyber attack” that could occur as early as this year. Avoiding that, he said, would require “an enormous amount of work.”
He framed OpenAI’s policy ideas as a starting point, meant to spark debate about how to manage systems that learn quickly and act across many fields. Using AI to help defend against such attacks, he said, will be important.
Regarding the potential nationalization of OpenAI, Altman said the case against it hinges on the need for the United States to achieve “superintelligence” before its rivals do.
“The biggest case against nationalization would be that we need the United States to succeed in building superintelligence in a way that is consistent with the democratic values of the United States before anyone else does,” he said. “It probably wouldn’t work as a government project, I think that’s a sad thing.”
Still, Altman said he believes companies involved in artificial intelligence must work closely with the U.S. government.
Given his role at OpenAI, Altman also has a financial stake in how the sector develops. That position may shape how he frames both the urgency of regulation and the role of private companies like OpenAI in managing new risks that could affect the company’s competitive standing.
AI as a utility
Energy is one area where he sees rapid progress as essential, since greater capacity can keep computing costs down as AI demand grows.
Altman also pointed to early signs of job shifts. A programmer in 2026, he said, already works differently than a year earlier.
AI, he said, will become a kind of utility, like electricity, embedded across devices, with the price of basic intelligence falling even as top systems remain expensive.
“You want this personal super assistant running in the cloud,” Altman said. “If you use it a lot or use it at a high level of intelligence, you’ll have a higher bill a month, and if you use it less, you’ll have a lower bill.”
It is, he added, “incredibly important that the people building AI are reliable people of high integrity.”