I’ve been writing about AI for over a year now, and there’s still no such thing as a quiet week. There is always more to come: sometimes it’s positive, often it’s worrying, and sometimes it’s downright bizarre. This week is no exception, especially as broader geopolitical tensions shape, and are shaped by, AI in increasingly visible ways. We are on the brink of new models, new infrastructure and, inevitably, new concerns.
ICYMI: THE WEEK IN AI
This article is part of our ICYMI franchise where we round up the biggest stories of the week – this time in AI.
This week’s lead story captures something I’ve seen play out repeatedly over the past year: big, bold claims meeting real-world limits. Microsoft is now suggesting that Copilot be used for “entertainment purposes only,” which feels like a big shift in tone. Alongside that is a deep and fascinating New Yorker profile of Sam Altman, fresh insights into OpenAI’s revenue numbers, and new concerns about Anthropic’s latest model, Claude Mythos. Growing excitement, growing concern, and always the feeling that there is far too much AI news to absorb – which is exactly why this round-up exists. There’s also a quiz at the bottom to test your knowledge, so stay sharp.
Do you know Microsoft’s Copilot? The AI tool positioned as essential to the modern workplace and a flagship example of how AI can transform productivity? According to Microsoft’s official terms and conditions, it is “for entertainment purposes only”.
OpenAI, Google and Anthropic have similar disclaimers in their own terms. But what matters is the gap between how these tools are sold and what’s in the fine print. Microsoft wants companies to continue using Copilot. But the language shifts responsibility back to the user if something goes wrong.
This is a pattern we’ve seen play out in AI therapy, AI friendship, AI life coaching, and even AI romantic companionship. AI tools can play certain roles very well, but the risk is yours. So the big question here is not whether AI will make mistakes – we know it will. It is who is held responsible when it does. And right now, AI companies are doing everything they can to make sure it’s not them.
ChatGPT maker OpenAI says it’s making a lot of money – does that mean the AI bubble won’t burst?

One of the biggest questions hanging over the entire AI industry right now is whether it actually makes money. The answer is yes, but maybe not in the way you might think. It’s less about people using ChatGPT for recipes or late-night spirals and more about companies paying to integrate AI into their products and workflows.
But even if you don’t use AI that way at work, this matters, because that income changes the picture significantly. If companies can make serious money from AI, it becomes harder to argue that this is a passing hype cycle or a bubble about to burst any minute now. It also points to where things are heading: a greater focus on business customers, which could mean higher costs or tighter limits for regular users.
Iran threatens to bomb $30 billion Stargate AI data center backed by OpenAI, Nvidia and other tech giants

Reports suggest that Iranian officials have referred to technological infrastructure as a potential target in the event of escalation with the US and its allies. The biggest project attracting attention is Stargate, which is a major data center initiative in the United Arab Emirates backed by major tech players, including OpenAI. It is designed to provide massive amounts of the computing power needed to train and run advanced AI systems.
This is important because it shows how dependent AI is on massive infrastructure that requires enormous amounts of energy and stable geopolitical conditions to function. For regular users, it is a reminder that all the tools we rely on depend on that infrastructure. If it becomes too expensive, politically contentious or harmful to the environment, that could mean much higher costs, less access and slower progress.
More AI news you might have missed
Were you paying attention? Take our AI news quiz



