OpenAI says it wants to support sovereign AI. But it’s not doing so out of the kindness of its heart
Hello and welcome to Eye on AI. In this edition…Yoshua Bengio’s new AI safety nonprofit…Meta seeks to automate ad creation and targeting…Snitching AI models…and a deep dive on the energy consumption of AI.
I spent last week in Kuala Lumpur, Malaysia, at the Fortune ASEAN-GCC Economic Forum, where I moderated two of the many on-stage discussions that touched on AI. It was clear from the conference that leaders in Southeast Asia and the Gulf are desperate to ensure their countries benefit from the AI revolution. But they are also concerned about “AI Sovereignty” and want to control their own destiny when it comes to AI technology. They want to control key parts of the AI tech stack—from data centers to data to AI models and applications—so that they are not wholly dependent on technology being created in the U.S. or China.
This is particularly the case with AI because, while no technology is neutral, AI—especially large language models—embodies particular values and cultural norms fairly explicitly. Leaders in these regions worry their own values and cultures won’t be represented in these models unless they train their own versions. They are also wary of the rhetoric emanating from Washington, D.C., that would force them to choose between the U.S. and China when it comes to AI models, applications, and infrastructure.
Malaysia’s Prime Minister Anwar Ibrahim has scrupulously avoided picking sides, in the past expressing a desire to be seen as a neutral territory for U.S. and Chinese tech companies. At the Fortune conference, he answered a question about Washington’s push to force countries such as Malaysia into its technological orbit alone, saying that China was an important neighbor while also noting that the U.S. is Malaysia’s No. 1 investor as well as a key trading partner. “We have to navigate [geopolitics] as a global strategy, not purely dictated by national or regional interests,” he said, somewhat cryptically.
AI sovereignty will be difficult for many countries to achieve

But speakers on one of the panels I moderated at the conference also made it clear that achieving AI sovereignty was not going to be easy for most countries. Kiril Evtimov, the chief technology officer at G42, the UAE AI company that has emerged as an important player both regionally and, increasingly, globally, said that few countries could afford to build their own AI models and also maintain the vast data centers needed to support training and running the most advanced AI models. He said most nations would have to pick which parts of the technology stack they could actually afford to own. For many, it might come down to relying on open-source models for specific use cases where they didn’t want to depend on models from Western technology vendors, such as helping to power government services. “Technically, this is probably as sovereign as it will get,” he said.
Also on the panel was Jason Kwon, OpenAI’s chief strategy officer, who spoke about the company’s recently announced “AI for Countries” program. Sitting within OpenAI’s Project Stargate effort to build colossal data centers worldwide, the program offers a way for the company to partner with national governments, allowing them to tap OpenAI’s expertise in building data centers to train and host cutting-edge AI models.
But what would those countries offer in exchange? Well, money, for one thing. The first partner in the AI for Countries program is the UAE, which has committed to investing billions of dollars to build a 1-gigawatt Stargate data center in Abu Dhabi, with the first 200-megawatt portion expected to go live next year. The UAE has also agreed, as part of this effort, to invest additional billions in the U.S.-based Stargate data centers OpenAI is creating. (G42 is a partner in this project, as are Oracle, Nvidia, Cisco, and SoftBank.)
In exchange for this investment, the UAE is getting help deploying OpenAI’s software throughout the government, as well as in key sectors such as energy, healthcare, education, and transportation. What’s more, every UAE citizen is getting free access to OpenAI’s normally subscription-based ChatGPT Plus service.
OpenAI says it will “co-develop” its AI for Countries program

For those concerned that depending so heavily on a single U.S.-based tech company might undermine the idea of AI sovereignty, OpenAI sought to make clear that the version of ChatGPT it makes available will be tailored to the needs of each partner country. The company wrote in the blog post announcing the AI for Countries program: “This will be AI of, by, and for the needs of each particular country, localized in their language and for their culture and respecting future global standards.” OpenAI has also agreed to help make investments in the local AI startup ecosystem alongside local venture capital investors.
I asked Kwon how countries not as wealthy as the UAE might take advantage of OpenAI’s AI for Countries program if they didn’t have billions to invest in building a Stargate-size data center in their own country, let alone helping to fund data centers in the U.S. Kwon answered that the program would be “co-developed” with each partner. “Because we recognize each country is going to be different in terms of its needs and what it’s capable of doing and what its citizens are going to require,” he said.
He suggested that if a country couldn’t directly contribute funds, it might be able to contribute something else—such as data, which could help make AI models that better understand local languages and culture. “It’s not just about having the capital,” he said. He also suggested that countries could contribute through AI literacy, training, or educational efforts and also through helping local businesses collaborate with OpenAI.
Kwon’s answer left me wondering how national governments and their citizens would feel about this kind of exchange—trading valuable or culturally sensitive data, for instance, for access to OpenAI’s latest tech. Would they ultimately come to see it as a Faustian bargain? In many ways, countries still face the dilemma G42’s Evtimov alluded to: They can have access to the most advanced AI capabilities, or they can have AI sovereignty. But they may not be able to have both.
With that, here’s more AI news.
Jeremy Kahn jeremy.kahn@fortune.com @jeremyakahn
Want to know more about how to use AI to transform your business? Interested in what AI will mean for the fate of companies, and countries? Why not join me in Singapore on July 22 and 23 for Fortune Brainstorm AI Singapore. We will dive deep into the latest on AI agents, examine the data center build-out in Asia, and talk to top leaders from government, boardrooms, and academia in the region and beyond. You can apply to attend here.
The new Fortune 500 ranking is here

In total, Fortune 500 companies represent two-thirds of U.S. GDP, with $19.9 trillion in revenues, and they employ 31 million people worldwide. Last year, they combined to earn $1.87 trillion in profits, up 10% from the prior year—and a record in dollar terms. View the full list, read a longer overview of how the ranking shook out this year, and learn more about the companies via the stories below.
• A passion for music brought Jennifer Witz to the top spot at satellite radio staple SiriusXM. Now she’s tasked with ushering it into a new era dominated by podcasts and subscription services. Read more
• IBM was once the face of technological innovation, but the company has struggled to keep up with the speed of Silicon Valley. Can a bold AI strategy and a fast-moving CEO change its trajectory? Read more
• This year, Alphabet became the first company on the Fortune 500 to surpass $100 billion in profits. Take an inside look at which industries, and companies, earned the most profits on this year’s list. Read more
• UnitedHealth Group abruptly brought back former CEO Stephen Hemsley in mid-May amid a wave of legal investigations and intense stock losses. How can the insurer get back on its feet? Read more
• Keurig Dr Pepper CEO Tim Cofer has made Dr Pepper cool again and brought a new generation of products to the company. Now, the little-known industry veteran has his eyes set on Coke-and-Pepsi levels of profitability. Read more
• NRG Energy is the top-performing stock in the S&P 500 this year, gaining 68% on the back of big acquisitions and a bet on data centers. In his own words, CEO Larry Coben explains the company’s success. Read more
U.S. FDA says it is using an AI tool to speed its work. The drug regulator said it was using a generative AI tool called Elsa to improve efficiency in tasks like scientific reviews, clinical protocol evaluations, and identifying key inspection targets, Reuters reported. Elsa helps FDA staff by reading, summarizing, and comparing documents, including “adverse event reports”—which drug companies compile on cases in which patients are suspected of having a bad reaction to a drug or treatment—and drug packaging inserts. The agency said the tool is designed to keep sensitive data secure and was not trained on industry submissions, but it has not revealed details of how Elsa was built and trained. The FDA has said it plans to fully integrate AI across its operations by June 30.
Meta plans to use AI to fully automate ad creation on its platforms. The social media giant said it would use AI to fully automate both ad creation and targeting by the end of 2026, the Wall Street Journal reported. Under the plan, brands will be able to input a product image and budget, then have AI generate the complete ad—including visuals, text, and audience targeting—across Facebook and Instagram. The AI push is central to Meta CEO Mark Zuckerberg’s vision of using AI to boost ad revenue, especially by enabling the many smaller businesses that lack big creative budgets to churn out high-quality, precisely targeted ads.
AI ‘godfather’ Yoshua Bengio launches new AI safety nonprofit. Called LawZero, the new nonprofit’s mission is to develop “honest” AI systems that will aim to detect and block harmful behavior in other autonomous agents. The organization is already at work on a “Scientist AI” that can monitor other AI agents, acting as “psychologist,” according to Bengio, and helping to steer other AI agents away from harmful actions or alerting users to potential dangers. LawZero is starting with $30 million in funding from initial donors that include the Future of Life Institute, Skype cofounder Jaan Tallinn, and former Google CEO Eric Schmidt’s research foundation Schmidt Sciences. Read more from The Guardian here.
AI could save every U.K. civil servant two weeks per year. That’s according to a pilot study by the British government in which 20,000 civil servants used generative AI tools, including Microsoft 365 Copilot. The study found that each government employee saved about 26 minutes each day, equating to two weeks over the course of a year. The tools assisted with tasks like drafting documents, summarizing emails, and updating records. The success of the pilot paves the way for a wider roll-out of generative AI across government. You can read the U.K. government’s blog on the pilot here.
British government effort to rewrite copyright laws defeated in House of Lords. The bill was shot down by a vote of 242 to 116, marking the fourth time the upper house of Parliament has rejected a version of the government’s proposed Data (Use and Access) Bill. The Lords want stronger protections for artists and other copyright holders. The government is proposing to allow AI companies to train on copyrighted material unless rights holders specifically opt out. The latest version of the bill also included a provision mandating that AI companies disclose which copyrighted works they’ve trained on, which the government had hoped would win over critics. But that was not enough to convince opponents, who do not want to see Britain’s copyright protections eroded. You can read more from the BBC here.
It turns out almost all AI models will turn whistleblower. Last week, my colleague Sharon Goldman chronicled disclosures from Anthropic that its new Claude 4 Opus model would, in evaluation tests, often resort to blackmail to keep itself from being shut down. In other cases, Anthropic said, the model would try to act as a whistleblower if it discovered the company deploying it was acting unethically. This included trying to blow the whistle to various federal regulators, as well as drafting emails to investigative journalists at news organizations such as ProPublica. Well, it turns out this whistleblowing behavior is not confined to Claude 4 Opus.
A developer named Theo Browne created a benchmark called SnitchBench to see if other models would also turn whistleblower when presented with the same scenario used to test Claude 4 Opus. It turned out that a lot of models exhibited similar behavior, including Grok 3 Mini, Claude 4 Sonnet, and Gemini 2.0 Flash, all of which contacted government authorities 100% of the time and also sent emails to the media some of the time (Claude 4 Opus does seem to be an outlier in its propensity to tip off reporters). Google’s Gemini 2.5 Pro, OpenAI’s o4-mini, and Alibaba’s open-source Qwen 3 32B also contacted authorities frequently. AI expert Simon Willison wrote more about Browne’s snitching benchmark on his blog here, and found that DeepSeek’s R1 model likewise tries to tip off the Feds and the media.
I guess the bottom line is that companies deploying AI will need to worry about getting ratted out if they behave unethically.
June 9-13: WWDC, Cupertino, Calif.
June 11-14: Viva Technology, Paris
July 13-19: International Conference on Machine Learning (ICML), Vancouver
July 22-23: Fortune Brainstorm AI Singapore. Apply to attend here.
Sept. 8-10: Fortune Brainstorm Tech, Park City, Utah. Apply to attend here.
Sept. 17-18: Meta Connect
How bad is AI’s energy usage and carbon footprint? Pretty bad, and getting worse. But exactly how bad is almost impossible to determine without much more disclosure from the technology companies building AI models and hosting them in data centers worldwide. That was the conclusion of a thorough investigation into the conundrum of AI’s energy usage by MIT Technology Review’s James O’Donnell and Casey Crownhart. Among the most striking stats the story highlights are new projections from Lawrence Berkeley National Laboratory, published in December, forecasting that by 2028 more than half of all electricity going into data centers will be used for AI. By then, AI alone will be consuming as much power annually as more than one-fifth of all U.S. households. The story makes clear the urgent need for standardized reporting of AI energy consumption, and for more sustainable AI development practices to mitigate the potentially disastrous environmental consequences of the technology. The story is eye-opening and worth checking out.
Thanks for reading. If you liked this email, pay it forward. Share it with someone you know. Did someone share this with you? Sign up here. For previous editions, click here. To view all of Fortune's newsletters on the latest in business, go here.