The artificial intelligence industry has a creation myth. It goes roughly like this: a small group of brilliant people, aware that they are building something potentially dangerous, have taken it upon themselves to ensure the technology is developed safely. If they don't do it, someone else, probably China, will do it worse. The risks are real. The mission is necessary. Trust the founders.
Karen Hao, a journalist and author of Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, spent years pulling that myth apart. Speaking on Diary of a CEO, she laid out what she found underneath it.
The naming problem
The story starts, Hao argues, with a word. When a group of scientists gathered at Dartmouth College in 1956 to establish a new scientific discipline, they initially called the project Automata Studies. The name was changed because the group worried it pegged the work too narrowly. John McCarthy renamed the field artificial intelligence instead.
The problem, Hao told Diary of a CEO, is that intelligence has no agreed scientific definition. It has no goalposts. This turns out to be extremely useful if you are a company that wants to raise billions of dollars while continually redefining what success looks like. OpenAI, she notes, has offered different definitions of artificial general intelligence to Congress, to consumers, and to Microsoft. The ambiguity is not incidental. It is load-bearing.
How Sam Altman recruited Elon Musk
One of the more striking passages in the book concerns the founding of OpenAI and the relationship between Sam Altman and Elon Musk. In 2015, Altman wrote a blog post describing superhuman machine intelligence as probably the greatest threat to humanity's continued existence. The language closely mirrored warnings Musk had been making publicly at the time, including a speech at MIT in which he compared developing AI to summoning a demon.
Hao told Diary of a CEO that documents from the subsequent lawsuit between the two men suggest Musk came to believe this mirroring was deliberate, that Altman had engineered his language to secure Musk as a co-founder. Whether that reading is correct or not, Musk donated significant money to OpenAI before leaving the organisation feeling, in his own account, that he had been manipulated.
The decision to make Altman rather than Musk the CEO of the for-profit entity was reportedly influenced by Greg Brockman, the company's then-chief technology officer, who concluded that Musk was too unpredictable a figure to be handed control of such a powerful technology. Musk's departure set the course for what became a personal and legal confrontation with Altman that continues to define the industry's internal politics.
The firing that wasn't
Hao's account of Sam Altman's firing by OpenAI's board in late 2023, and his reinstatement days later, centres on Ilya Sutskever, the company's co-founder and then-chief scientist. Sutskever had two priorities, she argues: achieving AGI, and doing so safely. He came to believe Altman was undermining both, by pitting teams against each other and creating an internal environment built on competition rather than collaboration.
The chaos that followed the release of ChatGPT, with servers crashing and the company essentially unprepared for the scale of its own success, deepened those concerns. Sutskever approached independent board member Helen Toner, then other senior figures including Mira Murati, the chief technology officer. The case built against Altman included the revelation that the OpenAI Startup Fund, which had been presented as a company vehicle, was in fact owned by Altman personally.
The board moved quickly, without informing Microsoft, OpenAI's largest investor. The backlash was immediate. Altman was reinstated. Sutskever never returned. Murati later left too. Hao's point is less about the specific grievances than what they reveal about the governance structure: a company claiming to act in the public interest, making consequential decisions behind closed doors, accountable to almost no one outside itself.
The empire framework
The book's central argument is that the major AI companies operate according to an imperial logic. They claim resources that are not theirs, including personal data, the intellectual property of artists and writers, and land for data centre construction. They contract hundreds of thousands of workers globally under conditions Hao describes as inhumane. They monopolise the scientific research agenda, funnelling money to favoured areas and, in documented cases, pushing out researchers whose findings are inconvenient.
The example she gives to Diary of a CEO is Dr Timnit Gebru, fired by Google after co-authoring a critical paper on large language models. Hao herself was served legal papers requesting all communications that might have involved Musk, after civil society groups she had spoken to were suspected, incorrectly, of being funded by him to block OpenAI's conversion to a for-profit entity.
The narrative these companies construct, she argues, is the final component of the imperial framework: a promise of utopia if the right people are allowed to take what they need, combined with a warning of catastrophe if the wrong people, the rival empire, get there first.
The cost on the ground
The section of Hao's argument that lands hardest is about data annotation, the human labour that trains AI systems. Hundreds of thousands of workers worldwide, many of them highly educated professionals who cannot find work elsewhere, are employed by third-party firms to label data at speed, competing with each other for tasks that open and close without warning.
One worker she interviewed described screaming at her child for interrupting her while she raced to complete a project, and feeling that she had become a monster. Award-winning Hollywood directors are quietly doing this work to make a living. The jobs that AI is said to be creating, Hao notes, are often these: lower-paid, lower-status, and designed to train the next round of automation that will eliminate them too.
Meanwhile, data centres the size of Central Park are being built in communities that were not consulted, drawing power comparable to a significant fraction of a major city's demand, and, in the case of one facility in Memphis, running on dozens of methane gas turbines installed without local knowledge.
The alternative
Hao is not an AI abolitionist. Her book, she told Diary of a CEO, distinguishes between different kinds of AI development. She describes DeepMind's AlphaFold, which uses small curated datasets to model protein structures and has genuine medical utility, as a "bicycle of AI": something that provides real benefit without the resource extraction and labour exploitation of the frontier model race. The capabilities, she argues, do not require the empire.
The question she leaves open is whether the empire requires the capabilities, or whether the capabilities are simply the justification the empire needed.