Empire of AI: The Book That Changes How You See the Future

Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI by Karen Hao | Penguin Press, 2025 | ★★★★½ (4.5/5)

A Friday in Las Vegas, and Nothing Was Ever the Same

It was a Friday. Sam Altman, CEO of OpenAI and the most celebrated face of the generative AI era, was in Las Vegas for Formula One’s first race there in a generation. He had just appeared at the APEC CEO Summit, where Laurene Powell Jobs asked him why he devoted his life to this work. He told the room it would be “the most transformative and beneficial technology humanity has yet invented.” He had spent months globe-trotting, being compared, without irony, to Taylor Swift.

Then he logged into a Google Meet. Four board members stared back at him. Ilya Sutskever, OpenAI’s chief scientist, delivered the message: Altman was fired. The announcement would go out momentarily.

“How can I help?” Altman asked.

That question, almost instinctive, almost surreal, is the kind of detail that makes Karen Hao’s Empire of AI feel less like a business book and more like a psychological novel. It is the best book written yet about the AI industry, not because it covers the most ground, but because it sees through the narrative we have been sold about that ground.

Hao is no outsider looking in. An award-winning investigative journalist who spent years covering OpenAI for MIT Technology Review, the Wall Street Journal, and The Atlantic, she conducted over 300 interviews with approximately 260 people for this book. More than 150 of those were with current or former OpenAI executives and employees. OpenAI and Altman declined to cooperate. That refusal hangs over every page, but it does not weaken the book. If anything, what Hao built without them is more damning precisely because it is so meticulously sourced.

This is not a tell-all. It is something more uncomfortable: a prism. Through the story of one company, Hao illuminates something vast about power, extraction, and the language used to disguise both.


What the Book Is Actually About

Empire of AI covers roughly a decade of OpenAI’s evolution, from the founding dinner at the Rosewood Hotel on Sand Hill Road in 2015, where Elon Musk arrived over an hour late, through the ChatGPT explosion in late 2022, the November 2023 board ouster and reinstatement of Altman, and into OpenAI’s positioning as what Hao calls, in the book’s most arresting metaphor, an empire.

The origin story is, by now, familiar in outline. Musk and Altman, along with Dario Amodei, Ilya Sutskever, and Greg Brockman, coalesced around a shared anxiety: if artificial general intelligence was coming regardless, better that someone with a conscience built it first. Altman had emailed Musk in May 2015: “If it’s going to happen anyway, it seems like it would be good for someone other than Google to do it first.” The result was a nonprofit, structured, as Altman proposed, so that “the tech belongs to the world.”

The book traces how that aspiration transformed, incrementally and then all at once, into something else. The nonprofit begat a capped-profit subsidiary. The subsidiary swallowed the parent. A $1 billion deal with Microsoft became a $13 billion relationship of near-total dependence. ChatGPT launched in November 2022, became the fastest-growing consumer app in history, and changed the scale of everything. OpenAI’s latest funding round in 2024 closed at $6.6 billion, valuing the company at $157 billion. Investors demanded their money back if it did not convert to a for-profit within two years.

Somewhere along the way, “beneficial to all of humanity” became a sentence that could mean nearly anything. That is the point.


The Key Themes That Will Stay With You

1. The November Board Drama, and What It Really Reveals

Most readers will come to this book knowing the broad arc of the November 2023 crisis. They will leave understanding something far more precise: that the drama was not a glitch in OpenAI’s governance. It was governance operating exactly as it had always worked, which is to say, operating on the informal authority of one man.

Hao reconstructs the ouster with granular detail. Employees learned Altman was fired at the same moment the public did. Sutskever held an all-hands meeting two hours later, stiff and unrehearsed, and told confused employees to read the press release “maybe a few times.” The board’s stated reason, that Altman had not been “consistently candid,” tracked a pattern that Hao documents going back to his time at Loopt, his first startup: tiny dishonesties, what people around him called “paper cuts,” accumulating into an atmosphere of pervasive distrust. The board sided with him then, too.

The ouster lasted five days. Employees signed a letter threatening to leave if Altman was not reinstated. He came back. Helen Toner, one of the board members who voted to fire him, was gone. The structural lesson the company drew from the episode was not to improve transparency. It was to entrench tighter control.

2. The “Divine Right” of Founders

One of the book’s central arguments is that the mythology of tech founders, the idea that their vision is uniquely correct and uniquely necessary, functions as a governing ideology. Hao draws an analogy Altman himself almost invites: he once told an audience that his favorite book of the previous year was a collection of Napoleon’s quotes, praising the emperor’s “incredible understanding of human psychology” and his ability to build systems of control.

The story of how Altman rose is fascinating in ways that go beyond the AI industry. Paul Graham, his early mentor, once said: “You could parachute him into an island full of cannibals and come back in five years and he’d be the king.” Graham added, separately: “Sam is extremely good at becoming powerful.” People close to Altman describe a man who listens with extraordinary attention, who remembers every detail about you, and who uses that knowledge to figure out how to influence you. Geoff Ralston, who took over running Y Combinator after Altman, describes his fundraising as the ability to “tell a tale that you want to be part of.”

This is not a portrait of a villain. Hao is too careful for that. But it is a portrait of someone whose orientation toward power and toward the narrative that justifies that power has shaped an entire industry.

3. Science in Captivity

Chapter 7 of the book carries a title that deserves to be quoted directly as a concept: “Science in Captivity.” As OpenAI shifted toward commercialization, the relationship between its research mission and its financial imperatives grew more tangled. Researchers who wanted to study risks, publish findings, or slow down deployment found their work deprioritized, defunded, or filtered through communications teams.

The trust and safety team, numbering just over a dozen people during ChatGPT’s launch, scrambled to understand user behavior while their monitoring platforms crashed alongside the servers. A project called Fact Factory, designed to use GPT-4 to moderate its own outputs, was shelved because it consumed too much compute. Hao documents how the launch flew through safety checks with unusual speed, driven by competitive pressure and the belief that even a flawed product was better than losing first-mover advantage.

This section will resonate with anyone who works in an institution where caution and speed are perpetually in tension.

4. Disaster Capitalism and the Formula for Empire

The book’s most damning theoretical claim is this: that OpenAI’s mission language, “ensuring AGI benefits all of humanity,” functions not as a sincere ethical commitment but as a remarkably flexible tool for centralizing resources. Hao identifies three ingredients in what she calls the formula for empire. The mission centralizes talent by rallying people around a grand, almost religious purpose. It centralizes capital by framing OpenAI’s growth as a necessary defense against authoritarian alternatives. And because “beneficial” and “AGI” remain undefined, the mission can be stretched to justify nearly any decision.

That third ingredient is the coldest part of the analysis. Altman himself called AGI “a ridiculous and meaningless term” just two days before his firing. It is, Hao argues, a term useful precisely because it is meaningless.

5. Plundered Earth and the People Who Paid the Real Costs

This is where the book departs most sharply from the typical Silicon Valley narrative, and where it matters most.

Hao spent significant time embedded in communities around the world to document the material costs of building the AI empire. Kenyan content moderators, paid under $2 per hour, were tasked with reading and labeling psychologically harmful material, including graphic violence and sexual abuse imagery, to make ChatGPT less toxic. Data centers gulp water from stressed local supplies. Artists, writers, and journalists found their life’s work scraped without consent into training datasets. The book documents a Stanford study confirming child sexual abuse material in one major training dataset.

These are not peripheral footnotes. Hao treats them as the actual ground floor of the empire, the laborers and communities whose dispossession enabled the accumulation at the top. The colonial parallel she draws is not rhetorical flourish. It is structural: a small group extracts enormous value from the many, frames that extraction as progress, and ensures the people bearing the costs have little say in how it is managed.

As someone who has followed the AI boom, I found these chapters the most difficult to read and the most important.

6. Gods, Demons, and the People Who Believed

Empire of AI is also, quietly, a book about true believers. Ilya Sutskever, who trained under Geoffrey Hinton and rose to become OpenAI’s chief scientist, is one of its most compelling figures. Hao describes a man who wore shirts with cuddly animals to the office, painted amateur canvases of flowers in the shape of OpenAI’s logo, and spoke about building AGI with a force of sincerity that was “endearing to some and off-putting to others.” He genuinely believed. He was also one of the four board members who voted to fire Altman. Five days later, he signed the letter asking for Altman’s return.

What happens to idealism inside an empire is one of the book’s quiet tragedies.


Strengths and Weaknesses

The strengths are considerable. Hao’s sourcing is extraordinary, and the narrative she builds from those 300-plus interviews has the lived texture of scenes that clearly happened rather than scenes assembled from press releases. The book’s global perspective, from Kigali to Nairobi to the halls of Microsoft in Redmond, is genuinely unusual in tech journalism and is the most important thing that distinguishes it from existing accounts of OpenAI.

The colonial empire framing is provocative and, on balance, earned. But it will frustrate readers looking for nuance about the genuine complexity of Altman and others. This is not a balanced corporate profile. Hao is clear about that in her author’s note: the book is meant to be “a prism through which to see far beyond this one company.” If you pick it up expecting a conventional business biography, you will find it pulls in a different direction.

The density of the book’s final third, covering the governance chaos of 2024 and OpenAI’s for-profit conversion, is occasionally hard going. The cast of characters grows large. A glossary or character list in the print edition would have helped.

These are minor complaints against a genuinely important work.


Who Should Read This Book

This book is for leaders, tech professionals, policy makers, ethicists, and anyone who wants to understand how idealism transforms into institutional power. If you have read Bad Blood by John Carreyrou, Walter Isaacson’s Elon Musk, or The Coming Wave by Mustafa Suleyman, you will find Empire of AI a sharper, more globally aware companion to all three. Fans of investigative journalism who want to understand how the AI industry actually works, rather than how it presents itself, will find this essential.

If you are looking for a purely celebratory account of generative AI’s promise, or a technical tutorial on how large language models work, this is not the book for you. Hao’s interest is in power, not in prompts.

You might also enjoy our reviews of The Coming Wave and Genius Makers by Cade Metz for related context on the AI industry’s deeper history, and our breakdown of Bad Blood for a comparative look at how ambitious narratives inside secretive organizations can overwhelm institutional safeguards.


Final Verdict: ★★★★½ (4.5/5)

Empire of AI is the most important book about the AI industry published in 2025. Not the most optimistic. Not the most technically sophisticated. But the most honest about the mechanisms through which a genuinely world-altering technology is being built, controlled, and sold. Karen Hao has spent seven years reporting on OpenAI, and it shows. She has produced something durable: a book that will matter more, not less, as the years pass and the empire either expands or falls.


Get Your Copy

If you want to understand the forces actually shaping our AI future, and what they mean for the rest of us, grab your copy of Empire of AI on Amazon.

One of the most important books of 2025 and, I suspect, one of those books that ages into a primary document. If you are buying one book about AI this year, make it this one.


Frequently Asked Questions

Is Empire of AI biased against Sam Altman? It is critical of him, and it does not pretend to be neutral. But Hao is careful to distinguish between the man and the structure. Her sharpest arguments are about the mechanisms of power, not about Altman’s personal character. The book draws on over 300 interviews and she documents her methodology in detail. OpenAI and Altman declined to participate, which she discloses transparently.

What is the “empire of AI” metaphor? Hao argues that OpenAI’s structure, its mission language, its extraction of data and labor, its concentration of capital and talent, and its marginalizing of communities who bear the costs, parallels the structure of colonial empires. The metaphor is not purely rhetorical. She treats it as an analytical framework for understanding how power and resource accumulation actually work inside the company and the broader industry.

Do you need a technical background to read this book? No. Hao explains the relevant technical history clearly and without jargon. The book is primarily a political and human story, not a technical one.

Is Empire of AI worth reading in 2026? More than ever. OpenAI completed its transition to a for-profit structure, its board governance remains contested, and the global regulatory conversation about AI is heating up. The context Hao builds in this book is essential for understanding those developments.

How does this compare to other AI books? It is more globally focused and more structurally critical than Genius Makers by Cade Metz or The Coming Wave by Mustafa Suleyman. It is closer in spirit to investigative works like Bad Blood, with a scope that includes communities in Kenya, Rwanda, and Europe rather than staying inside Silicon Valley.

Did Karen Hao get access to OpenAI? No. OpenAI and Sam Altman chose not to cooperate with the book. Hao built her account from over 300 interviews with other sources, including more than 90 current or former OpenAI employees, as well as executives from Microsoft, Anthropic, Meta, Google, and DeepMind.
