The second week of the trial between Elon Musk and OpenAI has produced something more revealing than legal arguments: a granular, real-time record of how the most consequential artificial intelligence company in the world was built, nearly destroyed, and rebuilt, all mediated through text messages, journal entries and internal memos that their authors never expected to see daylight.
The discovery process has surfaced material that reads less like corporate governance and more like a group chat among friends who stopped trusting each other. Greg Brockman's digital journal contains passages in which he contemplated actions he described as "morally bankrupt." Texts between Sam Altman and Mira Murati during Altman's brief firing in November 2023 show the two exchanging blunt, almost absurdly direct messages as the company's leadership imploded in real time, with Murati at one point describing the situation as "directionally very bad."
Satya Nadella, the chief executive of Microsoft and arguably the most consequential external figure in the saga, appears in the record as a conspicuous absence. Despite being the head of OpenAI's largest investor and compute partner, Nadella has left almost no paper trail, communicating through intermediaries and responding to direct messages only after long delays, if at all. The contrast with the voluminous, often incautious communications of OpenAI's founders is striking and presumably deliberate.
The testimony has also filled in details about the original split between Musk and OpenAI. Musk demanded majority control of a proposed for-profit subsidiary. When the other co-founders refused and proposed equal equity stakes, Musk left, making aggressive threats about starting a competitor and claiming that Demis Hassabis, the head of Google DeepMind, would "ruin the world" if OpenAI could not stop him.
Shivon Zilis, a former OpenAI board member and the mother of four of Musk's children, testified that after Musk departed, she passed intelligence about OpenAI's activities back to him, a disclosure that raises questions about the integrity of the board's governance during a critical period. Her testimony was notable for another reason: when unable to recall specific details, she told the court "it's not in my neurons," a turn of phrase that quickly drew attention.
The broader picture that emerges is of an organisation whose trajectory was shaped as much by personal relationships, rivalries and insecurities as by any coherent strategy. The board that fired Altman in November 2023 never publicly explained its reasoning. Murati, who was briefly made chief executive, was simultaneously asked to collect evidence of Altman's management failings while serving as the primary communication channel between Altman and the board that had just removed him. The entire episode was resolved not through governance but through a cascade of text messages, phone calls and threats to quit.
For the legal profession, the trial offers a preview of what discovery will look like in the AI era. Executives who feed privileged legal advice into ChatGPT or Claude may be waiving attorney-client privilege. Every text, every journal entry, every voice memo transcribed by an AI assistant is potentially discoverable. The volume of material available to litigators is about to expand by orders of magnitude.
Musk is seeking up to $134 billion in damages. The trial continues through late May, with testimony expected from Nadella and OpenAI co-founder Ilya Sutskever. Whatever the jury decides, the record being created in Oakland will shape how the founding of OpenAI is understood for decades.