The trial between Elon Musk and OpenAI has delivered what trials of this kind always deliver: a lot of uncomfortable detail that the parties involved would have preferred to keep private.
The central question that has emerged is not really about contractual obligations or corporate governance. It is about trust: specifically, whether Sam Altman, the CEO of OpenAI, can be trusted by the people who work with him, invest in him, and depend on his representations about the most consequential technology of the era.
The evidence has not been flattering. Multiple witnesses who have worked closely with Altman described him as conflict-averse to a fault, someone who tells people what they want to hear rather than what they need to know.
That is a personality trait, not a crime, but in the context of a company developing artificial general intelligence, it is a significant concern.
More damaging was the revelation that Altman's previous statements about not holding equity in OpenAI were untrue. His defence, that "everyone understands" what it means to be a passive investor in a venture capital fund, did not land well.
The gap between the public image of a selfless mission-driven leader and the reality of someone with a financial stake in the outcome is precisely the kind of discrepancy that erodes confidence.
The comparison with Musk is instructive. Both men have been accused of being economical with the truth. But their styles differ sharply. Musk is combative and confrontational, willing to fight publicly and absorb the reputational damage. Altman is smoother, more affable, preferring to deflect rather than engage. In a courtroom, the affable approach can look evasive.
The broader significance extends well beyond the two men. OpenAI is a privately held company developing technology that could reshape the global economy. The public has almost no insight into its internal decision-making, its safety practices, or its commercial commitments. In that context, trust in the leadership is not a nice-to-have. It is the only mechanism the outside world has for evaluating whether the company is behaving responsibly.
The jury's verdict was still pending at the time of recording, and its consequences will ripple through the AI industry regardless of which way it falls.
The trial has made one thing clear: the AI industry's biggest trust deficit is not between humans and machines. It is between humans and the people building them.