State of AI in 2026: Why the race is faster, messier, and more open than it looks
A wide-ranging discussion on the Lex Fridman Podcast suggests the artificial intelligence industry is being shaped less by hidden breakthroughs and more by access to computing power, organisational culture and the ability to turn powerful models into usable products.
The AI race is not about ideas anymore
On the latest episode of the Lex Fridman Podcast, Lex Fridman is joined by Nathan Lambert of the Allen Institute for AI and Sebastian Raschka, author of Build a Large Language Model (From Scratch).
Their core claim is blunt: the AI race is no longer being won by clever new ideas. Those diffuse quickly. What differentiates winners is access to compute, budgets, organisational focus, and the ability to execute relentlessly.
Researchers move between labs. Architectures converge. What does not equalise is who can afford to train, serve, and iterate on massive models at scale.
China’s open-weight moment is real
The conversation repeatedly returns to China. The “DeepSeek moment” in early 2025, when the Chinese lab released a near–state-of-the-art open-weight model at dramatically lower cost, changed the global dynamic.
Since then, DeepSeek has been joined by a fast-growing field of Chinese competitors such as Zhipu AI, MiniMax, and Moonshot AI. Their models are often larger, more open, and released under friendlier licences than Western equivalents.
The implication is uncomfortable for US companies. Open-weight Chinese models are becoming good enough to threaten American business models, especially if they remain cheap, modifiable, and unrestricted.
Scaling laws are not dead, just expensive
Despite regular declarations that scaling is over, Lambert and Raschka argue the opposite. The relationship between compute and performance has held for more than a decade and across 13 orders of magnitude.
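That compute-performance relationship is usually stated as a power law: loss falls smoothly but ever more slowly as training compute grows. A toy sketch below illustrates the shape of such a curve; the constants are made up for illustration and are not from the episode or any published fit.

```python
# Toy illustration of a compute-vs-loss power law: loss = a * C**(-b) + floor.
# The constants a, b, and floor are illustrative, not empirical values.

def scaling_loss(compute_flops: float,
                 a: float = 50.0,
                 b: float = 0.05,
                 floor: float = 1.5) -> float:
    """Predicted loss for a given training-compute budget under a simple power law."""
    return a * compute_flops ** (-b) + floor

# Each 10x increase in compute buys a smaller, but still nonzero, improvement,
# which is why the curve looks "expensive" rather than "dead".
for exponent in (20, 22, 24, 26):
    print(f"1e{exponent} FLOPs -> loss {scaling_loss(10.0 ** exponent):.3f}")
```

The key property is the one the guests emphasise: the curve keeps bending down across many orders of magnitude, so gains never stop, they just get costlier per increment.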
What has changed is the cost curve. Training a model might cost tens of millions of dollars; serving it to millions of users can cost billions.
That reality is pushing labs to focus on post-training, reinforcement learning with verifiable rewards, and inference-time scaling. The goal is to extract more value from models without endlessly increasing parameter counts.
Coding is the first true killer app
If there is one area where large language models already feel transformative, it is software development.
Tools built on models like Claude, GPT-4-class systems, and specialised coding agents are changing how programmers work. The shift is from writing lines of code to specifying intent, reviewing output, and steering systems.
This does not mean programmers disappear. It means leverage increases sharply for those who know how to direct models, debug failures, and design systems. Coding becomes less about syntax and more about systems thinking.
Open vs closed models is a real fault line
The episode highlights a growing split. Closed frontier labs optimise for peak performance and enterprise revenue. Open-weight ecosystems optimise for experimentation, trust, and adaptability.
Chinese companies are leaning hard into openness, partly for strategic influence and partly because they lack a clear monetisation path. US labs are more cautious, constrained by legal risk and commercial pressure.
Long term, consolidation looks inevitable. Training frontier models is too expensive for dozens of independent players to survive indefinitely.
AGI is less useful than people think
On timelines to AGI, the guests are sceptical of clean milestones. AI capabilities are jagged. Models will be superhuman at some tasks and frustratingly weak at others.
Rather than asking when AGI arrives, the more relevant question is when AI delivers an obvious, sustained economic impact. That moment has not fully arrived yet.
For now, AI remains a powerful tool, not an autonomous replacement for human agency.
The human bottleneck remains
Perhaps the most striking takeaway is cultural, not technical. Progress depends on people willing to work extreme hours, tolerate uncertainty, and make big bets with incomplete information.
That intensity is driving breakthroughs, but also burnout, hype cycles, and bubbles. The next few years will test which organisations can convert frenetic progress into durable value.
AI in 2026 is not slowing down. But it is growing up.