The six largest American technology companies are on course to spend more than $750 billion on artificial intelligence this year. That figure sits at the center of a debate about what, exactly, all that money will produce, and on what timeline.
Arvind Narayanan, a professor of computer science at Princeton and director of its Center for Information Technology Policy, offers a framing that cuts against both the utopian and catastrophist positions.
AI is a general-purpose technology, he argues, and it will integrate into society the way electricity and the internet did: gradually, unevenly, and with consequences that only become legible well after the investment has been made.
The reliability problem
One reason the integration will be slower than headline investment figures suggest is reliability. Capability benchmarks, which tend to drive coverage and valuation, measure what an AI system can do at its best. They do not measure what it does consistently.
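One way to see the gap: many benchmarks score a model's best attempt out of several tries, while a deployed system gets exactly one. A minimal sketch of that difference, with the success probability and attempt count assumed purely for illustration:

    # Illustrative numbers, not measurements: a model that succeeds on any
    # single attempt with probability p looks far stronger under best-of-k
    # scoring than it will in single-shot deployment.
    p = 0.90   # assumed per-attempt success rate
    k = 5      # attempts allowed under a hypothetical best-of-k benchmark

    # Succeeds if any one of k attempts does (attempts assumed independent).
    pass_at_k = 1 - (1 - p) ** k
    print(f"single attempt: {p:.1%}")                # 90.0%
    print(f"best of {k} attempts: {pass_at_k:.3%}")  # 99.999%

The benchmark headline and the deployed experience describe the same model, measured two different ways.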
For customer service, legal work, or medical advice, consistency is the product. An AI system that answers correctly 90% of the time is not a customer service agent. It is a liability.
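The arithmetic behind that judgment is unforgiving: if resolving one customer issue takes several exchanges, each of which must be right, per-query accuracy compounds downward, and a 10% error rate guarantees a steady stream of failures at volume. A back-of-the-envelope sketch, with every number assumed for illustration:

    # Illustrative arithmetic: per-query accuracy compounds across a
    # multi-step conversation, and a small error rate adds up at scale.
    p = 0.90         # assumed accuracy on any single query
    steps = 5        # assumed exchanges needed to resolve one issue
    per_day = 1_000  # assumed conversations handled per day

    clean = p ** steps   # every exchange must be right
    print(f"issues resolved without error: {clean:.1%}")                    # 59.0%
    print(f"expected botched issues per day: {per_day * (1 - clean):.0f}")  # ~410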
That liability question has already surfaced in practice. A Canadian tribunal ordered Air Canada to compensate a passenger after the airline's customer service chatbot gave him incorrect information about bereavement fares, rejecting the argument that the bot was responsible for its own statements. Regulatory frameworks in healthcare go further, preventing AI from making autonomous clinical decisions precisely because the cost of a single failure is too high.
AI washing and the short-term view
Some companies have used AI adoption as cover for cost-cutting, citing the technology when announcing layoffs. Drew Matus, chief market strategist at MetLife Investment Management, pushes back on that logic.
Knowledge workers, he argues, are more likely to find their work expanded by AI than replaced by it. The technology generates new questions as fast as it answers old ones, increasing demand for people capable of interrogating the results.
The number of job postings for software engineers has continued to rise through the period of heaviest AI adoption in the sector. That data point does not settle the long-term argument, but it complicates the simpler versions of the displacement story.
What AI cannot do
Narayanan draws one firm boundary: AI cannot predict the future. The limitation is not computational. It is epistemic. The evidence from which any forecast must be built is genuinely thin, and extrapolating from present patterns has always produced poor results in domains like geopolitics and military strategy. That constraint will not be engineered away.
Geoffrey Hinton, whose Nobel Prize-winning work underpins much of what modern AI can do, is less sanguine about the longer arc. He has compared the trajectory of AI development to an approaching force that will arrive smarter than humans within a decade. Narayanan treats that framing as one risk among many, each better addressed with targeted solutions than with a single theory of alignment.
The honest answer, as Narayanan concedes, is that nobody knows how this ends. Not the optimists, not the pessimists, and not the AI systems themselves.