Apple is planning to let iPhone users choose which artificial intelligence model powers Siri, Writing Tools and Image Playground, opening its most important software platform to third-party providers, including Google's Gemini and Anthropic's Claude.
The feature, internally called Extensions, is slated for iOS 27 this autumn and would allow any AI provider that builds support into its App Store app to plug directly into Apple's system-level features, according to Bloomberg.
Users would be able to set a preferred model in Settings and even assign distinct Siri voices depending on which provider is answering, making it clear whether Apple's own system or an external chatbot is handling a query.
The change could be transformational for the iPhone because it would turn the device from a closed assistant ecosystem into an open routing layer for the most powerful AI models available at any given moment.
Rather than betting on a single provider, Apple would effectively let the market decide which model handles which task, preserving competition and giving users access to capabilities Apple has struggled to build itself.
But there is another side to the announcement, and it is harder to spin positively.
The Extensions system arrives against a backdrop of repeated failure in Apple's own AI efforts.
The company announced an overhauled, AI-powered Siri at its developer conference in June 2024 and marketed the technology alongside the iPhone 16 launch, but was forced to publicly delay the release into 2026 after the work fell behind schedule.
The fallout claimed Apple's AI chief, John Giannandrea, whose role was dramatically reduced in March 2025 before he formally departed the company last month.
His team had reversed course on plans for the Siri upgrade so many times that colleagues reportedly nicknamed the AI division "AIMLess".
Apple replaced Giannandrea with Amar Subramanya, a former head of engineering for Google's Gemini Assistant, a hire that itself signalled the direction of travel.
In January, Apple and Google announced a multi-year partnership under which future Apple Foundation Models would be based on Gemini, meaning Google's technology would power the native version of Siri and Apple Intelligence features regardless of whether a user ever touches the Extensions menu.
Opening the platform to Claude and others adds a veneer of consumer choice, but the underlying architecture tells a different story: Apple tried to build competitive AI in-house, failed, and is now outsourcing the intelligence layer of its most important product to rivals.
For the billion-plus iPhone users worldwide, the practical benefits are clear: better AI responses, continuity with existing chatbot subscriptions, and the freedom to switch providers as models improve.
For Apple, it is a pragmatic acknowledgement that in a market moving as fast as artificial intelligence, controlling the hardware and the distribution may matter more than controlling the model.
The recap
- Apple will allow Siri to use third-party AI models.
- Users can set custom voices per external AI model.
- Apple plans to deliver a new Siri using Gemini models.