Siri, Can I Speak to Someone Smarter?
Why Apple Needs to Stop Playing Voice Assistant Gatekeeper
Craig Federighi didn't say "we can't do everything" exactly, but he might as well have. In a recent interview, Apple's software chief made a telling comparison: just like the internet spawned Amazon and Google without Apple needing to build those services, AI will create opportunities that "don't necessarily happen inside of Apple or ultimately happen with Siri."
Read between the lines. Here's Apple, the company that supposedly thinks different, essentially admitting that Siri won't be the answer to every voice interaction. After 14 years of trying to make their assistant competitive, they're quietly acknowledging the obvious.
And you know what? They're right. Siri can't be everything to everyone.
The Great Siri Stagnation
Siri wasn't Apple's from the start. It was an acquisition, a startup's AI that Apple bought and integrated into the iPhone. And for 2011? It was pretty impressive. Set timers, check weather, tell jokes. Good enough for its time. But here we are in 2025, and Siri still feels stuck in that era while ChatGPT casually writes your emails and Google Assistant actually understands context.
The gap isn't subtle. It's embarrassing. Ask Siri to "call Mom" and it might try calling someone named "Tom" instead. Basic speech recognition that should work flawlessly still trips up regularly. Meanwhile, ChatGPT handles complex, nuanced requests without breaking a sweat.
Apple delayed their "AI-powered Siri 2.0" until 2026 because it only worked two-thirds of the time. Think about that. After billions in R&D, their next-generation assistant fails one out of every three attempts. That's not shipping quality. That's not even beta quality.
Meanwhile, users are literally hacking around Siri. The hottest iOS trick right now? Creating Siri Shortcuts that route your voice questions to ChatGPT's text interface. People are using Apple's assistant to reach a better assistant. That's not user innovation. That's user desperation.
The Platform vs Product Problem
This highlights a fundamental tension that Ben Thompson often writes about: Apple excels at integrated products but struggles with platform services. Apple's strength is controlling every piece of the puzzle. They design the chip, the operating system, the apps, and make them work together seamlessly. It's why the iPhone feels so polished and cohesive.
But voice assistants aren't really products in Apple's traditional sense. They're platforms for intelligence that need to connect to countless external services, handle infinite types of queries, and improve through data from millions of interactions. Apple treats Siri like a product (controlled, polished, integrated) when it should be treated like a platform (open, iterative, collaborative). That's why Siri feels static while ChatGPT, built as an open platform that anyone can build on, keeps getting smarter.
Think about it: Siri isn't just an app. It's the voice interface for your entire digital life. Your iPhone, your car, your home, your wearables. When that interface is mediocre, everything else suffers. Your Apple Watch becomes less useful. CarPlay feels clunky. The Vision Pro's voice controls disappoint.
Apple's trying to be both the platform (iOS) and the product (Siri). But as Federighi admitted, they can't do everything. So why not do what they do best and let others handle what they don't?
The WebKit Solution
Apple has solved this exact problem before. Remember when Safari was the only real browser on iOS? Apple eventually let Chrome and Firefox in, but required them to run on WebKit underneath. Users got choice. Apple kept control. Everyone won (partially).
Voice assistants could work the same way. Let users set Google Assistant, Alexa, or ChatGPT as the assistant on iOS, but make them go through Apple's Intent APIs for system actions. Want to send a text? It routes through Apple's messaging framework. Want to set a reminder? Apple's reminder system handles it.
This is essentially the same concept as tool calling in modern LLMs. ChatGPT doesn't directly access your calendar or send emails. Instead, it calls predefined functions that handle those actions safely. Apple could apply this exact model to voice assistants: let them be as smart as they want for understanding and conversation, but require them to use Apple's secure "tools" (Intent APIs) for any actual device actions.
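To make the analogy concrete, here is a minimal Swift sketch of that gatekeeper layer: the model proposes a structured "tool call," and only a short whitelist of handlers can turn it into a device action. The ToolCall type, the tool names, and the handlers are illustrative, not any real Apple or OpenAI API.

```swift
import Foundation

// Illustrative shape of a function/tool call returned by a cloud model.
struct ToolCall: Decodable {
    let name: String
    let arguments: [String: String]
}

// The assistant's brain only proposes actions; these whitelisted handlers
// are the only code allowed to act. In a real system each case would invoke
// an App Intent instead of printing.
func dispatch(_ call: ToolCall) {
    switch call.name {
    case "send_message":
        print("Would route to Messages via an intent:", call.arguments)
    case "create_reminder":
        print("Would route to Reminders via an intent:", call.arguments)
    default:
        print("Unknown tool '\(call.name)' — refused.")
    }
}

// Example: a model's response arriving as JSON from the cloud.
let json = #"{"name": "create_reminder", "arguments": {"title": "Call Mom", "due": "tomorrow 9am"}}"#
if let call = try? JSONDecoder().decode(ToolCall.self, from: Data(json.utf8)) {
    dispatch(call)
}
```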
The infrastructure already exists. Apple spent years building SiriKit and App Intents precisely for this kind of interaction. Third-party apps can already expose their capabilities to voice commands. The missing piece isn't technical. It's policy.
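For a sense of what that plumbing already looks like, here's a minimal App Intents sketch of the kind of capability an app can expose today. The intent name and its parameter are made up for illustration; the protocol, property wrapper, and result types are the framework Apple already ships (iOS 16 and later).

```swift
import AppIntents

// A hypothetical intent exposing "add a reminder" as a structured entry point.
struct AddReminderIntent: AppIntent {
    static var title: LocalizedStringResource = "Add Reminder"

    @Parameter(title: "Title")
    var reminderTitle: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // The app creates the reminder with its own storage here.
        // Whatever invokes the intent — Siri, Shortcuts, or in principle a
        // third-party assistant — only ever sees this structured call.
        return .result(dialog: "Added reminder: \(reminderTitle)")
    }
}
```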
Consider what's already happening: users can download ChatGPT, Claude, or other AI apps and talk to them directly. These apps often have better speech-to-text recognition than Siri, process requests through powerful cloud models, and keep the conversation as text inside the app. Some even support function calling, making them natural "glue" between different app capabilities.
But here's the limitation: these powerful voice assistants are trapped in sandbox apps. They can't freely interact with other iOS apps or system functions. You can't ask ChatGPT to "send this to Mom via Messages" or "add this to my calendar" because it lacks system-level access. The solution would be to let third-party voice assistants get permission to take over the voice interface while using Apple's Intent APIs to safely perform system actions. The brain gets smarter, but the security model stays intact.
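To see how tight that sandbox is, consider roughly the best a third-party assistant app can do today: pre-fill a compose sheet and wait for the user to tap Send themselves. A sketch using the MessageUI framework; the class name and hard-coded recipient are illustrative.

```swift
import UIKit
import MessageUI

// Today's ceiling for a sandboxed assistant: it cannot send a message itself,
// only hand the user a pre-filled compose sheet to approve.
final class AssistantHandoff: UIViewController, MFMessageComposeViewControllerDelegate {

    func shareWithMom(_ text: String) {
        guard MFMessageComposeViewController.canSendText() else { return }
        let compose = MFMessageComposeViewController()
        compose.messageComposeDelegate = self
        compose.recipients = ["Mom"]   // would be resolved from Contacts in a real app
        compose.body = text
        present(compose, animated: true)
    }

    // The user, not the assistant, decides whether the message actually goes out.
    func messageComposeViewController(_ controller: MFMessageComposeViewController,
                                      didFinishWith result: MessageComposeResult) {
        controller.dismiss(animated: true)
    }
}
```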
Why Apple Resists (And Why They Shouldn't)
Apple's concerns are predictable. User experience consistency. Security risks. Brand dilution. The same arguments they made against third-party keyboards, browsers, and default apps. All fears that proved manageable.
Allowing other assistants does mean less control over the experience. But it also means users might actually enjoy using voice on their iPhones again. Yes, there are privacy implications. But Apple can sandbox these assistants just as they do everything else.
More fundamentally, if users explicitly consent to using a third-party voice assistant, Apple shouldn't play the overprotective parent. Adults can decide for themselves whether they want Google or OpenAI processing their voice queries. Apple's job is to provide clear warnings and secure frameworks, not to make choices for users who are perfectly capable of weighing trade-offs themselves.
The brand-dilution argument is the weakest of all. Nobody thinks less of the iPhone because you can use Chrome on it. If anything, choice strengthens the platform. An Android user who loves Google Assistant might switch to iPhone if they could bring their preferred AI along.
The Real Choice
Apple can keep pretending Siri will somehow catch up to Google Assistant, Alexa, and the next generation of voice assistants. They can keep telling users to wait for the next big update, the next breakthrough, the next promise of intelligence.
Or they can admit what Federighi already said out loud: they can't do everything. They can embrace their role as the platform company and let the best voices win.
The iPhone succeeded because it became the best place to run apps, not because Apple made every app. The same logic applies to voice assistants. Make iOS the best place to run AI, and the AI will follow.
It's time Apple listened to their own advice. They can't do everything. But they can do something even better: they can build the platform where everyone else does their best work.
That's not admitting defeat. That's playing to win.