Conversational Siri has to be spectacularly good if we won’t get it until 2027


One of the odder aspects of Apple’s history is that the company has gone in 14 years from one of the leaders in intelligent assistants to one of the biggest laggards.

We’ve gone from the futuristic-feeling Siri way back in 2011 to a painfully inadequate-feeling Apple Intelligence in 2025…

14 years of Siri

I can still remember the keynote presentation for the iPhone 4S, where Siri was the reason I had to rush out and buy one.

Apple didn’t create Siri: it debuted as a third-party iPhone app. But it was Steve Jobs’ decision to buy the tech and integrate it into the iPhone that brought it into the public eye and made an intelligent assistant such an integral part of what we now expect from a smartphone.

Fourteen years should have given Apple plenty of time to turn Siri into a phenomenally powerful assistant, and yet … that hasn’t happened. Back in 2015, I outlined the future capabilities I’d like to see, including giving it the ability to interact with our apps. It’s taken a full decade for Apple to even begin providing this feature!

What’s even wilder to me is that in 2018 I created a list of really basic things Siri still couldn’t do, and it still can’t do several of these things today!

Today, Siri looks like a Lada, while ChatGPT and its peers look like a Mercedes.

Apple does, however, have a couple of excuses for failing to match the performance of today’s LLMs.

Reliability is one factor

First, reliability.

OpenAI, Google, and others took the “move fast and break things” approach to AI. When ChatGPT and Google Bard were still new, I highlighted their ability to get things spectacularly wrong. For example, when asked to help write a scientific paper, ChatGPT simply made up non-existent references. Google’s Bard even gave the wrong answer to a question during a live demo of how clever it was.

I made the point then that Siri’s spoken responses made this sort of error even more dangerous.

If there’s one thing more dangerous than not knowing something important, it’s asking for the information and being given the wrong answer with great confidence.

When a Google search shows you conventional results next to a chat window answering the same question, it’s very easy for the company to include prominent warnings that the chat answer may not be accurate.

But Siri is designed to provide spoken answers to verbal questions. Even more annoying than Siri ‘answering’ a question with “Here’s what I found on the web” would be “Here’s a lengthy answer which you first have to listen to, after which I’ll note that it may not be correct and recommend that you search the web.”

Privacy is another

The other huge factor here is privacy. Siri has always operated according to two fundamental privacy-respecting principles:

  • On-device processing wherever possible
  • Anonymized requests when Apple servers are needed

On-device processing can never be as smart as processing done by powerful machines in data centers, and ensuring that a Siri server doesn’t know who you are also limits how smart it can be compared with a Google server that can mine your search history to learn a million things about you.

I argued last year that while the wait for LLM Siri is frustrating, the privacy payoff will be worth it in the long run.

A massive amount of the personal data needed to allow Siri to be a truly intelligent and helpful assistant is stored right there on our own devices, in Apple Calendar, Contacts, Files, Health, Mail, Maps, Messages, Wallet, and so on. We will also have the option to grant Siri access to the specific third-party apps we choose, again right there on our own devices.

Once Siri is able to access those apps, it can finally become every bit as powerful as competing systems – while still protecting our privacy.
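For context on what that app access actually looks like under the hood, the mechanism Apple has shipped so far is its App Intents framework, which lets an app declare the actions Siri and Shortcuts are allowed to invoke on the device. Below is a minimal, purely illustrative Swift sketch – the journaling scenario, the CreateJournalEntryIntent name, and the JournalStore helper are hypothetical, not taken from any real app:

```swift
import AppIntents

// Hypothetical in-app storage, modeled as an actor so it can be called
// safely from the async perform() method below.
actor JournalStore {
    static let shared = JournalStore()
    private var entries: [String] = []

    func add(entry: String) {
        entries.append(entry)
    }
}

// A minimal App Intent: the app tells the system "here's an action Siri
// and Shortcuts may invoke on the user's behalf". The action runs on device,
// inside the app's own code.
struct CreateJournalEntryIntent: AppIntent {
    static var title: LocalizedStringResource = "Create Journal Entry"

    // If the user doesn't supply the text up front, Siri can ask for it.
    @Parameter(title: "Entry text")
    var text: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        await JournalStore.shared.add(entry: text)
        return .result(dialog: "Added your entry to the journal.")
    }
}
```

Intents like this are discovered by the system when the app is installed, and the action itself runs locally inside the app – exactly the kind of on-device, permission-based access I’m hoping LLM Siri builds on.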

Long-term, that’s the AI future I want: an assistant which knows a great deal about my life so it can act like a human PA, but only on my devices with my permission. That’s the LLM Siri Apple is building, and while I’d love to have those capabilities right now, I do think the wait will be worth it.

We’re now expecting an even longer delay

I wrote that when conversational Siri was expected to launch in early 2026, which already felt like a long wait. But the latest word from Bloomberg is that the launch has now been pushed back to 2027 – and maybe even later!

Gurman says that employees within Apple’s AI division now believe that a more conversational version of Siri won’t reach consumers until iOS 20 ‘at best’.

That really ups expectations

Two things are universally true about anything for which we have to wait a long time. One, the wait is painful, and feels like forever. Two, once it’s over, the joy of obtaining it means we soon forget the pain of the delay.

Fast-forward to a future when conversational Siri is on our phones: if it truly lives up to expectations and finally delivers the level of smarts we really want, this will be true. We’ll roll our eyes about how long it took Apple to get there, but all will be forgiven.

But that length of delay dramatically increases expectations of what 2027 Siri needs to be. Just think about what ChatGPT and Claude and Gemini and Llama and DeepSeek will be able to do by then!

Think about Amazon’s new conversational Alexa, and what that will be capable of with another two years of development, using all of the data the company has amassed about the requests people are making of it.

Siri will no longer be judged against the capabilities of today’s chatbots; it will be judged against the ones we’ll have two years from now. That’s going to be a phenomenally high bar, and Apple really needs to reach it.

Image: Michael Bower/9to5Mac
