Apple’s decision to delay the roll out of the revamped Siri assistant features has led to plenty of speculation over why the company isn’t yet able to deliver the Apple Intelligence-based upgrade.
While Apple itself admitted its ‘taking longer than we thought‘ to get the smarter and more contextually aware Siri ready for prime time, other observers have different theories on why the feature designed for the best iPhone models is experiencing development setbacks.
Bloomberg’s Mark Gurman recently reported the company is struggling with consistency for these features, which include the ability for apps to take action on the user’s behalf in what’s being called “agentic” AI.
The iPhone 16 has £100 off for a limited time
Amazon has just taken £100 off the iPhone 16, making it an affordable upgrade for those who want one of the latest Apple phones at a reasonable price.
- Amazon
- Was £799
- Now just £699
However one developer thinks, current reliability aside, the key issue Apple is grappling with is the security vulnerabilities inherent to the new technology.
Developer Simon Willison (via 9to5Mac) reckons the internal concern may be over something called prompt injection attacks, which are an issue that occurs when malicious actors are able to trick an AI that the request is coming from a user.
On his website Willison wrote: “These new Apple Intelligence features involve Siri responding to requests to access information in applications and then performing actions on the user’s behalf.
“This is the worst possible combination for prompt injection attacks! Any time an LLM-based system has access to private data, tools it can call, and exposure to potentially malicious instructions (like emails and text messages from untrusted strangers) there’s a significant risk that an attacker might subvert those tools and use them to damage or exfiltrating a user’s data.”
Unfortunately, this isn’t something Apple will easily be able to get around because, at present, large language models are inherently susceptible to these kind of attacks. As Daring Fireball’s John Gruber puts it, Apple would have to make “groundbreaking” developments to overcome this issue.
He wrote: “They have to solve a vexing problem — as yet unsolved by OpenAI, Google, or any other leading AI lab — to deliver what they’ve already promised.”
The stakes are too high
Apple continually stakes its reputation on being a privacy first company and has developed trust among the user base for that reason. We saw very recently that Apple is not all talk on this front, pulling an encryption feature in the UK because the government had demanded a back door into it.
A buggy release for this new technology would be largely tolerated by users. However, something that compromises their online security would have huge, huge consequences.
It’s a tough spot for Apple because it cannot afford to be left behind in the AI race, but that may mean taking chances it is not comfortable with.