It is safe to say that the big technology players have been caught off guard by the popularity of generative AI. ChatGPT quickly became a household name and even got a South Park episode dedicated to it, and AI artwork from DALL-E, Stable Diffusion, or Midjourney is flooding the web. Tom Hanks warned of an AI-generated fake of him promoting a dental plan he has nothing to do with.
Technology heavy-hitters like Microsoft, Amazon, and Google have been rushing to integrate generative AI into their products and services, and Apple is no exception. In the latest edition of Mark Gurman’s Power On newsletter for Bloomberg, he offers insight into how Apple has been working hard, in secret of course, to add big AI features to all its major products and services.
Apple and AI
Apple already uses AI throughout its products and services. It’s a fundamental part of how the iPhone (like every modern smartphone) takes photos and videos, and it’s what identifies people, pets, and thousands of different objects in your photos. Features from Siri to crash detection, text selection in images, facial recognition, and sleep tracking all rely on machine learning and artificial intelligence.
But most of this is not generative AI. In iOS 17, Apple switched to new transformer models for autocorrect, dictation, and speech recognition. The keyboard can now suggest several upcoming words, sometimes completing whole sentences. It’s a huge upgrade, and perhaps the first use in a shipping Apple product of the transformer models upon which so much generative AI is based.
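Apple hasn’t published details of its models, but the underlying technique is the same next-word prediction that powers most generative AI. As a rough illustration only (using the open GPT-2 model, not anything from Apple), here is how a transformer can both rank likely next words and run ahead to complete a sentence:

```python
# Illustrative sketch only: Apple's model is not public, so this uses the open
# GPT-2 model from Hugging Face to show the same basic idea -- a transformer
# scoring likely next words and completing a sentence.
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "I'll meet you at the"
inputs = tokenizer(prompt, return_tensors="pt")

# Rank a few likely next words, the way a keyboard suggestion bar might.
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]
top_ids = torch.topk(logits, 3).indices
print([tokenizer.decode(i).strip() for i in top_ids])

# Or let the model run several tokens ahead to finish the sentence.
completion = model.generate(
    **inputs, max_new_tokens=8, do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(completion[0], skip_special_tokens=True))
```

The same prediction machinery, scaled up, is what lets chatbots generate whole paragraphs rather than a few suggested words.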
Tim Cook has said that Apple has been working on generative AI technology for years, but so have other tech giants. As Gurman says, “I can tell you in no uncertain terms that Apple executives were caught off guard by the industry’s sudden AI fever and have been scrambling since late last year to make up for lost time.” Microsoft rushed out products by partnering with OpenAI and others, Google pushed some of its own technology out the door, and Apple… Apple has improved autocorrect.
Apple goes big on AI in 2024
According to Gurman’s report, there is now a whole-of-Apple effort to bring big AI features (generative and otherwise) to Apple’s releases in 2024. That means you can expect improved AI capabilities not only in iOS 18 and macOS 15 but also in many of Apple’s apps and services.
Apple has had its own large language model (Ajax) for over a year and has used it to build an internal chatbot similar to ChatGPT, which some have taken to calling “AppleGPT.” While Apple isn’t expected to directly release a ChatGPT competitor, its Ajax technology may be used in a number of other products. Other generative AI features are being examined, too.
Senior VP John Giannandrea is in charge of AI at Apple, and together with Craig Federighi (senior VP in charge of software engineering) he is spearheading the effort to bring AI to as many of Apple’s products as possible in the iOS 18 timeframe in 2024. Apple is expected to spend about $1 billion on R&D and product development for its AI efforts over the next year.
What does that mean for you? We don’t have details yet, but Giannandrea and his team are expected to produce a new version of Siri that deeply integrates Apple’s new AI technology.
Gurman says there’s an edict to fill iOS 18 with features running on the company’s large language model. Expect a Messages app that can produce better suggested replies, for example. Developers may get a new version of Xcode that helps automatically generate code, similar to GitHub’s Copilot feature. Apple Music could get an AI-generated playlist maker or DJ. And the iWork suite (Pages, Numbers, Keynote) may get generative AI tools built in, similar to features Microsoft is building into Word and PowerPoint.
Of course, the sky is the limit with generative AI. Adobe has already built it into Photoshop to expand photos beyond their original framing, remove objects, or add new ones. The same could apply to your photos on an iPhone. Generative AI could produce background audio tracks for iMovie and Final Cut, or royalty-free intro/outro music for podcasts in GarageBand.
There is some discussion within Apple, Gurman says, about whether certain features should run on-device or in the cloud on Apple’s servers. On-device processing protects privacy, but the largest and most advanced AI models require more powerful server hardware. And on-device models are harder to update, retrain, and redeploy than server-based ones. Gurman says he expects Apple to take both approaches, with some features processed on-device and others in the cloud.
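To make the trade-off concrete, here is a purely hypothetical sketch of how a hybrid split can work; none of these names or rules come from Apple, it simply illustrates the idea of routing private or lightweight requests to a small local model and heavyweight requests to a larger server-side one:

```python
# Hypothetical illustration of a hybrid on-device/cloud AI pipeline.
# Nothing here reflects Apple's actual architecture; the routing rule and
# function names are invented for the sake of the example.
from dataclasses import dataclass

@dataclass
class Request:
    text: str
    contains_personal_data: bool  # e.g., message contents or health data

def run_on_device(req: Request) -> str:
    # A small local model: private and works offline, but less capable.
    return f"[on-device model] quick reply to: {req.text[:40]}"

def run_in_cloud(req: Request) -> str:
    # A large server-side model: more capable and easier to update,
    # but the request has to leave the device.
    return f"[cloud model] detailed answer to: {req.text[:40]}"

def handle(req: Request) -> str:
    # Keep privacy-sensitive or simple requests local; send heavy work to servers.
    if req.contains_personal_data or len(req.text) < 200:
        return run_on_device(req)
    return run_in_cloud(req)

print(handle(Request("Suggest a reply to Mom's text", contains_personal_data=True)))
print(handle(Request("Summarize this long shared document. " * 20, contains_personal_data=False)))
```

Whatever routing rules Apple actually settles on, the point Gurman makes is that both paths will likely coexist rather than one replacing the other.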