Did you see the Keynote?! Tim Cook said the thing! The “one more thing” thing!
Congratulations, Apple Keynote bingo players around the world.
Now, we could talk about Vision Pro for hours but let’s not. We have the whole rest of the year to do that. And is anyone taking bets that Apple’s definition of “early next year” means the first half of the year? Suffice it to say that the Macalope thought Vision Pro was intriguing, technologically impressive, very weird, somewhat off-putting, and, of course, expensive.
(If this is Vision Pro, wake him when Vision Amateur comes out.)
Instead, let’s talk about what Apple didn’t talk about. Apple talked about iPhones, Macs, iPads, Apple TVs, HomePods, the Apple Watch, operating systems, dystopian headsets, health, exercise, marching bands… so many things.
But there was no mention of AI.
This is in stark contrast to its competitors at their recent events. Google, for example, said “AI” over 140 times at I/O last month. That’s a lot of times. The Macalope couldn’t find a definitive count of the number of times Microsoft said “AI” at its Build conference but given the number of AI-related efforts it announced, let’s just say it’s no slouch, either.
Of course, Apple doesn’t really use that term very much. It talks about “machine learning” a lot, but not AI. And it did mention machine learning at yesterday’s keynote, but not in the context of the chatbot-style interactions that Google and Microsoft are currently obsessed with.
Look, the Macalope gets it. Nerds have wanted a sassy robot buddy to talk to ever since Robby the Robot first talked back to his remarkably sexist and handsy crew mates in Forbidden Planet in 1956. That’s actually about when the term “artificial intelligence” was coined. Which is too bad because it’s such an incredible misnomer. In an interview in the Financial Times, author and noted smart person Ted Chiang laments the term, saying it has caused a lot of confusion.
So if he had to invent a term, what would it be? His answer is instant: applied statistics.
That’s really what it is. That’s not to say it’s not impressive, but there’s no active “intelligence” behind these large language models, which return the statistically likeliest answer to a question based on an analysis of a corpus of input. Honestly, no interpretation of the term “intelligence” really applies to them. They’re certainly not sentient and, considering the number of times they respond with information that is incorrect, they’re not that smart, either. Claiming that 1 kilogram of steel and 2 kilograms of steel weigh the same might be staking a bold position, but it’s not “intelligent.”
Unless it’s some sort of evil intelligence. Which the Center for AI Safety kind of seems to be worried about.
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Way to bring the room down, experts. Tell you what, since “AI” seems to be both inaccurate and a source of confusion, why don’t we just rename it “Torment Nexus”? At least the U.S. Air Force official who said an AI had killed its human operator has since walked back that story as only being a “thought experiment.”
So far as we know, the Torment Nexus hasn’t killed anyone. Yet. Today. As of… whatever time it is currently in your particular time zone.
The Macalope has little doubt that somewhere someone is loudly banging away at a keyboard with the hot take that Apple’s failure to say “AI” at least 100 times at its WWDC Keynote is just another sign that it’s falling further behind the competition.
For his part, the horny one might argue that not only is it too early to be shipping these artificial ding-dongs with a theoretical taste for human blood, but it might also be too early to be shipping the Vision Pro.
Well, to be fair, that’s not shipping for some time. Quite possibly almost a year.