Apple Intelligence managed to create a piece of Luigi Mangione fake news last week, thanks to its notification summary feature. It somehow decided that the suspect in the killing of UnitedHealthcare CEO Brian Thompson had shot himself.
The mistake, in itself, is not surprising: AI systems make this kind of error all the time. What is rather more surprising is that Apple allowed it to happen when it could have been easily avoided …
AI mistakes can be amusing or dangerous
Today’s generative AI systems can often deliver impressive results, but they of course aren’t actually intelligent – and that has led them to make some pretty spectacular mistakes.
Many of these are amusing. There was the McDonald’s drive-through AI system which kept adding chicken nuggets to a customer’s order until it hit a total of 260; Google’s AI Overviews citing a geologist’s recommendation to eat one rock per day, and suggesting glue to help cheese stick to pizza; and Microsoft recommending a food bank as a tourist destination.
But there have been dangerous examples too. An AI-written book on mushroom foraging recommended tasting mushrooms as a way to identify poisonous ones; mapping apps directed people into wildfires; and Boeing’s MCAS flight-control system caused two airliners to crash, killing 346 people.
Or they can be simply embarrassing
The Apple Intelligence summary of a BBC News story was neither amusing nor dangerous, but it was certainly embarrassing. As the BBC reported:
Apple Intelligence, launched in the UK earlier this week, uses artificial intelligence (AI) to summarise and group together notifications. This week, the AI-powered summary falsely made it appear BBC News had published an article claiming Luigi Mangione, the man arrested following the murder of healthcare insurance CEO Brian Thompson in New York, had shot himself. He has not.
It wasn’t the first time we’ve seen this – a previous Apple Intelligence notification summary claimed that Israeli Prime Minister Benjamin Netanyahu had been arrested, when the actual story was the International Criminal Court issuing a warrant for his arrest.
Mangione fake news was avoidable
It’s impossible to avoid all these errors; it’s simply in the nature of generative AI systems to make them.
This is all the more true in the case of Apple’s notification summaries of news headlines. Headlines are, by their very nature, extremely partial summaries of a story. Apple Intelligence is attempting to further condense an already highly condensed version of a news story: a very brief summary of a very brief summary. It’s not at all surprising that this sometimes goes badly wrong.
While Apple can’t prevent this in general, it could at least prevent it happening on particularly sensitive stories. It could trap keywords like killing, killed, shooter, shooting, death, and so on, and flag any summary containing them for human review before it is pushed – a sketch of what such a trap might look like is below.
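To make the idea concrete, here is a minimal sketch of such a keyword trap in Swift. Everything here is illustrative: the keyword list, the SensitiveTopicFilter and SummaryReview names, and the review workflow are assumptions for the example, not anything Apple has documented.

```swift
import Foundation

// Illustrative sketch only: the keyword list, type names, and review
// workflow are assumptions for this example, not Apple's actual system.

enum SummaryReview {
    case publishImmediately
    case holdForHumanReview(matchedKeywords: [String])
}

struct SensitiveTopicFilter {
    // Keywords suggesting a story is too sensitive to push unreviewed.
    let sensitiveKeywords: Set<String> = [
        "killing", "killed", "shooter", "shooting", "shot", "death"
    ]

    func review(_ summary: String) -> SummaryReview {
        // Lowercase and split on non-letter characters so punctuation
        // ("shooting," or "death.") doesn't hide a match.
        let words = summary.lowercased()
            .components(separatedBy: CharacterSet.letters.inverted)

        let matches = words.filter { sensitiveKeywords.contains($0) }
        return matches.isEmpty
            ? .publishImmediately
            : .holdForHumanReview(matchedKeywords: matches)
    }
}

// The summary that caused the trouble would have been held back:
let filter = SensitiveTopicFilter()
switch filter.review("Luigi Mangione shot himself") {
case .publishImmediately:
    print("Safe to push")
case .holdForHumanReview(let keywords):
    print("Hold for human review; matched:", keywords)
}
```

A production system would presumably need stemming or a maintained phrase list rather than exact word matches, but even something this crude would have caught the Mangione headline before it reached anyone’s lock screen.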
In this particular case, the error was merely embarrassing, but it’s not hard to see how a mistake on a sensitive topic like this could make a lot of people very angry. Imagine a summary which appears to blame the victims of a violent crime or disaster, for example.
Of course, human review would be an additional task for the Apple News team, but covering three eight-hour shifts a day, every day of the week, would take roughly half a dozen employees. That seems a rather small expense on Apple’s part to prevent what could be a major PR disaster for the still-fledgling feature.
Photo by Jorge Franganillo on Unsplash