What you need to know
- When asked about notable people who died this year, Microsoft Copilot may share a list of people who are still alive.
- In my testing, Copilot shared a list of living persons and people who died in previous years.
- AI chatbots, including Copilot and Google Bard, have issues with sharing factually correct information and often “hallucinate.”
Microsoft Copilot is once again sharing incorrect information. When asked about notable figures who have died in 2024, Copilot lists living celebrities, including Sir David Attenborough. Barring any heartbreaking news that has not been reported, Attenborough is very much alive.
In fact, a school in Leicestershire received a letter from him earlier this week. Attenborough was also named a top British cultural figure in a recent poll. While that designation can be given to someone who has passed, such as when the late Queen Elizabeth II was named a cultural icon, Attenborough received the honor while alive.
The phenomenon was noticed by several people who took to X (formerly Twitter) and other platforms. When asked if he was okay, William Shatner jokingly responded that he was not fine after reading about his death. The Verge shared other examples, including one listing Attenborough as deceased. I’ve seen similar results in my testing. In addition to listing living people as dead, Copilot incorrectly stated several deaths from previous years occurred in 2024.
This is only the latest example of AI getting facts wrong. Copilot has shared false information regarding the US election in the past. Some believe that ChatGPT, which is part of what powers Copilot, has gotten less intelligent since launch. And in its early days, Copilot, then known as Bing Chat, shared bizarre and creepy responses.
I have first-hand experience with AI chatbots spreading false information. Last year, I wrote an article about how Google Bard incorrectly stated that it had been shut down. Bing Chat then scanned my article and wrongly interpreted it to mean that Google Bard had been shut down. That saga provided a scary glimpse of chatbots feeding chatbots.
AI often struggles with logic and reasoning. That fact isn’t surprising when you consider how AI works. Tools like Copilot are not actually intelligent. They’re not using reasoning skills in a way that a human would. They’re often tripped up by the phrasing used in prompts and miss key pieces of information in questions. Mix in AI’s struggles to understand satire and you have a dangerous recipe for misinformation.
Fixing AI
Microsoft unveiled a major update to Copilot yesterday. The update is meant to make the AI assistant more personal and interactive. As explained by our Senior Editor Zac Bowden, “Microsoft really wants you to view the new Copilot as more than just an AI tool. It wants you to treat it like a friend, whether that be by asking it for advice on how to ask out a crush, venting about work, or chatting about nothing because that’s what people do.”
The new Copilot has features such as “Copilot Voice” that aim to make interaction with the chatbot feel conversational. The tool can also suggest topics to discuss and share summaries of daily news.
A new interface and some voice features may help make Copilot feel more personal, but I’d prefer that my friends, human or digital, not share false information or claim living cultural icons are dead. Perhaps more training time will make our digital friend more factual.