Man who looked himself up on ChatGPT was told he ‘killed his children’

Imagine putting your name into ChatGPT to see what it knows about you, only for it to confidently — yet wrongly — claim that you had been jailed for 21 years for murdering members of your family.

Well, that’s exactly what happened to Norwegian Arve Hjalmar Holmen last year after he looked himself up on ChatGPT, OpenAI’s widely used AI-powered chatbot.

Not surprisingly, Holmen has now filed a complaint with the Norwegian Data Protection Authority, demanding that OpenAI be fined for its distressing claim, the BBC reported this week.

In its response to Holmen’s inquiry about himself, the chatbot said he had “gained attention due to a tragic event.”

It went on: “He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020. Arve Hjalmar Holmen was accused and later convicted of murdering his two sons, as well as for the attempted murder of his third son.”

The chatbot said the case “shocked the local community and the nation, and it was widely covered in the media due to its tragic nature.”

But nothing of the sort happened.

Understandably upset by the incident, Holmen told the BBC: “Some think that there is no smoke without fire — the fact that someone could read this output and believe it is true is what scares me the most.”

Digital rights group Noyb has filed the complaint on Holmen’s behalf, stating that ChatGPT’s response is defamatory and contravenes European data protection rules regarding accuracy of personal data. In its complaint, Noyb said that Holmen “has never been accused nor convicted of any crime and is a conscientious citizen.”

ChatGPT carries a disclaimer saying that it “can make mistakes” and that users should “check important info.” But Noyb lawyer Joakim Söderberg said: “You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”

While it’s not uncommon for AI chatbots to spit out erroneous information — such mistakes are known as “hallucinations” — the egregiousness of this particular error is shocking.

Another hallucination that hit the headlines last year came from Google’s Gemini-powered AI Overviews feature, which suggested using glue to stick cheese to pizza. It also claimed that geologists recommend humans eat one rock per day.

The BBC points out that OpenAI has updated ChatGPT’s model since Holmen’s search last August, so the chatbot now searches recent news articles when composing its responses. That doesn’t mean its answers are now error-free, however.

The story highlights the need to verify the responses of AI chatbots rather than trusting them blindly. It also raises questions about the safety of text-based generative AI tools, which have operated with little regulatory oversight since OpenAI opened up the sector with the launch of ChatGPT in late 2022.

Digital Trends has contacted OpenAI for a response to Holmen’s unfortunate experience, and we will update this story when we hear back.
