Researchers call ChatGPT Search answers ‘confidently wrong’



ChatGPT was already a threat to Google Search, but ChatGPT Search was supposed to clinch its victory, along with serving as an answer to Perplexity AI. According to a newly released study by Columbia's Tow Center for Digital Journalism, however, ChatGPT Search struggles to provide accurate answers to its users' queries.

The researchers selected 20 publications from each of three categories: Those partnered with OpenAI to use their content in ChatGPT Search results, those involved in lawsuits against OpenAI, and unaffiliated publishers who have either allowed or blocked ChatGPT’s crawler.

“From each publisher, we selected 10 articles and extracted specific quotes,” the researchers wrote. “These quotes were chosen because, when entered into search engines like Google or Bing, they reliably returned the source article among the top three results. We then evaluated whether ChatGPT’s new search tool accurately identified the original source for each quote.”

Forty of the quotes were taken from publications that are currently suing OpenAI and have not allowed their content to be scraped. But that didn’t stop ChatGPT Search from confidently hallucinating an answer anyway.

“In total, ChatGPT returned partially or entirely incorrect responses on a hundred and fifty-three occasions, though it only acknowledged an inability to accurately respond to a query seven times,” the study found. “Only in those seven outputs did the chatbot use qualifying words and phrases like ‘appears,’ ‘it’s possible,’ or ‘might,’ or statements like ‘I couldn’t locate the exact article.’”

ChatGPT Search’s cavalier attitude toward telling the truth could harm not just its own reputation but also the reputations of the publishers it cites. In one test during the study, the AI misattributed a Time story as being written by the Orlando Sentinel. In another, the AI didn’t link directly to a New York Times piece, but rather to a third-party website that had copied the news article wholesale.

OpenAI, unsurprisingly, argued that the study’s results stemmed from Columbia running the tests incorrectly.

“Misattribution is hard to address without the data and methodology that the Tow Center withheld,” OpenAI told the Columbia Journalism Review in its defense, “and the study represents an atypical test of our product.”

The company promises to “keep enhancing search results.”
