Reddit users ‘psychologically manipulated’ by unauthorized AI experiment


Trigger warning for brief references to sexual assault.

It’s been discovered that millions of Reddit users were deceived and “psychologically manipulated” by an unauthorized AI experiment performed by researchers from the University of Zurich …

The university secretly used AI bots to post in the highly popular Change My View subreddit, with large language models taking on a variety of personas, including a rape victim and a trauma counselor.

The university disclosed the deception to moderators after it had taken place, acknowledging that researchers broke the rules of the subreddit.

“Over the past few months, we used multiple accounts to post on CMV. Our experiment assessed LLMs’ persuasiveness in an ethical scenario, where people ask for arguments against views they hold. In commenting, we did not disclose that an AI was used to write comments, as this would have rendered the study unfeasible. While we did not write any comments ourselves, we manually reviewed each comment posted to ensure they were not harmful.

We recognize that our experiment broke the community rules against AI-generated comments and apologize. We believe, however, that given the high societal importance of this topic, it was crucial to conduct a study of this kind, even if it meant disobeying the rules.”

The AIs took on some extremely provocative identities:

  • AI pretending to be a victim of rape
  • AI acting as a trauma counselor specializing in abuse
  • AI accusing members of a religious group of “caus[ing] the deaths of hundreds of innocent traders and farmers and villagers.”
  • AI posing as a black man opposed to Black Lives Matter
  • AI posing as a person who received substandard care in a foreign hospital

CMV moderators say that the study was a serious ethical violation.

If OpenAI can create a more ethical research design when doing this, these researchers should be expected to do the same. Psychological manipulation risks posed by LLMs are an extensively studied topic. It is not necessary to experiment on non-consenting human subjects.

AI was used to target OPs in personal ways that they did not sign up for, compiling as much data on identifying features as possible by scrubbing the Reddit platform.

Data the researchers attempted to compile on CMV users included gender, age, ethnicity, location, and political orientation.

CMV moderators filed a formal complaint with the university’s ethics commission, which responded by stating that it had issued a formal warning to the lead researcher and would strengthen its prior review of proposed studies – but said that publication of the paper would go ahead.

This project yields important insights, and the risks (e.g. trauma etc.) are minimal. This means that suppressing publication is not proportionate to the importance of the insights the study yields.

9to5Mac’s Take

Subreddit users are understandably outraged by the deception and the decision to proceed with publication.

The university cannot, on the one hand, warn researchers about unethical behavior and promise to prevent a similar thing happening again, while at the same time permitting the hugely unethical paper to be published. The only meaningful consequence would be to bar publication, ensuring that other researchers will not want to risk all of their time and work being wasted on similar studies.
