OpenAI develops new tools to identify AI-generated content

What you need to know

  • OpenAI recently announced plans to develop new tools, including tamper-resistant watermarking, to help identify content generated with its AI models.
  • The ChatGPT maker is teaming up with Microsoft to launch a $2 million societal resilience fund to help drive the adoption and understanding of provenance standards.
  • Applications for early access to OpenAI’s image detection classifier are open to a first group of testers through its Researcher Access Program.

With the prevalence of sophisticated generative AI tools like Image Creator by Designer (formerly Bing Image Creator), Midjourney, and ChatGPT, it’s increasingly difficult to distinguish between real and AI-generated content. Major tech companies like OpenAI and Microsoft have made significant strides toward making it easier for users to identify AI-generated content.

OpenAI has started watermarking images generated with DALL-E 3 and ChatGPT, though the company admits watermarking is “not a silver bullet to address issues of provenance.” As the US presidential election approaches, AI deepfakes and misinformation continue to flood the internet.
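
The watermarking OpenAI applies to DALL-E 3 images takes the form of C2PA provenance metadata embedded in the file. As a rough illustration only (this is not OpenAI’s or the C2PA standard’s verification method, and the file name below is a hypothetical placeholder), the Python sketch scans an image’s raw bytes for the JUMBF/C2PA markers that content-credential tooling typically embeds:

from pathlib import Path

# Byte patterns that commonly appear in C2PA/JUMBF provenance manifests.
# Finding them is only a hint that metadata is present, not a cryptographic
# verification of who created the image.
C2PA_MARKERS = (b"c2pa", b"jumb", b"jumd")

def has_c2pa_markers(image_path: str) -> bool:
    """Return True if the file contains byte patterns typical of C2PA metadata."""
    data = Path(image_path).read_bytes()
    return any(marker in data for marker in C2PA_MARKERS)

if __name__ == "__main__":
    # "generated_image.png" is a placeholder path for illustration only.
    print(has_c2pa_markers("generated_image.png"))

Because this metadata is easily stripped when images are screenshotted or re-uploaded to platforms that discard it, a heuristic like this complements rather than replaces detection classifiers such as the one OpenAI is piloting.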