OpenAI has an AI text detector but doesn’t want to release it

OpenAI has developed new tools to detect content generated by ChatGPT and its other AI models, but it isn't deploying them just yet. The company has devised a way to embed a kind of watermark in AI-produced text. This hidden indicator could reveal when AI has written a piece of content. However, OpenAI is hesitant to ship it as a feature while it might harm people using its models for benign purposes.

OpenAI’s new method would employ algorithms capable of embedding subtle markers in text generated by ChatGPT. Though imperceptible to a human reader, the markers would take the form of specific patterns of words and phrases that signal the text originated from ChatGPT. As OpenAI points out, such a tool could be a boon for the generative AI industry: watermarking could play a critical role in combating misinformation, ensuring transparency in content creation, and preserving the integrity of digital communications. It also mirrors a tactic OpenAI already uses for its AI-generated images. The DALL-E 3 text-to-image model produces visuals with metadata declaring their AI origin, including invisible digital watermarks designed to survive attempts to strip them out through editing.
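OpenAI has not published how its text watermark works, but a common approach in the research literature is a "green list" scheme: the generator is nudged toward a pseudorandomly chosen subset of words, and a detector with the secret key checks whether a suspiciously high fraction of the text falls in that subset. The sketch below is purely illustrative of that general idea (the `salt` key, the `green_fraction` function, and the 50/50 split are all assumptions, not OpenAI's actual method):

```python
import hashlib

def green_fraction(words, salt="demo-secret"):
    """Fraction of words landing in a pseudorandom 'green list'.

    Hypothetical illustration: each word is hashed with a secret salt,
    and roughly half of all words count as 'green'. A watermarked
    generator would prefer green words, so its output scores well
    above the ~0.5 expected of ordinary human-written text.
    """
    if not words:
        return 0.0
    green = 0
    for word in words:
        digest = hashlib.sha256((salt + word.lower()).encode()).digest()
        if digest[0] % 2 == 0:  # pseudorandom coin flip per word
            green += 1
    return green / len(words)

# A detector would flag text whose green fraction is statistically
# implausible for unwatermarked writing.
score = green_fraction("the quick brown fox jumps over the lazy dog".split())
print(f"green fraction: {score:.2f}")
```

Because only the key holder can recompute the green list, readers cannot see the watermark, which matches the article's point that the markers are imperceptible to a human reader.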
