Google Made an AI Detector, but You Can’t Use It


Summary

  • AI detectors are currently unreliable due to false positives and negatives.
  • Google’s SynthID Detector scans for digital watermarks in AI-generated content.
  • SynthID Detector isn’t a 100% reliable silver bullet, but it’s probably as reliable as AI detection currently gets.

AI detectors, while potentially useful, are currently a mess. Tools like ZeroGPT produce plenty of false positives and false negatives. Still, the need for a genuinely reliable AI detector is real, and that’s why efforts continue to be made in that direction. Google has its own, though you can’t use it yet.

Google has just announced SynthID Detector, a new verification portal designed to identify content created using its artificial intelligence tools. It works by scanning uploaded media for an imperceptible digital watermark, also called SynthID. Google has been developing this watermarking technology to embed directly into content generated by its AI models, including Gemini (text and multimodal), Imagen (images), Lyria (audio), and Veo (video). According to the company, over 10 billion pieces of content have already been watermarked using this system. SynthID Detector, then, is a Google-made tool that looks for that watermark and tells you whether something is AI-generated.


When you upload a file—be it an image, audio track, video, or text document—to the SynthID Detector portal, it scans for this embedded watermark. If the watermark is present, the portal will indicate that the content is likely AI-generated and, in some cases, highlight specific portions where the watermark is most prominently detected. For example, in audio files it can point out segments containing the watermark, and in images it can indicate areas where the digital signature is most likely present.
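Statistical text watermarks of this general family work by using a secret key to nudge a model’s word choices, so that a detector holding the same key can check whether that bias is present. Google hasn’t published the full details of SynthID, so the following is only a toy sketch of the general idea—a simplified keyed “green list” construction with a made-up key, not Google’s actual algorithm:

```python
import hashlib

KEY = b"demo-key"  # hypothetical shared watermarking key, for illustration only

def greenlisted(prev_word: str, word: str) -> bool:
    # A keyed hash of the context decides whether `word` is on the
    # "green list" for this position; roughly half of all words are.
    digest = hashlib.sha256(KEY + prev_word.encode() + word.encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    # Detection: measure what fraction of word transitions land on the
    # green list. Unwatermarked text should hover near 0.5; a generator
    # that preferentially picks green words pushes this measurably higher.
    words = text.lower().split()
    hits = sum(greenlisted(a, b) for a, b in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)
```

Because the signal is statistical, confidence grows with the amount of text examined—which is also why a detector can be “unsure” about short or heavily edited passages.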


What I still don’t love about this is that it seems to involve a lot of guesswork. The detector can be “unsure” about certain parts, which is not a good omen for a supposedly robust watermarking method that can withstand alterations and modifications. And just as it can be unsure about some portions, it could detect a watermark where there isn’t one, or fail to flag something AI-generated. I’d expect it to be more prone to false positives than false negatives, but false positives can still be a problem. I’m sure it will continue to be improved upon, though. A first-party tool like this might be the most reliable way right now to find out whether something was AI-generated, but I wouldn’t say there’s yet a 100% reliable, bulletproof method to catch them all.

The detector is currently rolling out to a small group of early testers, and that will be followed by a limited rollout to journalists, media professionals, and researchers via a waitlist.

Source: Google
