Google DeepMind Launches Open-Source Watermark Tool to Help Detect AI-Generated Text

In a Rush? Here are the Quick Facts!

  • Google DeepMind has launched SynthID-Text, a free, open-source text watermarking tool
  • SynthID technology can now detect AI-generated text, audio, video, and images
  • The research, with further technical details, was published in Nature

Google DeepMind launched an open-source watermarking tool called SynthID-Text this Wednesday to help detect AI-generated text. The tool is free for businesses and developers and works by embedding an invisible watermark, imperceptible to readers, into text as it is generated, subtly altering the probabilities of the words the model chooses.
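
The paper implements this probability nudging with a scheme it calls tournament sampling. The sketch below is not that algorithm; it is a deliberately simple keyed vocabulary-split watermark, included only to illustrate the general idea of biasing token choices with a secret key and later detecting that bias statistically. The vocabulary, key, and bias value are all illustrative.

```python
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]   # stand-in vocabulary
SECRET_KEY = "demo-watermark-key"          # illustrative secret key, not a real SynthID key

def greenlist(prev_token: str) -> set[str]:
    """Pseudo-randomly mark half the vocabulary as 'green', seeded by the
    secret key and the previous token, so the split changes with context."""
    seed = hashlib.sha256(f"{SECRET_KEY}:{prev_token}".encode()).hexdigest()
    return set(random.Random(seed).sample(VOCAB, k=len(VOCAB) // 2))

def sample_watermarked(prev_token: str, probs: dict[str, float], bias: float = 2.0) -> str:
    """Boost the probability of green tokens before sampling, leaving a
    statistical fingerprint without forcing a single fixed output."""
    green = greenlist(prev_token)
    weights = [probs[t] * (bias if t in green else 1.0) for t in probs]
    return random.choices(list(probs), weights=weights)[0]

def detection_score(tokens: list[str]) -> float:
    """Z-score for how far the observed green-token rate sits above the
    ~50% expected in unwatermarked text (assumes at least two tokens)."""
    n = len(tokens) - 1
    hits = sum(cur in greenlist(prev) for prev, cur in zip(tokens, tokens[1:]))
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)
```

The production scheme described in the paper is considerably more sophisticated, but the basic two-step shape is the same: bias the sampling at generation time, then score that bias at detection time.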

“Here we describe SynthID-Text, a production-ready text watermarking scheme that preserves text quality and enables high detection accuracy, with minimal latency overhead,” states the abstract of the research published in Nature. “To enable watermarking at scale, we develop an algorithm integrating watermarking with speculative sampling, an efficiency technique frequently used in production systems.”
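
Speculative sampling, the efficiency technique named in the abstract, lets a small draft model propose several tokens that the expensive target model then verifies in a single pass. The toy sketch below shows plain speculative sampling only, not DeepMind's watermark-integrated variant, and stands the two models in with fixed probability tables over a three-token vocabulary.

```python
# Toy speculative sampling: a cheap draft model proposes tokens, and an
# accept/reject step guarantees the output still follows the target model.
import random

VOCAB = ["a", "b", "c"]

def draft_dist(_context):
    """Cheap draft model: stands in for a small, fast LM."""
    return {"a": 0.6, "b": 0.3, "c": 0.1}

def target_dist(_context):
    """Expensive target model: the distribution we actually want to match."""
    return {"a": 0.5, "b": 0.2, "c": 0.3}

def sample(dist):
    return random.choices(list(dist), weights=list(dist.values()))[0]

def speculative_step(context, k=4):
    """Let the draft model propose up to k tokens; accept each with probability
    min(1, p_target / p_draft) so the result matches the target distribution."""
    out = list(context)
    for _ in range(k):
        q, p = draft_dist(out), target_dist(out)
        tok = sample(q)
        if random.random() < min(1.0, p[tok] / q[tok]):
            out.append(tok)                 # draft token accepted
        else:
            # Rejected: resample from the residual distribution max(0, p - q),
            # renormalized, then stop speculating for this step.
            residual = {t: max(0.0, p[t] - q[t]) for t in VOCAB}
            total = sum(residual.values())
            out.append(sample({t: v / total for t, v in residual.items()}))
            break
    return out

print(speculative_step(["a"]))
```

The appeal in production is that the large model checks a whole run of drafted tokens at once; per the abstract, the paper's contribution on this front is making the watermark work through that accept/reject step.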

According to MIT Technology Review, the tech giant’s AI research laboratory developed SynthID as a family of watermarking tools that can now recognize AI-generated text, music, video, and images. Google DeepMind shared a video explaining how the technology works across these types of media.

SynthID is available through Google’s Responsible Generative AI Toolkit, and the researchers are also working with Hugging Face, a collaborative platform for developers that hosts other open-source projects such as LeRobot’s tutorial for building AI-powered robots at home, to make the tool available on its platform as well.
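
For developers, the Hugging Face route looks roughly like the sketch below, which applies the watermark at generation time through the transformers library. The class and parameter names (SynthIDTextWatermarkingConfig, keys, ngram_len) and the google/gemma-2-2b checkpoint are assumptions based on the transformers documentation around this release, and the key values are arbitrary placeholders; the Responsible Generative AI Toolkit remains the authoritative reference.

```python
# A hedged sketch of applying SynthID-Text through Hugging Face transformers.
# Class/parameter names and the checkpoint are assumptions; keys are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer, SynthIDTextWatermarkingConfig

model_id = "google/gemma-2-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The watermarking keys act as the private seed; ngram_len controls how much
# preceding context feeds into each watermarking decision.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],
    ngram_len=5,
)

inputs = tokenizer("Write a short note about text watermarking.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,            # the watermark only biases sampled generation
    max_new_tokens=64,
    watermarking_config=watermarking_config,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Per the abstract, the watermark adds minimal latency overhead, since it only perturbs the sampling step.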

“Now, other [generative] AI developers will be able to use this technology to help them detect whether text outputs have come from their own [large language models], making it easier for more developers to build AI responsibly,” said Pushmeet Kohli, the vice president of research at Google DeepMind, to MIT Technology Review.

SynthID has already been tested in Google’s Gemini products, where millions of users were unable to tell watermarked and non-watermarked content apart. The researchers acknowledged that the watermark has limitations once text has been edited or translated, but they remain optimistic that the tool can help combat misinformation and improve AI safety.

Multiple tech companies have announced AI-labeling strategies over the past few months. Meta announced a system in February to identify AI content across Instagram, Facebook, and Threads; Google began requiring users to label AI content in March; and TikTok added labels to AI-generated content in May.
