Google DeepMind Introduces Watermarking Tool to Detect AI-Generated Text


Key Takeaways:
  • Google DeepMind has developed a tool called SynthID-Text that marks AI-generated text by embedding an invisible statistical watermark.
  • The watermark does not affect the quality of the generated text and can be detected with specialized software, helping to identify fake or misleading content.
  • Though not foolproof, the technology could improve transparency in AI use, making it easier to distinguish human writing from AI-generated text.

The rise of artificial intelligence is now a reality, whether we are ready for it or not. The launch of ChatGPT marked a major shift in the tech industry and led to the widespread adoption of generative AI across many fields. A significant problem has emerged, however: the technology is frequently exploited for harmful purposes, and even its developers seem unable to stop it.

From the moment ChatGPT was opened to the public, it has been used to produce fake news, political propaganda, phishing scams, and spam. Such AI-generated text has become one of the biggest challenges facing the modern internet, and the problem is growing as the output becomes more human-like and harder to detect.

Google now claims to have a solution. Its AI research division, DeepMind, has published a study in Nature explaining how a ‘watermark’ for text can help identify AI-generated content and, in turn, expose false information.

The concept of watermarking text isn’t new, but it has not yet been applied to major language models such as ChatGPT, Gemini, and other text-generating AIs. The obstacles have been limited detection accuracy and the extra computing power required to embed the marks. DeepMind’s engineers, however, believe they have finally arrived at a workable solution.

The tool, called SynthID-Text, is based on a new algorithm that applies watermarks to AI-generated content. Rather than stamping anything visible onto the final text, the algorithm introduces a subtle ‘bias’ into the words the AI chooses. To the reader, the text appears normal and still fulfills the request, so legitimate uses of AI remain unaffected. A specialized program, however, can measure this bias and determine whether the text was generated by an AI.
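The study’s exact algorithm is not detailed in this article, but the general idea of biasing word choice can be illustrated with a simplified ‘green list’ scheme. The sketch below is a hypothetical illustration of that family of techniques, not DeepMind’s actual method: a secret key and the previous token seed a pseudo-random subset of the vocabulary, and tokens in that subset receive a small score boost before sampling. All names here (VOCAB, SECRET_KEY, the bias values) are invented for the example.

```python
import hashlib
import math
import random

# Toy vocabulary and key; both are hypothetical stand-ins for illustration.
VOCAB = ["the", "a", "quick", "swift", "fox", "dog", "jumps", "leaps",
         "over", "under", "lazy", "brown"]
SECRET_KEY = "demo-watermark-key"
GREEN_FRACTION = 0.5   # share of the vocabulary favored at each step
BIAS = 2.0             # logit boost given to favored ("green") tokens

def green_list(prev_token: str) -> set[str]:
    """Derive a pseudo-random 'green' subset of the vocabulary,
    seeded by the secret key and the previous token."""
    seed = hashlib.sha256(f"{SECRET_KEY}|{prev_token}".encode()).hexdigest()
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def sample_next(prev_token: str, logits: dict[str, float]) -> str:
    """Sample the next token after nudging 'green' logits upward.
    The nudge is small, so fluent candidates still dominate."""
    greens = green_list(prev_token)
    scores = {t: s + (BIAS if t in greens else 0.0) for t, s in logits.items()}
    total = sum(math.exp(s) for s in scores.values())
    r, acc = random.random(), 0.0
    for token, s in scores.items():
        acc += math.exp(s) / total
        if acc >= r:
            return token
    return token  # numerical edge case: fall back to the last candidate
```

Because the boost only reweights plausible candidates rather than forcing particular words, the output still reads naturally, which is the property the article describes.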

DeepMind explains that by deliberately steering the AI toward specific words during text generation, it can embed a ‘signature’ in the output, which a separate program can later recover by analyzing the word choices. The signature can be implemented in two ways: a distortionary variant that slightly alters text quality, and a non-distortionary variant that preserves the original quality.
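Detection follows the same statistical logic. Continuing the hypothetical sketch above: unwatermarked text should fall in the green list at roughly the chance rate (GREEN_FRACTION), while watermarked text lands there far more often, so a simple z-test can separate the two.

```python
def detect(tokens: list[str], threshold_z: float = 4.0) -> tuple[float, bool]:
    """Count how often each token falls in the green list derived from its
    predecessor, then z-test against the chance rate (GREEN_FRACTION)."""
    n = len(tokens) - 1  # number of scored transitions
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    z = (hits - expected) / std if std > 0 else 0.0
    return z, z > threshold_z  # a high z-score suggests a watermark

# Example: score a (toy) token sequence.
z, flagged = detect(["the", "swift", "fox", "leaps", "over", "the", "lazy", "dog"])
print(f"z = {z:.2f}, watermarked: {flagged}")
```

Longer texts provide more transitions to score, which is why short snippets are inherently harder to classify with confidence.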

In tests on publicly available language models, SynthID-Text detected AI-generated text more accurately than existing methods. The researchers also analyzed 20 million responses from Google’s Gemini model and found that the non-distortionary watermarking method did not affect the quality of the AI’s responses. Another key advantage of SynthID-Text is its low computational cost, described in the study as “negligible”, which removes a financial barrier that often discourages companies from adopting watermarking solutions.

However, the technology is not yet foolproof. The researchers caution that the watermark can be weakened or removed simply by editing the generated text. Even so, DeepMind remains optimistic, saying the study demonstrates the potential of watermarking tools to improve transparency and promote the responsible use of AI-generated content.
