Google To Flag AI-Generated Images In Search
In a Rush? Here are the Quick Facts!
- Google is rolling out a tool to flag AI-generated and AI-edited images in Search results
- Google joined the C2PA to help create standards that trace the origins of digital content
- The system relies on the C2PA standard, whose adoption among companies and tools remains limited
Google announced today that it plans to roll out changes to Google Search to make it clearer which images in results were generated or edited using AI tools.
The tech giant is leveraging provenance technology to identify and label such images, aiming to give users more transparency and curb the spread of misinformation.
Google explained that provenance technology can determine whether a photo was captured with a camera, altered by software, or created entirely by generative AI.
This information will be made available to users through the “About this image” feature, providing them with more context and helping them make informed decisions about the content they consume.
To bolster its efforts, Google joined the Coalition for Content Provenance and Authenticity (C2PA) as a steering committee member earlier this year. The C2PA has been working to develop standards for tracing the history of digital content, including images and videos.
However, TechCrunch (TC) notes that only images with “C2PA metadata” will be flagged as AI-manipulated in Google Search.
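In practice, a C2PA manifest is structured metadata embedded in the image file itself; in JPEGs it is carried in APP11 (JUMBF) segments, as a JUMBF superbox labeled "c2pa". The sketch below is a rough byte-level presence test under that assumption, not a spec-compliant parser: it only checks whether the characteristic box identifiers appear in the file, while real tooling such as the open-source c2patool also parses the manifest and verifies its cryptographic signatures.

```python
import sys

def has_c2pa_manifest(path: str) -> bool:
    """Rough presence test for a C2PA manifest in an image file.

    In JPEGs, C2PA manifests live in APP11 (JUMBF) segments: a JUMBF
    superbox (type 'jumb') whose description box is labeled 'c2pa'.
    Scanning for those byte strings detects the container but does NOT
    parse or cryptographically verify the manifest.
    """
    with open(path, "rb") as f:
        data = f.read()
    return b"jumb" in data and b"c2pa" in data

if __name__ == "__main__":
    for path in sys.argv[1:]:
        found = "found" if has_c2pa_manifest(path) else "not found"
        print(f"{path}: C2PA manifest {found}")
```

Because the check is purely byte-level, a stripped or re-encoded copy of the same image would simply come back "not found", which is exactly the limitation TC describes.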
Although companies like Google, Amazon, Microsoft, OpenAI, and Adobe support C2PA, the standard has not been widely adopted, as reported by TC.
As noted by The Verge, only a limited number of generative AI tools and cameras, such as those from Leica and Sony, support C2PA specifications.
Additionally, TC notes that C2PA metadata, like any form of metadata, can be removed, damaged, or rendered unreadable. Many popular AI tools, such as Flux, which powers xAI’s Grok chatbot, don’t attach C2PA metadata, partly because their developers haven’t adopted the standard.
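To illustrate how fragile metadata-based labeling is, simply re-encoding an image with default settings rewrites the file from pixel data alone and drops ancillary segments, including any C2PA manifest. A minimal sketch using Pillow, with a hypothetical input file named signed_photo.jpg:

```python
from PIL import Image  # pip install pillow

# Re-encoding with default options rewrites the JPEG from pixel data,
# dropping ancillary segments such as EXIF and the APP11/JUMBF
# segments that carry a C2PA manifest.
# "signed_photo.jpg" is a hypothetical input file for illustration.
img = Image.open("signed_photo.jpg")
img.save("stripped_photo.jpg")  # saved copy no longer carries the manifest
```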
While this initiative shows promise in combating harmful deepfake content, its success hinges on widespread adoption of the C2PA provenance standard by camera manufacturers and generative AI developers.
However, even with C2PA in place, malicious actors can still remove or manipulate an image’s metadata, potentially undermining Google’s ability to accurately detect AI-generated content.