Google will label AI-generated images in search results
Google will soon start recognizing when content in search and ad results is generated by AI — if you know where to look.
In a September 17 blog post, the tech giant announced that in the coming months, metadata in Search, images and ads will indicate whether an image was taken with a camera, edited in Photoshop or created with AI. Google joins other tech companies, including Adobe, in labeling AI-generated images.
What are C2PA and content credentials?
The AI watermarking standards were created by the Coalition for Content Provenance and Authenticity (C2PA), a standards body that Google joined in February. C2PA was founded by Adobe and the nonprofit Joint Development Foundation to develop a standard for tracing the provenance of online content. C2PA’s most significant project to date has been its AI labeling standard, Content Credentials.
Google helped develop version 2.1 of the C2PA standard, which the company says increases protection against tampering.
SEE: OpenAI said in February that its photorealistic Sora AI videos would include C2PA metadata, though Sora isn’t yet available to the public.
Amazon, Meta, OpenAI, Sony and other organizations are part of C2PA’s steering committee.
“Content credentials could act as a digital nutrition label for all types of content — and be a foundation for rebuilding trust and transparency online,” Andy Parsons, senior director of the Content Authenticity Initiative at Adobe, wrote in a press release in October 2023.
‘About this image’ will display C2PA metadata in Circle to Search and Google Lens
Google is adopting C2PA’s labeling standard faster than most online platforms, but the rollout is limited at first. The “About this image” feature, which allows users to view the metadata, appears only in Google Images, Circle to Search and Google Lens on compatible Android devices, and users must open the menu manually to see it.
For Google Search ads, “our goal is to enhance this [C2PA watermarking] over time and use C2PA signals to communicate how we enforce key policies,” Laurie Richardson, Google’s vice president of trust and safety, wrote in a blog post.
Google also plans to include C2PA information on YouTube videos captured with a camera, with more details to come later this year.
Correct AI image attribution is vital for business
Businesses should make employees aware of how widespread AI-generated images have become and train them to verify an image’s source. Doing so helps prevent the spread of misinformation and avoids potential legal trouble if an employee uses an image they aren’t authorized to use.
Using AI-generated images in business can muddy the waters around copyright and attribution, since it can be difficult to determine how an AI model was trained. AI images can also be subtly inaccurate: if a customer expects a specific detail, any mistake can undermine trust in your organization or product.
C2PA should be used in accordance with your organization’s generative AI policy.
C2PA is not the only way to identify AI-generated content. Visible watermarking and perceptual hashing — also known as fingerprinting — are sometimes offered as alternatives. In addition, artists can use data-poisoning tools such as Nightshade to confuse generative AI, discouraging AI models from being trained on their work. Google has launched its own watermarking and detection tool, SynthID, which is currently in beta.
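To illustrate how fingerprinting differs from embedded metadata like Content Credentials, here is a minimal sketch of an “average hash,” a common perceptual-hashing technique. Production tools (for example, the open-source imagehash library) first shrink the image to a tiny grayscale grid; the hand-built 8×8 grid below is a stand-in so the sketch needs no image libraries.

```python
def average_hash(pixels: list[list[int]]) -> int:
    """Hash an 8x8 grid of 0-255 grayscale values into a 64-bit fingerprint.

    Each bit is 1 if the corresponding pixel is brighter than the grid's mean,
    so small edits flip only a few bits instead of changing the whole hash.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for value in flat:
        bits = (bits << 1) | (1 if value > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance means near-duplicate images."""
    return bin(a ^ b).count("1")


# Two nearly identical 8x8 "images": the second has one brightened pixel.
img1 = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
img2 = [row[:] for row in img1]
img2[0][0] = 255

h1, h2 = average_hash(img1), average_hash(img2)
print(hamming_distance(h1, h2))  # small distance -> near-duplicate
```

Unlike C2PA metadata, which an editor can strip from a file, a fingerprint is recomputed from the pixels themselves, which is why the two approaches are often treated as complementary.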