Google will soon start identifying when content in search and ad results is generated by AI — if you know where to look.

In a Sept. 17 blog post, the tech giant announced that, in the coming months, metadata in search, images, and ads will indicate whether an image was photographed with a camera, edited in Photoshop, or created with AI. Google joins other tech companies, including Adobe, in labeling AI-generated images.

What are the C2PA and Content Credentials?

The AI watermarking standard was created by the Coalition for Content Provenance and Authenticity (C2PA), a standards body that Google joined in February. C2PA was co-founded by Adobe and the nonprofit Joint Development Foundation to develop a standard for tracing the provenance of online content. C2PA’s most significant project so far is its AI labeling standard, Content Credentials.

Google helped develop version 2.1 of the C2PA standard, which, the company says, has enhanced protections against tampering.

SEE: OpenAI said in February that its photorealistic Sora AI videos would include C2PA metadata, but Sora is not yet available to the public.

Amazon, Meta, OpenAI, Sony, and other organizations sit on C2PA’s steering committee.

“Content Credentials can act as a digital nutrition label for all kinds of content — and a foundation for rebuilding trust and transparency online,” wrote Andy Parsons, senior director of the Content Authenticity Initiative at Adobe, in a press release in October 2023.

‘About this image’ to display C2PA metadata on Circle to Search and Google Lens

C2PA has rolled out its labeling standard faster than most online platforms have adopted it. Google’s implementation is limited for now: the “About this image” feature, which lets users view the metadata, appears only in Google Images, Circle to Search, and Google Lens on compatible Android devices, and users must manually open a menu to see it.
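Under the hood, the C2PA specification stores its manifest in a JUMBF container, which in JPEG files is carried in APP11 marker segments. As a rough, stdlib-only illustration of where this metadata physically lives (not the "About this image" implementation, and no substitute for a real C2PA SDK, which must also validate the manifest's signatures), here is a minimal Python sketch that walks a JPEG's marker segments and reports the offsets of any APP11 segments:

```python
import struct

def find_app11_segments(jpeg_bytes):
    """Scan a JPEG's marker segments and return the byte offsets of
    APP11 (0xFFEB) segments, where C2PA embeds its JUMBF manifest store.
    Presence of APP11 only hints at provenance metadata; full C2PA
    validation requires parsing and verifying the manifest itself."""
    offsets = []
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # not at a marker; malformed or unexpected data
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded image data follows
            break
        # segment length is big-endian and includes its own two bytes
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker == 0xEB:  # APP11
            offsets.append(i)
        i += 2 + length  # advance past marker bytes plus segment
    return offsets
```

A file with no APP11 segments returns an empty list; a hit means the image may carry a Content Credentials manifest worth inspecting with a proper verifier.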

In Google Search ads, “Our goal is to ramp this [C2PA watermarking] up over time and use C2PA signals to inform how we enforce key policies,” wrote Google Vice President of Trust and Safety Laurie Richardson in the blog post.

C2PA created this Content Credentials badge as a universal icon for image attestation. Image: C2PA

Google also plans to include C2PA information on YouTube videos captured with a camera, with more details to come later this year.

Correct AI image attribution is important for business

Businesses should ensure employees are aware of the spread of AI-generated images and train them to verify an image’s provenance. This helps prevent the spread of misinformation and avoids possible legal trouble if an employee uses images they are not authorized to use.

Using AI-generated images in business can muddy the waters around copyright and attribution, as it can be difficult to determine how an AI model has been trained. AI images can sometimes be subtly inaccurate. If a customer is seeking a specific detail, any mistake could reduce trust in your organization or product.

C2PA should be used in accordance with your organization’s generative AI policy.

C2PA isn’t the only way to identify AI-generated content. Visible watermarking and perceptual hashing — or fingerprinting — are sometimes floated as alternatives. Furthermore, artists can use data poisoning filters, such as Nightshade, to confuse generative AI and prevent models from being trained on their work. Google launched its own AI watermarking and detection tool, SynthID, which is currently in beta.

Subscribe to the Google Weekly Newsletter

Learn how to get the most out of Google Docs, Google Cloud Platform, Google Apps, Chrome OS, and all the other Google products used in business environments. Delivered Fridays
