Some of the largest generative AI companies operating in the U.S. plan to watermark their content, a fact sheet from the White House revealed on Friday, July 21. Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI agreed to eight voluntary commitments around the use and oversight of generative AI, including watermarking. In September, eight more companies agreed to the voluntary standards: Adobe, Cohere, IBM, NVIDIA, Palantir, Salesforce, Scale AI and Stability AI.
The agreement follows a March statement from the White House about its concerns over the misuse of AI. It comes as regulators work out procedures for managing the effects generative artificial intelligence has had on technology, and on the ways people interact with it, since ChatGPT put AI content in the public eye in November 2022.
- What are the eight AI safety commitments?
- Government regulation of AI may discourage malicious actors
What are the eight AI safety commitments?
The eight AI safety commitments include:
- Internal and external security testing of AI systems before their release.
- Sharing information across the industry and with governments, civil society and academia on managing AI risks.
- Investing in cybersecurity and insider threat safeguards, specifically to protect model weights, which affect both bias and the associations an AI model draws between concepts.
- Encouraging third-party discovery and reporting of vulnerabilities in their AI systems.
- Publicly reporting all AI systems’ capabilities, limitations and areas of appropriate and inappropriate use.
- Prioritizing research on bias and privacy.
- Helping to use AI for beneficial purposes such as cancer research.
- Developing robust technical mechanisms for watermarking.
The watermark commitment requires generative AI companies to develop a way to mark text, audio or visual content as machine-generated; it will apply to any publicly available generative AI content created after the watermarking system is finalized. Because that system hasn’t been built yet, it will be some time before a standard way to tell whether content is AI-generated becomes publicly available.
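The fact sheet does not say how these watermarks will work, and no company’s scheme is public. For text, one widely discussed family of techniques is a statistical watermark: the generator nudges its word choices toward a pseudorandom “green list” keyed on the preceding token, and a detector flags text whose green-list hit rate is far above the roughly 50% expected by chance. The sketch below is purely illustrative, with made-up function names and a toy greedy generator, not any signatory’s actual mechanism.

```python
import hashlib

GREEN_FRACTION = 0.5  # assumed share of the vocabulary that is "green" at each step


def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign roughly half of all tokens to a green list
    keyed on the previous token (a toy stand-in for a seeded hash scheme)."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION


def green_ratio(tokens: list[str]) -> float:
    """Fraction of tokens that land on the green list for their context."""
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)


def looks_watermarked(text: str, threshold: float = 0.75) -> bool:
    """Flag text whose green ratio sits well above the ~0.5 chance level."""
    return green_ratio(text.split()) >= threshold


def watermark_sequence(vocab: list[str], length: int, seed_token: str = "start") -> list[str]:
    """Toy generator: greedily pick a green token from the vocabulary at
    every step, so the output carries a detectable statistical bias."""
    tokens = [seed_token]
    for _ in range(length):
        prev = tokens[-1]
        tokens.append(next((w for w in vocab if is_green(prev, w)), vocab[0]))
    return tokens
```

In this toy setup the watermarked sequence scores a green ratio near 1.0, while ordinary human-written text hovers near 0.5, which is why longer passages make detection more reliable. Real proposals bias a language model’s sampling probabilities rather than choosing greedily, and must also survive paraphrasing and editing, which is a far harder problem.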
Government regulation of AI may discourage malicious actors
Former Microsoft Azure global vice president and current Cognite chief product officer Moe Tanabian supports government regulation of generative AI. In a conversation with TechRepublic, he compared the current era of generative AI with the rise of social media, including downsides like the Cambridge Analytica data privacy scandal and misinformation during the 2016 U.S. election.
“There are a lot of opportunities for malicious actors to take advantage of [generative AI], and use it and misuse it, and they are doing it. So, I think, governments have to have some watermarking, some root of trust element that they need to instantiate and they need to define,” Tanabian said.
“For example, phones should be able to detect if malicious actors are using AI-generated voices to leave fraudulent voice messages,” he said.
“Technologically, we’re not disadvantaged. We know how to [detect AI-generated content],” Tanabian said. “Requiring the industry and putting in place those regulations so that there is a root of trust that we can authenticate this AI generated content is the key.”