Meta calls for an industry initiative to label AI-generated content

Last month at the World Economic Forum in Davos, Switzerland, Nick Clegg, Meta's head of global affairs, called the nascent effort to detect artificially generated content “the most urgent task” facing the tech industry today.

On Tuesday, Mr. Clegg proposed a solution. Meta said it would promote technical standards that companies across the industry could use to recognize markers in photo, video and audio material signaling that the content was generated using artificial intelligence.

The standards would allow social media companies to quickly identify AI-generated content posted on their sites and add a label to it. If widely adopted, the standards could help identify AI-generated content from companies like Google, OpenAI, Microsoft, Adobe and Midjourney, which offer tools that let people create synthetic content quickly and easily.

“Even though it's not a perfect answer, we did not want to let the perfect be the enemy of the good,” Mr. Clegg said in an interview.

He added that he hoped the initiative would serve as a rallying cry for companies across the industry to adopt standards for detecting and signaling that content is artificial.

As the United States enters a presidential election year, industry observers expect AI tools to be widely used to spread fake content and misinform voters. In the past year, people have already used AI to create and spread fake videos of President Biden making false or inflammatory statements. The attorney general's office in New Hampshire is investigating a series of robocalls that appeared to use an AI-generated voice of Mr. Biden urging people not to vote in a recent primary.

Senators Brian Schatz, Democrat of Hawaii, and John Kennedy, Republican of Louisiana, proposed legislation last October that would require companies to disclose and label artificially generated content and to work together to develop or use standards like those Meta is backing.

Meta, which owns Facebook, Instagram, WhatsApp and Messenger, is in a unique position: it is building AI tools to drive wider consumer adoption of the technology, while also operating the world's largest social networks, through which AI-generated content can spread. Mr. Clegg said that position gave Meta particular insight into both the generation and the distribution sides of the problem.

Meta is relying on a set of technical specifications known as the IPTC and C2PA standards, which place information in a piece of content's metadata indicating whether the digital media is authentic. Metadata is the underlying information embedded in digital content that provides a technical description of it. Both standards are already widely used by news organizations and photographers to describe photos and videos.
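For AI-generated material, the IPTC vocabulary defines a “digital source type” value, trainedAlgorithmicMedia, that generation tools can write into a file's metadata. As a rough illustration of how such a marker might be detected (a minimal sketch, not how Meta or the standards bodies actually implement it; real C2PA provenance data is carried in cryptographically signed manifests that require dedicated SDKs to verify), a script can scan a file for an embedded XMP metadata packet declaring that value:

```python
import re
import sys

# IPTC's "digital source type" vocabulary includes this term for media
# produced by a generative AI model. Tools that embed it are declaring
# the content to be artificially generated.
AI_SOURCE_TYPE = (
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def looks_ai_generated(path: str) -> bool:
    """Naively scan a media file for an embedded XMP metadata packet
    that declares the IPTC trainedAlgorithmicMedia source type.

    This is a heuristic sketch only: it does not parse the XMP XML and
    cannot verify C2PA manifests, which are signed structures that
    need a proper SDK to validate.
    """
    with open(path, "rb") as f:
        data = f.read()
    # XMP packets are plain XML embedded in the file between standard
    # begin/end processing instructions.
    packet = re.search(rb"<\?xpacket begin=.*?<\?xpacket end[^>]*\?>",
                       data, re.DOTALL)
    if packet is None:
        return False  # no XMP metadata; provenance is simply unknown
    return AI_SOURCE_TYPE in packet.group(0)

if __name__ == "__main__":
    for path in sys.argv[1:]:
        verdict = ("declares AI generation" if looks_ai_generated(path)
                   else "no AI marker found")
        print(f"{path}: {verdict}")
```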

Adobe, which makes the Photoshop editing software, and other technology and media companies have spent years persuading their peers to adopt the C2PA standard and have formed the Content Authenticity Initiative. The initiative is a partnership among dozens of organizations, including The New York Times, to combat misinformation and “add a layer of tamper-evident provenance to all types of digital content, starting with photos, video and documents,” according to the initiative.

Companies that provide AI generation tools can add the markers defined by the standards to the metadata of the videos, photos or audio files their tools help create. That would signal to social networks such as Facebook, X (formerly Twitter) and YouTube that such content is artificial when it is uploaded to their sites. Those platforms, in turn, can add labels noting that the posts were AI-generated, to inform users who see them.
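On the platform side, that flow amounts to checking the marker, or the uploader's own declaration, at upload time and attaching a label to the post record. A minimal sketch, using a hypothetical Post type and reusing the looks_ai_generated helper from the snippet above:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    # Hypothetical, stripped-down post record; real platforms track
    # far more state than this.
    author: str
    media_path: str
    labels: list[str] = field(default_factory=list)

def ingest(post: Post, author_declared_ai: bool) -> Post:
    """Attach an "AI-generated" label if the uploader declared the
    content as AI-made or its metadata carries an AI-provenance marker."""
    if author_declared_ai or looks_ai_generated(post.media_path):
        post.labels.append("AI-generated")
    return post
```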

Meta and other companies will also require users who post AI-generated content to declare whether they have done so when uploading it to the companies' apps. Failing to do so carries penalties, although the companies have not detailed what those penalties will be.

Mr. Clegg said such labels would give viewers information and context about a post's source.

AI technology is advancing rapidly, and researchers have raced to keep up by developing tools that can spot fake content online. Although companies like Meta, TikTok and OpenAI have developed ways to detect such content, technologists have quickly found ways to circumvent those tools. Artificially generated video and audio have proved more difficult to detect than photos.

(The New York Times is suing OpenAI and Microsoft for copyright infringement for using Times articles to train artificial intelligence systems.)

“Bad actors will always try to circumvent any standard we create,” Mr. Clegg said. He described the technology as both a “sword and shield” for the industry.

Part of that difficulty stems from the fragmented way tech companies are approaching the problem. Last fall, TikTok announced a policy requiring its users to add labels to videos or photos they upload that were created using AI. YouTube announced a similar initiative in November.

Meta's new proposal would try to tie some of those efforts together. Other industry efforts, like the Partnership on AI, have brought together dozens of companies to discuss similar solutions.

Mr. Clegg also said he hoped more companies would agree to participate in the standard, especially heading into the presidential election.

“We felt particularly strongly that during this election year, waiting for all the pieces of the jigsaw puzzle to fall into place before acting would not be justified,” he said.
