Meta, the parent company of Facebook and Instagram, has announced a new policy governing political advertisements on its platforms. Starting in the new year, political advertisers will have to disclose any AI-generated images their ads use. The policy will roll out worldwide, though Meta has not given an exact launch date, and is intended to increase transparency in political campaigning.
In a similar vein, Microsoft announced a plan that includes a digital watermarking tool for campaign ads, designed to validate an ad's origin and guard against unauthorized alterations.
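Microsoft has not detailed the tool's internals, but the general idea behind cryptographic provenance is straightforward: sign a hash of the media so that any later alteration breaks verification. The sketch below is a minimal illustration of that principle, not Microsoft's actual implementation; the function names `sign_ad` and `verify_ad` are hypothetical.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def sign_ad(media: bytes, key: ed25519.Ed25519PrivateKey) -> bytes:
    """Sign the SHA-256 digest of the ad's media bytes."""
    return key.sign(hashlib.sha256(media).digest())

def verify_ad(media: bytes, sig: bytes, pub: ed25519.Ed25519PublicKey) -> bool:
    """Return True only if the media is unchanged since it was signed."""
    try:
        pub.verify(sig, hashlib.sha256(media).digest())
        return True
    except InvalidSignature:
        return False

# A campaign signs its ad; anyone holding the public key can check both
# the ad's origin and that it hasn't been tampered with since signing.
key = ed25519.Ed25519PrivateKey.generate()
ad = b"...campaign ad media bytes..."
sig = sign_ad(ad, key)
print(verify_ad(ad, sig, key.public_key()))            # True: origin verified
print(verify_ad(ad + b"edit", sig, key.public_key()))  # False: altered after signing
```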
The use of AI has raised concerns about the potential for creating and spreading misinformation through lifelike synthetic media, including deepfakes. Such technology poses a risk of deceiving voters on an unprecedented scale, prompting criticism of tech companies for not sufficiently countering this threat.
Meta's announcement, which coincided with a House hearing on deepfakes, introduces measures to help users identify AI-generated content, though it may not fully address these broader concerns. In the U.S., regulatory efforts are also underway: the Federal Election Commission is considering rules for AI in political ads, and President Biden's recent executive order promotes responsible AI development.
Congress is also weighing legislation: Rep. Yvette Clarke has advocated mandatory AI labels on political ads and criminal penalties for unlabeled deepfakes that incite violence or depict sexual content. Clarke commended Meta's and Microsoft's efforts but underscored the need for stronger legislative safeguards against AI-fueled disinformation.
Experts such as AI developer Vince Lynch suggest a combined approach of federal regulation and voluntary tech-company policies to protect the public from AI-generated disinformation.
Meta's policy will require disclosure for any ad featuring realistic AI-created or AI-altered images of people or events. Minor edits, such as resizing or sharpening an image, won't require disclosure. The line between a cosmetic edit and a substantive alteration leaves a gray area that could be exploited, as the sketch below illustrates.
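To make that gray area concrete, here is a hypothetical rendering of the rule as described. The edit categories are invented for illustration; Meta has not published such a taxonomy, and the entire policy question reduces to how the exempt set is drawn.

```python
# Hypothetical illustration of the disclosure rule described above.
MINOR_EDITS = {"resize", "crop", "sharpen", "color_correct"}  # exempt, cosmetic edits

def requires_disclosure(ai_edits: set[str]) -> bool:
    """Disclosure is needed if any AI edit falls outside the exempt set."""
    return bool(ai_edits - MINOR_EDITS)

print(requires_disclosure({"resize", "sharpen"}))    # False: cosmetic edits only
print(requires_disclosure({"face_swap", "resize"}))  # True: realistic alteration
```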
Non-compliant content will be removed, and ad details will be available in Facebook’s ad library.
Google has introduced a similar AI labeling policy covering political ads on YouTube and its other platforms. These industry measures come against a backdrop of warnings from Microsoft that foreign nations, particularly Russia, Iran, and China, are leveraging AI to disrupt elections, activity Microsoft says it has observed since at least July 2023.
Overall, the collective push from Meta, Microsoft, lawmakers, and the Biden administration reflects a growing recognition of the need to balance AI innovation with safeguards to uphold the integrity of democratic processes.