Meta Introduces New AI Deepfake Strategy: More Labels, Fewer Takedowns
Meta has announced changes to its rules for AI-generated content and manipulated media following criticism from its Oversight Board. Starting next month, the company will label more of this content, including applying a “Made with AI” badge to deepfakes. It will also add contextual information when content has been manipulated in ways that pose a high risk of misleading the public on important matters.
The change could mean more content ends up labeled, which is especially relevant given the many elections scheduled worldwide this year. For deepfakes, however, Meta will apply labels only when the content carries “industry standard AI image indicators” or when the uploader discloses that it is AI-generated.
AI-generated content that falls outside those parameters may escape labeling altogether. As a result, Meta’s policy shift is likely to leave more AI-generated and manipulated media up on its platforms: the company is pivoting toward transparency and added context, rather than removal, as its preferred way of handling such content, citing the free-speech risks that takedowns carry.
For AI-generated or manipulated media on Meta platforms like Facebook and Instagram, the new approach appears to favor more labeling and fewer takedowns.
Meta also said that by July it will stop removing content solely on the basis of its current manipulated-video policy. The timeline gives users time to learn the self-disclosure process before the company stops taking down this smaller subset of manipulated media.
The shift may be a response to mounting legal demands on Meta around content moderation and systemic risk, notably the European Union’s Digital Services Act. Since last August, that EU law has applied to Meta’s main social networks, requiring the company to balance removing illegal content, mitigating systemic risks, and protecting free speech. The EU is also putting extra pressure on platforms ahead of the European Parliament elections in June, including urging tech giants to watermark deepfakes where technically feasible.
The upcoming U.S. presidential election in November is likely another factor influencing Meta’s decision-making.