Google Search Revamps Content Removal and Ranking Policies to Tackle Explicit Deepfake Media

Once an explicit deepfake is removed, Google’s algorithms will block related explicit content from appearing in future searches

Google Search has updated its removal processes and ranking systems to combat non-consensual explicit imagery, commonly referred to as deepfakes. These deepfakes, generated using artificial intelligence (AI), have become a growing concern as cybercriminals target individuals, including celebrities and influencers, with fake explicit content.

On Wednesday, Google introduced new measures aimed at swiftly removing these explicit deepfakes from its search results and demoting websites that host such harmful material.

Google Search Takes Action Against Explicit Deepfakes

In a blog post, Google highlighted its increased focus on addressing the rise in deepfake content. The company’s new strategy includes a more streamlined process for individuals to request the removal of non-consensual explicit deepfakes, as well as improvements in Google’s search algorithms. By demoting sites that contain deepfake content, the search giant aims to make it more difficult for bad actors to distribute these materials.

Google’s response comes as deepfake technology becomes more accessible, and its potential for misuse grows. AI-generated images and videos can convincingly depict people in fabricated explicit situations, which can have damaging consequences for the victims.

“These efforts are designed to give people added peace of mind, especially if they’re concerned about similar content about them popping up in the future,” Google said.

The tech giant has also updated its ranking systems. When a search query seeks explicit deepfakes of a specific person, Google will aim to surface high-quality, non-explicit content instead. The post highlighted that this technique can reduce exposure to fake explicit content by as much as 70 percent. Rather than non-consensual fake images and videos, these users will now see results about how deepfakes are impacting society.