
The Oversight Board for Meta, the company formerly known as Facebook, is tasked with evaluating how the company handles deepfake pornography, a growing concern as artificial intelligence makes it easier to create fake, explicit images for harassment. The board is focusing on how Meta addressed two instances of AI-generated images of female public figures, one from the United States and one from India, to determine whether the company has appropriate policies in place to combat this issue and whether those policies are being enforced consistently worldwide.

The rise of AI-generated pornography has become a significant issue, with notable individuals like Taylor Swift and women around the world falling victim to this form of online abuse. Generative AI tools have made it faster, easier, and cheaper to create fake images, and social media platforms play a role in spreading these images rapidly. Deepfake pornography is increasingly being used to target, silence, and intimidate women both online and offline, making it a growing cause of gender-based harassment.

The Oversight Board, consisting of experts in areas like freedom of expression and human rights, functions as a kind of Supreme Court for Meta, allowing users to appeal content decisions on the company’s platforms. As part of its review, the board will assess the handling of AI-generated nude images resembling public figures from India and the US that were shared on Instagram and Facebook, respectively. In one instance, a user reported a pornographic image, but Instagram failed to review it within 48 hours, prompting automatic closure of the report. The Oversight Board’s intervention led to the image’s removal for violating bullying and harassment rules.

In another case, an AI-generated image of a nude woman being groped, made to resemble an American public figure, was posted to a Facebook group. The image was removed for violating bullying and harassment rules after being escalated to policy experts, and it was added to a photo-matching bank to automatically detect reposts of rule-breaking images. The board is seeking public comments on deepfake pornography for this review, including its harmful effects on women and Meta’s response to posts featuring AI-generated explicit imagery. The public comment period closes on April 30th.

Helle Thorning-Schmidt, Co-Chair of the Meta Oversight Board and former prime minister of Denmark, emphasized the need for Meta to protect all women globally in a fair manner, pointing out disparities in content moderation effectiveness across markets and languages. The board’s review will help determine if Meta is adequately safeguarding women from deepfake pornography and gender-based harassment online. By evaluating specific cases from different regions and addressing inconsistencies in policy enforcement, the board aims to ensure a more uniform and effective approach to content moderation across Meta’s platforms.

Meta’s Oversight Board plays a crucial role in holding the company accountable for its content moderation practices, making recommendations on specific cases and broader policy suggestions. The increased use of generative AI tools for creating deepfake pornography underscores the urgency of addressing this issue and ensuring that women are protected from online harassment. The board’s examination of Meta’s response to AI-generated explicit imagery involving public figures from different countries will provide valuable insights into the company’s efforts to combat deepfake pornography and promote a safer online environment for all users.
