
Meta, the owner of Facebook, Instagram, WhatsApp, and Threads, has announced plans to expand its efforts to label content that has been manipulated or generated by artificial intelligence. The move comes in response to growing concerns about the risks posed by AI-generated content, and brings Meta's platforms in line with others, such as YouTube and TikTok, that are addressing the issue. The company intends to label videos, audio, and images as "Made with AI" when its systems detect AI involvement or when creators disclose it at upload, and it may apply a more prominent label to content that carries a high risk of deceiving the public.

The decision to label AI-generated content comes as the tech industry grapples with increasingly sophisticated AI tools that can produce lifelike video and audio. Concerns have been raised about the potential for AI-generated disinformation to mislead the public, as in a recent incident in which a political consultant used AI to create robocalls mimicking President Joe Biden's voice. With the 2024 presidential election on the horizon, experts anticipate further AI-driven disinformation campaigns. Meta's labeling effort reflects the industry's recognition that the issue needs to be addressed.

Meta is not the only social media company moving to identify and label AI-generated content. TikTok has announced a tool to help creators label manipulated content and prohibits "deepfakes" that mislead viewers about real events or people. YouTube, owned by Google, has begun requiring creators to disclose AI-manipulated videos in an effort to curb the spread of misleading content. Meta says it will enforce its rules on AI-generated content, pointing to survey results that show strong public support for labels on content depicting people saying things they did not say.

As part of its push for greater transparency around AI-generated content, Meta surveyed more than 23,000 respondents in 13 countries; 82% favored labels on AI-generated content that depicts people saying things they did not say. The company has also reiterated its commitment to removing content that violates its policies against voter interference, bullying, harassment, violence, and incitement. Amid growing concern about the spread of AI-generated disinformation, Meta's decision to label such content is a step toward protecting the public from misleading and deceptive material online.

Meta's move to expand labeling of AI-generated content is part of a broader industry response to the challenges posed by the proliferation of AI technology. By giving users more transparency and context about content created with AI, Meta aims to help people better assess the information they encounter online. As concerns about the impact of AI-generated disinformation continue to mount, labeling initiatives like Meta's play an important role in building a safer and better-informed online environment.
