
The administration of President Joe Biden is calling on the tech industry to shut down a market of abusive sexual images created using artificial intelligence technology. AI tools can easily generate realistic, sexually explicit images that are shared online, especially targeting women and girls. The White House is seeking voluntary cooperation from companies to develop measures to prevent the creation and spread of nonconsensual AI images, including those depicting children. The administration is urging AI developers, payment processors, and major companies like Apple and Google to take action to disrupt the monetization of image-based sexual abuse.

The release of sexually explicit deepfakes created with generative AI has become a significant concern for the Biden administration, particularly as it affects women and girls who are often the targets of such abusive images. Notable figures like Taylor Swift have been victims of nonconsensual AI imagery, prompting tech giants like Microsoft to enhance safeguards around their AI systems. Despite voluntary commitments from major technology companies to implement safeguards on new AI systems, there remains a need for comprehensive legislation to address the issue of AI-generated child abuse imagery and ensure public safety is not compromised.

A bipartisan group of U.S. senators is advocating for increased funding for artificial intelligence development and measures to ensure its safe application. Existing laws already criminalize the creation and possession of sexual images of children, even when the images are digitally altered, but additional legislation is needed to strengthen their enforcement. Federal prosecutors have recently charged an individual for using an AI tool to produce thousands of AI-generated images depicting minors engaged in sexual conduct. The lack of oversight over the technology that enables the creation of such images remains a pressing issue, as it allows them to proliferate across the internet.

The emergence of AI-generated deepfake nudes targeting students in schools highlights the urgent need for regulatory measures to address the misuse of AI technology. While companies have made voluntary commitments to safeguard the development of new AI systems, legislative action is essential to establish rules and regulations to protect against the proliferation of nonconsensual AI imagery. The White House Gender Policy Council emphasizes the importance of Congress taking action to address the underlying issues contributing to the spread of abusive sexual images created with AI technology.

The AI technology industry must collaborate with financial institutions, cloud computing providers, search engines, and mobile app stores to combat the dissemination of nonconsensual AI images, particularly those depicting explicit content involving minors. Companies should uphold terms of service that prohibit support for businesses that promote abusive imagery and commit to disrupting the monetization of image-based sexual abuse. Cloud service providers and mobile app stores play a crucial role in restricting access to web services and applications designed to create or alter sexual images without consent. By working together, the private sector can help prevent the spread and exploitation of abusive AI-generated imagery.

© 2024 Globe Echo. All Rights Reserved.