
A Wisconsin man has been charged with sharing AI-generated child sexual abuse material with a minor in one of the first criminal cases involving AI-generated CSAM. The man, Steven Anderegg, allegedly used the Stable Diffusion 1.5 AI-powered image generator to create explicit images of young children and shared them with a 15-year-old he met on Instagram. The images were flagged by Instagram parent company Meta to authorities, leading to Anderegg’s arrest and charges of exposing a child to harmful material and sexual contact with a child under age 13. He pleaded not guilty and was released on bail.

Stability AI, the company now managing Stable Diffusion, stated that the images were likely created with version 1.5 of the software, which was developed by AI startup Runway ML in October 2022. Since then, Stability AI says it has taken proactive steps to prevent the production of harmful content, including investing in safeguards that make the platform harder for bad actors to misuse. However, older versions of Stable Diffusion still circulate freely online, leaving them available to individuals intent on creating illegal content.

Tech companies are required by federal law to report any instances of CSAM, whether real or AI-generated, to the National Center for Missing and Exploited Children’s CyberTipline for review and referral to law enforcement. However, concerns have been raised about how effectively generative AI platforms can detect and report such content. Stability AI and other companies in the field say they have built-in protections designed to block harmful or illegal uses, including the creation of non-consensual pornographic content.

Stability AI’s release of Stable Diffusion 2.0 in November 2022 drew some backlash over its tighter restrictions on explicit content. The company implemented these changes after previous versions were used to create images of child abuse. Researchers at Stanford University found that Stable Diffusion 1.5 was trained on illegal child sexual abuse material, underscoring how difficult it is to address the repercussions of AI-generated CSAM. The company has not yet registered with NCMEC to report instances of CSAM on its platform, but it has expressed a commitment to doing so in the future.

This case in Wisconsin is part of a concerning trend in which AI tools are misused to create illegal sexual abuse material. In a recent Congressional hearing, NCMEC reported 4,700 cases of AI-generated CSAM in 2023, raising concerns that the numbers will continue to climb. Prosecuting purely AI-generated CSAM involves legal complexities, but experts believe cases like the one in Wisconsin could give prosecutors a roadmap for charging suspects with other crimes tied to the creation and distribution of harmful material. AI-generated CSAM poses new challenges for law enforcement and technology companies working to combat online exploitation.
