
Child Sexual Abuse Material – The Legal Challenges of AI-Generated CSAM

The rapid advancement of artificial intelligence (AI) technology has brought enormous potential to better our lives; however, it also presents unique challenges and dangers, particularly in the realm of combating child sexual abuse material (CSAM). In South Africa, as in many other countries, the emergence of AI-generated CSAM has raised complex legal questions that demand careful consideration.

What is AI-generated CSAM?

AI image generators are capable of producing compelling art, highly realistic photographs, and outlandish designs, and they enable a whole new kind of creativity that promises to change art forever. Most of us have used an AI image generator at some point, so we know they can create very convincing fakes. These systems are trained on huge volumes of existing images, often scraped from the web with, and sometimes without, permission, and they allow new images to be created from simple text prompts.

Another term often used when referring to AI-generated images is “deepfake”. A deepfake is fake imagery, audio, or video created with artificial intelligence to depict something that never actually happened.

Deepfakes transform existing source content, swapping one person for another; they can also create original content in which someone is shown doing or saying something they never did or said. Deepfake videos are therefore created in one of two ways: either an original video of the target is manipulated so that the person appears to say and do things they never did, or the person’s face is superimposed onto a video of another individual, also known as a face swap.

This technology is, however, also being used by criminals to create thousands of new abuse images, both of children who have previously been abused and of children who have not. Offenders are sharing datasets of images in which children are sexually abused, which can then be used to customize AI models. Some have even started to sell monthly subscriptions to AI-generated child sexual abuse material (CSAM).

Typically, these offenders use openly available generative AI models, fine-tuning older versions of them to create illegal material involving children. The process involves feeding a model existing images depicting sexual abuse, or images of people’s faces, which then allows the AI to generate images of specific individuals.

The Internet Watch Foundation’s Report on AI-Generated CSAM

The Internet Watch Foundation (IWF) is an independent, non-profit charitable organization in the UK dedicated to assessing images and videos of children around the world suffering sexual abuse. In 2023, it investigated its first reports of child sexual abuse material (CSAM) generated by AI.

The initial investigation uncovered a world of text-to-image technology in which users type a description of whatever they want to see into online generators and the software produces the image. Users can then pick out their favorites, edit them, and direct the technology to output exactly what they want.

Some key findings of this report are as follows:

  • In a one-month period, 20,254 AI-generated images were found to have been posted to a single dark web CSAM forum.
  • 11,108 of these images were then selected for assessment by IWF analysts. (The remaining 9,146 images either did not contain children or contained children but were not criminal.)
  • 2,562 images were found to be criminal pseudo-photographs, and 416 were found to be illegal images.
  • Most of this AI-generated CSAM was realistic enough to be treated as ‘real’ CSAM; the most convincing AI-generated CSAM is visually indistinguishable from real CSAM.
  • The AI-generated images include the rape of babies and toddlers, famous preteen children being abused, as well as BDSM content featuring teenagers.
  • Requests for, discussions of, and actual examples of AI-generated CSAM were also found on the forum.

The stance in South Africa

AI-generated CSAM introduces new challenges, and the traditional understanding of criminal responsibility is not sufficient to address the nuances of content created without direct human involvement. Traditional legislation often relies on human intent and action, making it necessary to adapt existing laws to encompass acts carried out by AI algorithms. The question of whether the programmer, the owner of the AI system, or the AI itself should bear legal responsibility becomes crucial.

South Africa is not nearly equipped or coordinated enough to deal with this crime effectively. That said, South Africa does have the Films and Publications Amendment Act, 2019, which criminalizes the creation, possession, and distribution of explicit material involving minors, and the Film and Publication Board is mandated to curb the creation, distribution, and possession of child pornography.

The Film and Publication Board

The Film and Publication Board played a crucial role in gathering evidence for the state in its case against Gerhard Ackerman, who was found guilty on more than 700 charges, including child trafficking, sexual assault, and possession of child sexual abuse material (CSAM). On 14 August 2023, the High Court in Johannesburg sentenced Ackerman to 12 life sentences.

The Child Protection Officers at the Film and Publication Board are trained in safety and risk assessment in the field of child protection and are certified content analysts. However, the rapid pace of technological change has made the Board’s work particularly challenging: sexual predators find an anonymous home on the internet, where it is easy to build a persona that is very different from reality and can be used to exploit others.

The Film and Publication Board reported that in 2020/21 it received nine cases of child sexual abuse material (CSAM) from the public through its hotline or by direct e-mail, and a further 23 CSAM cases were referred to it by family and child protection services. Across these cases, a total of 733,810 images were examined, of which 27,174 (3.7%) were found to constitute CSAM. One case received from the Western Cape contained 417 DVDs with video footage of suspected child pornography.

Detecting child sexual abuse material (CSAM) is not easy. Several tools can automatically detect this type of content, but they rely on material that has already been found and marked as CSAM, typically by matching the digital fingerprints (hashes) of known images, and human moderators must fill the gaps these tools cannot cover, particularly for newly created or AI-generated imagery.
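To illustrate the general idea behind known-content matching, and not any specific vendor’s or hotline’s tool, the sketch below checks files against a list of hashes that analysts have already reviewed and flagged. The directory name and hash value are hypothetical placeholders, and the exact-match approach shown is a deliberate simplification of the perceptual-hashing systems used in practice.

```python
import hashlib
from pathlib import Path

# Hypothetical set of hashes for files that human analysts have already
# reviewed and flagged (placeholder value, not real data).
KNOWN_FLAGGED_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_known_flagged(path: Path) -> bool:
    """Return True if the file's hash matches a previously flagged item."""
    return sha256_of_file(path) in KNOWN_FLAGGED_HASHES


if __name__ == "__main__":
    # "uploads" is a hypothetical folder of incoming content to screen.
    for candidate in Path("uploads").glob("*"):
        if candidate.is_file() and is_known_flagged(candidate):
            print(f"{candidate} matches a known flagged item; route to a human analyst")
```

The limitation is exactly the one described above: an exact hash only catches identical copies of material that has already been identified, so altered or newly generated images fall outside the list and still require human assessment.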

Conclusion

“Look into the eyes of a child who has been sexually abused and you’ll see pain, a pain that endures long after the bruises have healed. This pain is compounded by child molesters who create images of the sexual abuse and share them with other child molesters. They trade them in chat rooms and post them on thousands of websites. These people are making money from the pain of children” (Royal Newfoundland Constabulary Association, 2007).

The Film and Publication Board has a Hotline through which members of the public can report any suspected child sexual abuse material they come across, and which also provides telephonic psychosocial support. The Board is a member of INHOPE (the International Association of Internet Hotlines), a network that allows hotlines across the world to report and act on CSAM.
