Sexual predators are exploiting children with the help of powerful new tools, including AI image generators. In just one month, users of a single dark web forum uploaded and distributed nearly three thousand AI-generated images depicting child sexual abuse, according to a recent report by the United Kingdom’s Internet Watch Foundation.
Unfortunately, current laws on child sexual abuse are outdated: they give insufficient consideration to the distinct risks posed by AI and other emerging technologies. Legislators must act swiftly to put legal protections in place.
Incredibly, reports to the national CyberTipline, which tracks suspected online child exploitation, rose from 21 million in 2020 to 32 million in 2022. That already unsettling figure will almost certainly climb as image-generating AI platforms proliferate. These platforms are “trained” on existing visual content, so photographs of actual exploitation, or the features of real children scraped from social media, can serve as source material for new images of abuse. The tens of millions of abusive images already circulating online give artificial intelligence a nearly inexhaustible supply of raw material from which to create even more harmful imagery.
Today, the most sophisticated AI-generated images can hardly be distinguished from unaltered photographs. Investigators have found new images of past victims, “de-aged” celebrities depicted as children in assault scenarios, and “nudified” images derived from otherwise benign photographs of clothed children. The scale of the problem grows daily. Text-to-image software can generate child abuse images with relative ease from a perpetrator’s written description of the desired content. Much of this technology can also be downloaded, allowing offenders to produce images offline, shielded from detection.
Creating images of child sexual abuse with artificial intelligence is not a victimless crime. Real children stand behind every AI image. Survivors of past abuse are re-victimized whenever their likeness is used to create new depictions. Studies show that the majority of people who possess or distribute child sexual abuse material also engage in physical abuse. Adults can also update an age-old tactic by using text-generating AI platforms such as ChatGPT to lure children more effectively. Predators have long used fictitious online personas to meet young people on social media or in gaming environments, gain their trust, and coax them into sending explicit images; they then “sextort” the victims for money, additional images, or physical acts.
Yet ChatGPT makes it far easier to impersonate a child or teenager convincingly by mimicking the way young people talk. By generating realistic messages with AI platforms, today’s criminals can draw young people into online exchanges with someone they believe to be a peer. Worryingly, many contemporary AI tools can rapidly “learn” and even coach users on the most effective grooming techniques.
President Biden recently issued an executive order to manage the risks of artificial intelligence, including protecting Americans’ privacy and personal information. But legislation is also needed to combat AI-assisted online child exploitation.
To begin with, the federal legal definition of child sexual abuse material must be updated to include AI-generated depictions. The law currently requires prosecutors to establish harm to a real child, a requirement that modern technology has rendered obsolete. A defense team could plausibly argue that AI-generated child sexual abuse content does not depict an actual child and is therefore harmless, even though such content is routinely derived from source material that victimizes real children.
Next, technology companies must be required to consistently monitor and report exploitative content. While some companies choose to actively search for such images, they are not obligated to do so. Twitter, Facebook, and Google together accounted for 98% of all CyberTips in 2020 and 2021.
Many state child sexual abuse laws designate “mandatory reporters,” including medical personnel and educators, who are legally obligated to report any suspicion of abuse. In an era when so much of our lives unfolds online, employees of social media and other technology companies should bear comparable legally mandated reporting obligations.
Finally, we must reconsider our approach to end-to-end encryption, which restricts access to the contents of a message or file to the sender and recipient alone. End-to-end encryption has legitimate uses, in banking and medical records for instance, but it can also facilitate the storage and exchange of child abuse images. Consider that Apple, which supports end-to-end encryption for iMessage and iCloud, contributed only 160 of the 29 million tips the CyberTipline received in 2021, a number that illustrates just how easily perpetrators can remain undetected.
Even when law enforcement has a warrant to access a perpetrator’s files, a technology company that uses end-to-end encryption may claim it cannot reach those files and therefore cannot help. Surely an industry founded on innovation can devise solutions that put the protection of our children first. AI technology and social media evolve daily. If legislators act now, we can avert extensive harm to children.