Deepfakes, sexually explicit images fabricated with artificial intelligence, have become a frightening reality for students and have led to real incidents of harassment, a troubling trend affecting high schools across the United States. In October of last year in Texas, compromising photographs of Ellis, then 14 years old, began circulating on social media, and she woke up to a deluge of calls and texts.
Ellis was a victim of unsettling developments in deepfake technology, even though she never took the images herself. The deepfakes were generated with artificial intelligence: photos lifted from Instagram were used to overlay the faces of unsuspecting victims onto explicit, digitally created bodies. Peers then circulated the strikingly lifelike composites on platforms such as Snapchat, causing real distress and fear among the targeted children.
“As AI has grown in popularity, so has deepfake pornography,” writes Ellis’ mother, Anna Berry McAdams, who was struck by how lifelike the images looked. With no federal law to stop this behavior, school administrators are struggling to handle a growing problem that carries serious consequences for the victims.
A comparable controversy surfaced at the end of that month at a New Jersey high school, underscoring how frequent deepfake-related harassment is becoming. Dorota Mani, the mother of a 14-year-old victim in that case, says such incidents are likely to grow more common and stresses how difficult pornographic deepfakes are to detect, since they can circulate without the victims’ knowledge.
Experts contend that because the law has not kept pace with rapid advances in the technology, people are increasingly vulnerable to malicious uses of deepfake pornography. Even those who have posted only innocuous images online, such as LinkedIn headshots, risk becoming unwitting victims. Hany Farid, a professor of computer science at the University of California, Berkeley, stresses the urgent need for legislation to prevent the production and dissemination of non-consensual intimate images.
President Joe Biden’s recent executive order on AI touches on the problem, but deepfake pornography is not yet the subject of any national regulation, and only a handful of states have passed legislation to curb this alarming trend.
Renee Cummings, an AI ethicist, questions whether existing laws even apply to this kind of digital exploitation, pointing to the legal ambiguities deepfakes create. Because anyone with a smartphone and minimal resources can produce such images, the potential for widespread harm is enormous, particularly for the young women and girls who are the primary targets.
Beyond the immediate emotional distress, deepfake victimization can have long-term effects including anxiety, depression, and post-traumatic stress disorder. Ellis is still struggling with anxiety and has asked to transfer schools, even though the student responsible for the images has been temporarily suspended.
As these cases unfold, the need for comprehensive laws to combat deepfake pornography becomes urgently clear. In the absence of explicit rules, victims and their families are left in a precarious position, unable to predict when, or whether, the doctored images might resurface and affect them for years to come.