
Westfield High School in New Jersey in May 2020. Photo: Rich Graessle/Getty Images
A phone, a few photos and artificial intelligence were enough to shatter the privacy of several teens at a New Jersey high school, who learned that AI-generated nude images of them were circulating in group chats.
Why it matters: The incident is a stark example of the threats that come with unregulated, expanding access to artificial intelligence, experts told Axios.
Driving the news: Some teen girls at Westfield High School in New Jersey learned last month that fake nude images of them were shared among other students, the Wall Street Journal reported.
- Concerns were raised to administrators on Oct. 20, but the photos were shared over the summer, according to an email sent to parents by Westfield High's principal, Mary Asfendis.
- The investigation and response involved the Westfield Police Department, in addition to the school's resource officer, counseling department and administration.
Zoom out: Westfield is just one example of an issue all school districts are grappling with as the omnipresence of technology, including artificial intelligence, affects students' lives, the district's superintendent, Raymond González, said in a statement.
- From January to September of this year, 54% more deepfake pornographic videos were uploaded to hosting sites than in all of 2022, Wired reported.
- Earlier this year, students in New York State used AI to generate a video of a middle school principal making a racist rant, the Washington Post reported.
Context: Creating deepfakes used to require hundreds of thousands of images of a person, said Hany Farid, a professor at the University of California, Berkeley who has researched digital forensics and image analysis.
- One photo is now all it takes to create a deepfake, Farid said, and tools to create these types of images have also become more widely accessible.
- Some of these tools can be used without entering any personal information, such as a phone number, he said.
- "What's frustrating is that the people developing these technologies either know or should have known that it's going to be used in this way and are not doing enough to prevent the abuse," he said.
- Anyone with even a single photo of themselves on the internet can be the target of a deepfake, Farid said.
State of play: Deepfake images have hurt people's reputations and have been used for extortion, and experts have been warning about the risks for years.
- "It's just going to get worse because the images are getting more realistic," Farid said.
What's next: Technology that determines whether an image is AI-generated already exists, said Edward Delp, a Purdue University professor of electrical and computer engineering.
- A next step is building deepfake detection into widely used social media platforms and websites so that such images can be blocked before they're uploaded.
- "The sad thing is that once it gets out there, there's going to be a lot of people who believe it's real," Delp said.
Yes, but: When similar filtering programs have been used in the past, non-sexual material such as breastfeeding guides has been mistakenly blocked, he said.
Worth noting: Offenses involving deepfakes are hard to prosecute, despite enacted and proposed legislation targeting the sharing of explicit faked images and videos.
- In Westfield, the families of four students targeted by the deepfakes filed police reports, and a New Jersey senator has asked county prosecutors to look into the incident, per the WSJ.
Go deeper: How AI is helping scammers target victims in "sextortion" schemes