More AI-generated child sex abuse material posted online
The amount of AI-generated child sexual abuse material (CSAM) posted online is increasing, a report published Monday found.
The report, by the U.K.-based Internet Watch Foundation (IWF), highlights one of the darkest results of the proliferation of AI technology, which allows anyone with a computer and a little tech savvy to generate convincing deepfake videos.
Deepfakes typically refer to misleading digital media created with artificial intelligence tools, like AI models and applications that allow users to “face-swap” a target’s face with one in a different video. Online, there is a subculture and marketplace that revolves around the creation of pornographic deepfakes.
In a 30-day review this spring of a dark web forum used to share CSAM, the IWF found a total of 3,512 CSAM images and videos created with artificial intelligence, most of them realistic. The number of CSAM images found in the review was a 17% increase from the number of images found in a similar review conducted in fall 2023.
The review of content also found that a higher percentage of material posted on the dark web is now depicting more extreme or explicit sex acts compared to six months ago.
“Realism is improving. Severity is improving. It’s a trend that we wouldn’t want to see,” said Dan Sexton, the IWF’s chief technology officer.
Entirely synthetic videos still look unrealistic, Sexton said, and are not yet popular on abusers’ dark web forums, though that technology is still rapidly improving.
“We’ve yet to see realistic-looking, fully synthetic video of child sexual abuse,” Sexton said. “If the technology improves elsewhere, in the mainstream, and that flows through to illegal use, the danger is we’re going to see fully synthetic content.”
It’s currently much more common for predators to take existing CSAM depicting real people and use it to train low-rank adaptation models (LoRAs), specialised AI algorithms that make custom deepfakes from even a few still images or a short snippet of video.
The current reliance on old footage in creating new CSAM imagery can cause persistent harm to survivors, as it means footage of their abuse is repeatedly given fresh life.
“Some of these are victims that were abused decades ago. They’re grown-up survivors now,” Sexton said of the source material.
The rise in the deepfaked abuse material highlights the struggle regulators, tech companies and law enforcement face in preventing harm.
Last summer, seven of the largest AI companies in the U.S. signed a public pledge to abide by a handful of ethical and safety guidelines. But they have no control over the numerous smaller AI programmes that have littered the internet, often free to use.
The post More AI-generated child sex abuse material posted online appeared first on Citinewsroom - Comprehensive News in Ghana.