Rethinking the lifecycle of AI when it comes to deepfakes and kids
Publisher | Marketplace
Media Type | audio
Categories Via RSS | Business, News
Publication Date | May 06, 2024
Episode Duration | 00:09:36

The following content may be disturbing to some listeners.

For years, child sexual abuse material was mostly distributed by mail. Authorities used investigative techniques to stem its spread. That got a lot harder when the internet came along. And AI has supercharged the problem.

“Those 750,000 predators that are online at any given time looking to connect with minor[s] … they just need to find a picture of a child and use the AI to generate child sexual abuse materials and superimpose these faces on something that is inappropriate,” says child safety advocate and TikToker Tiana Sharifi.

The nonprofit Thorn has created new design principles aimed at fighting child sexual abuse. Rebecca Portnoff, the organization’s vice president of data science, says tech companies need to develop better technology to detect AI-generated images and commit not to use this material to train AI models.
