Deepfake videos are spreading rapidly, fueled by sophisticated open-source AI models. MIT researchers reveal that most of these videos are non-consensual porn targeting celebrities like Taylor Swift. But now even high school and middle school students, predominantly girls, are being targeted. UCLA Professor John Villasenor joins The Excerpt to parse through the legislative and technological efforts to curb this surge of illicit content. We discuss the challenges of regulating AI-generated images and the importance of international cooperation, and offer practical advice for parents to protect their children from cyber sexual violence.
Episode Transcript is available here
Also available at art19.com/shows/5-Things
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.