With the use of malicious AI on the rise, it's hard to believe anything you read, hear, or see these days. In this episode of the AI Today podcast, hosts Kathleen Walch and Ron Schmelzer define the terms Malicious AI, Adversarial Attack, and DeepFake; explain how these terms relate to AI; and discuss why it's important to know about them.
Malicious AI
Malicious AI is the intentional use of AI for criminal, unethical, dangerous, or otherwise harmful purposes. Unfortunately, real-world cases abound, and their number is only increasing. This episode goes over the terms and provides real-world examples.
Adversarial Attacks and DeepFakes
Related to malicious AI is the idea of Adversarial Attacks. An adversarial attack targets machine learning systems with deliberately crafted inputs that "trick" the ML model into predicting that the data, such as an image, is something other than what it actually is. A DeepFake is the use of Generative Adversarial Networks (GANs) and other sophisticated deep learning neural networks to generate content designed to deceive the viewer. DeepFakes are especially used to manipulate images and videos so that a person appears in them who was never actually there.
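To make the adversarial-attack idea concrete, here is a minimal sketch (not from the episode) of the well-known Fast Gradient Sign Method applied to a toy logistic-regression model in NumPy. The weights, input, and epsilon value are all hypothetical illustrations: the attack nudges the input in the direction that increases the model's loss, flipping the prediction while changing each feature only slightly.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, y, eps):
    """Fast Gradient Sign Method against a toy logistic model.

    For log-loss on p = sigmoid(w @ x), the gradient of the loss
    with respect to the input x is (p - y) * w. Moving x a small
    step eps in the sign of that gradient increases the loss.
    """
    p = sigmoid(w @ x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Hypothetical fixed model weights and a clean input the model classifies as 1.
w = np.ones(4)
x = np.full(4, 0.1)
print(sigmoid(w @ x) > 0.5)   # clean input: predicted class 1

# A small perturbation per feature is enough to flip the prediction.
x_adv = fgsm_perturb(x, w, y=1.0, eps=0.2)
print(sigmoid(w @ x_adv) > 0.5)   # adversarial input: predicted class 0
```

In real attacks the same principle is applied to deep image classifiers, where the per-pixel perturbation is small enough to be invisible to a human viewer.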
Join Kathleen and Ron in this enlightening episode as they demystify these critical terms, elucidate their relevance in the AI landscape, and discuss the imperative for vigilance, trustworthy and ethical considerations, and proactive security measures in AI development.