Risks associated with the rapid development and deployment of artificial intelligence are getting the attention of lawmakers. But one issue that may not be getting adequate attention from policymakers or from the AI research and cybersecurity communities is the vulnerability of many AI-based systems to adversarial attack. A new Stanford and Georgetown report, “Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and Legal Implications,” offers a stark reminder that security risks for AI-based systems are real and recommends actions that developers and policymakers can take to address them.
Lawfare Senior Editor Stephanie Pell sat down with two of the report’s authors, Jim Dempsey, Senior Policy Advisor for the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center, and Jonathan Spring, Cybersecurity Specialist at the Cybersecurity and Infrastructure Security Agency (CISA). They talked about how AI-based systems are vulnerable to attack, the similarities and differences between vulnerabilities in AI-based systems and traditional software vulnerabilities, and how some of the challenges of AI security may be as much social as they are technological.
Support this show http://supporter.acast.com/lawfare.
Hosted on Acast. See acast.com/privacy for more information.