As large language models like ChatGPT play an increasingly important role in our society, there will no doubt be examples of them causing harm. Lawsuits have already been filed in cases where LLMs have made false statements about individuals, but what about run-of-the-mill negligence cases? What happens when an LLM provides faulty medical advice or causes extreme emotional distress?
A forthcoming symposium in the Journal of Free Speech Law tackles these questions, and Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, spoke with three of the symposium's contributors from the University of Arizona and the University of Florida: law professors Jane Bambauer and Derek Bambauer, and computer scientist Mihai Surdeanu. Jane's paper focuses on what it means for an LLM to breach its duty of care, and Derek and Mihai explore the conditions under which the output of LLMs may be shielded from liability by that all-important Internet statute, Section 230.
Support this show http://supporter.acast.com/lawfare.
Hosted on Acast. See acast.com/privacy for more information.