#94 - ALAN CHAN - AI Alignment and Governance #NEURIPS
Media Type | audio
Categories Via RSS | Technology
Publication Date | Dec 26, 2022
Episode Duration | 00:13:34

Support us! https://www.patreon.com/mlst

Alan Chan is a PhD student at Mila, the Montreal Institute for Learning Algorithms, supervised by Nicolas Le Roux. Before joining Mila, Alan was a Master's student at the Alberta Machine Intelligence Institute and the University of Alberta, where he worked with Martha White. His research interests span value alignment and AI governance, and he is currently exploring the measurement of harms from language models and the incentives that agents have to impact the world.

Alan's research focuses on understanding and controlling the values expressed by machine learning models. His projects have examined the regulation of explainability in algorithmic systems, scoring rules for performative binary prediction, the effects of global exclusion in AI development, and the role of a graduate student in approaching ethical impacts in AI research. He has also investigated inverse policy evaluation for value-based sequential decision-making and the concept of "normal accidents" as applied to AI systems. His work is motivated by the need to align AI systems with human values and by his passion for both the scientific and governance sides of this field. Alan's energy and enthusiasm for his field are infectious.

This discussion was recorded at NeurIPS in quite a loud environment, so the audio quality could have been better.

References:

The Rationalist's Guide to the Galaxy: Superintelligent AI and the Geeks Who Are Trying to Save Humanity's Future [Tom Chivers]

https://www.amazon.co.uk/Does-Not-Hate-You-Superintelligence/dp/1474608795

The impossibility of intelligence explosion [Chollet]

https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec

Superintelligence: Paths, Dangers, Strategies [Bostrom]

https://www.amazon.co.uk/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111

A Theory of Universal Artificial Intelligence based on Algorithmic Complexity [Hutter]

https://arxiv.org/abs/cs/0004001

YT version: https://youtu.be/XBMnOsv9_pk 

MLST Discord: https://discord.gg/aNPkGUQtc5 
