AI Alignment & AGI Fire Alarm - Connor Leahy
Media Type | audio
Categories Via RSS | Technology
Publication Date | Nov 01, 2020
Episode Duration | 02:04:35

This week Dr. Tim Scarfe, Alex Stenlake and Yannic Kilcher speak with AGI and AI alignment specialist Connor Leahy, a machine learning engineer at Aleph Alpha and founder of EleutherAI.

Connor believes that AI alignment is philosophy with a deadline, and that we are on the precipice: the stakes are astronomical. AI is important, and it will go wrong by default. Connor thinks that the singularity or intelligence explosion is near. He says that AGI is like climate change but worse: even harder problems, an even shorter deadline, and even worse consequences for the future. These problems are hard, and nobody knows what to do about them.

00:00:00 Introduction to AI alignment and AGI fire alarm 

00:15:16 Main Show Intro 

00:18:38 Different schools of thought on AI safety 

00:24:03 What is intelligence? 

00:25:48 AI Alignment 

00:27:39 Humans don't have a coherent utility function 

00:28:13 Newcomb's paradox and advanced decision problems 

00:34:01 Incentives and behavioural economics 

00:37:19 Prisoner's dilemma 

00:40:24 Ayn Rand and game theory in politics and business 

00:44:04 Instrumental convergence and orthogonality thesis 

00:46:14 Utility functions and the Stop button problem 

00:55:24 AI corrigibility - self alignment 

00:56:16 Decision theory and stability / wireheading / robust delegation 

00:59:30 Stop button problem 

01:00:40 Making the world a better place 

01:03:43 Is intelligence a search problem? 

01:04:39 Mesa optimisation / humans are misaligned AI 

01:06:04 Inner vs outer alignment / faulty reward functions 

01:07:31 Large corporations are intelligent and have no stop function 

01:10:21 Dutch booking / what is rationality / decision theory 

01:16:32 Understanding very powerful AIs 

01:18:03 Kolmogorov complexity 

01:19:52 GPT-3 - is it intelligent, are humans even intelligent? 

01:28:40 Scaling hypothesis 

01:29:30 Connor thought DL was dead in 2017 

01:37:54 Why is GPT-3 as intelligent as a human? 

01:44:43 Jeff Hawkins on intelligence as compression and the great lookup table 

01:50:28 How is AI ethics related to AI alignment? 

01:53:26 Interpretability 

01:56:27 Regulation 

01:57:54 Intelligence explosion 

Discord: https://discord.com/invite/vtRgjbM

EleutherAI: https://www.eleuther.ai

Twitter: https://twitter.com/npcollapse

LinkedIn: https://www.linkedin.com/in/connor-j-leahy/
