#034 Eray Özkural - AGI, Simulations & Safety
Media Type | audio
Categories Via RSS | Technology
Publication Date | Dec 20, 2020
Episode Duration | 02:39:09

Dr. Eray Özkural is an AGI researcher from Turkey and the founder of Celestial Intellect Cybernetics. Eray is extremely critical of Max Tegmark, Nick Bostrom, and MIRI founder Eliezer Yudkowsky and their views on AI safety. Eray thinks these views represent a form of neo-Luddism, that their proponents are capturing valuable research budgets with doomsday fear-mongering, and that they effectively want to prevent AI from being developed by those they don't agree with. Eray is also sceptical of the intelligence explosion hypothesis and the simulation argument.

Panel -- Dr. Keith Duggar, Dr. Tim Scarfe, Yannic Kilcher

00:00:00 Show teaser intro with added nuggets and commentary

00:48:39 Main Show Introduction 

00:53:14 Doomsaying to Control  

00:56:39 Fear the Basilisk!  

01:08:00 Intelligence Explosion Ethics  

01:09:45 Fear the Autonomous Drone! ... or spam

01:11:25 Infinity Point Hypothesis  

01:15:26 Meat Level Intelligence 

01:21:25 Defining Intelligence ... Yet Again  

01:27:34 We'll make brains and then shoot them 

01:31:00 The Universe likes deep learning 

01:33:16 NNs are glorified hash tables 

01:38:44 Radical behaviorists  

01:41:29 Omega Architecture, possible AGI?  

01:53:33 Simulation hypothesis 

02:09:44 No one cometh unto Simulation, but by Jesus Christ  

02:16:47 Agendas, Motivations, and Mind Projections  

02:23:38 A computable Universe of Bulk Automata 

02:30:31 Self-Organized Post-Show Coda 

02:31:29 Investigating Intelligent Agency is Science 

02:36:56 Goodbye and cheers!  

https://www.youtube.com/watch?v=pZsHZDA9TJU
