Last week, we were honored to have Professor Geoff Hinton join the show for a wide-ranging discussion inspired by insights gleaned from Hinton’s journey in academia, as well as his past 10 years with Google Brain. The episode covers how existing neural networks and backpropagation models operate differently from how the brain actually works; the AlexNet/ImageNet breakthrough moment; the purpose of sleep; and why it’s better to grow our computers than manufacture them.
As you might recall, we also gave our audience an opportunity to contribute questions for Geoff via Twitter. We received so many amazing questions that we had to split our time with Geoff into two parts! In this episode, we’ll discuss some of these questions with Geoff.
Tune in to get Geoff’s answers to the following questions AND MORE:
Are you concerned about AI becoming too successful?
What is the connection between mania and genius?
What childhood experiences shaped you the most?
What is next in AI?
What should PhD students focus on?
How conscious do you think today's neural nets are?
How important is embodiment for intelligence?
How does the brain work?
SUBSCRIBE TO THE ROBOT BRAINS PODCAST TODAY | Visit therobotbrains.ai and follow us on YouTube at TheRobotBrainsPodcast, Twitter @therobotbrains, and Instagram @therobotbrains.