BI 095 Chris Summerfield and Sam Gershman: Neuro for AI?
Podcast | Brain Inspired
Publisher | Paul Middlebrooks
Media Type | audio
Categories Via RSS | Education, Natural Sciences, Science, Technology
Publication Date | Jan 19, 2021
Episode Duration | 01:25:28

It's generally agreed that machine learning and AI provide neuroscience with tools for analysis and theoretical principles to test in brains, but there is less agreement about what neuroscience can provide AI. Should computer scientists and engineers care about how brains compute, or would that just slow them down? Chris, Sam, and I discuss how neuroscience might contribute to AI moving forward, considering both the past and the present. The discussion also leads into related topics, like the role of prediction versus understanding, AGI, explainable AI, value alignment, the fundamental conundrum that humans specify the ultimate values of the tasks AI will solve, and more. Plus, a question from previous guest Andrew Saxe. Also, check out Sam's previous appearance on the podcast.

0:00 - Intro
5:00 - Good ol' days
13:50 - AI for neuro, neuro for AI
24:25 - Intellectual diversity in AI
28:40 - Role of philosophy
30:20 - Operationalization and benchmarks
36:07 - Prediction vs. understanding
42:48 - Role of humans in the loop
46:20 - Value alignment
51:08 - Andrew Saxe question
53:16 - Explainable AI
58:55 - Generalization
1:01:09 - What has AI revealed about us?
1:09:38 - Neuro for AI
1:20:30 - Concluding remarks

Chris's lab: Human Information Processing Lab
Sam's lab: Computational Cognitive Neuroscience Lab
Twitter: @gershbrain; @summerfieldlab

Papers we discuss, mention, or that are related:
If deep learning is the answer, then what is the question?
Neuroscience-Inspired Artificial Intelligence
Building Machines that Learn and Think Like People
