BI 097 Omri Barak and David Sussillo: Dynamics and Structure
Podcast: Brain Inspired
Publisher: Paul Middlebrooks
Media Type: audio
Categories (via RSS): Education, Natural Sciences, Science, Technology
Publication Date: Feb 08, 2021
Episode Duration: 01:23:57

Omri, David and I discuss using recurrent neural network models (RNNs) to understand brains and brain function. Omri and David both use dynamical systems theory (DST) to describe how RNNs solve tasks, and to compare the dynamical structure/landscape/skeleton of RNNs with real neural population recordings. We talk about how their thoughts have evolved since their 2013 Opening the Black Box paper, which began these lines of research and thinking. Some of the other topics we discuss:

  • The idea of computation via dynamics, which sees computation as a process of evolving neural activity in a state space (see the code sketch after this list);
  • Whether DST offers a description of mental function (that is, something beyond brain function, closer to the psychological level);
  • The difference between classical approaches to modeling brains and the machine learning approach;
  • The concept of universality: that artificial RNNs and natural RNNs (brains) can adhere to similar dynamical structures despite differences in their architectures and implementations;
  • How learning is shaped by the dynamics in an ongoing and ever-changing manner, and how learning (a process) is distinct from optimization (a final trained state);
  • David was also a guest on episode 5, a more introductory conversation about dynamics, RNNs, and brains.
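
As a concrete illustration of computation via dynamics (the sketch referenced in the first bullet), here is a minimal, hypothetical Python/NumPy example in the spirit of the fixed-point analysis from Sussillo & Barak's Opening the Black Box, not their actual code: treat the RNN update h_{t+1} = F(h_t) as a dynamical system, numerically minimize the "speed" q(h) = 0.5 * ||F(h) - h||^2 to locate approximate fixed points, then linearize around them to read off the local dynamical structure. The weights here are random stand-ins for a trained network.

```python
# A minimal sketch (not the authors' code) of fixed-point analysis for an RNN,
# in the spirit of Sussillo & Barak (2013). Random weights stand in for a
# trained network; the method is the same either way.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                           # number of hidden units
W = rng.normal(0, 1.2 / np.sqrt(N), (N, N))      # recurrent weights (hypothetical)
b = rng.normal(0, 0.1, N)                        # bias

def F(h):
    """One step of the autonomous RNN dynamics (input held at zero)."""
    return np.tanh(W @ h + b)

def q(h):
    """Speed of the dynamics: exactly zero at a fixed point."""
    return 0.5 * np.sum((F(h) - h) ** 2)

def grad_q(h):
    """Analytic gradient of q: (J_F(h) - I)^T (F(h) - h)."""
    f = F(h)
    J = (1.0 - f ** 2)[:, None] * W              # Jacobian of tanh(W h + b)
    return (J - np.eye(N)).T @ (f - h)

def find_fixed_point(h0, lr=0.1, steps=5000, tol=1e-10):
    """Gradient descent on q, seeded from a state visited by the dynamics."""
    h = h0.copy()
    for _ in range(steps):
        if q(h) < tol:
            break
        h -= lr * grad_q(h)
    return h

# Seed the search from states the network itself visits.
h = rng.normal(0, 1, N)
for _ in range(50):                              # run the dynamics forward
    h = F(h)
h_star = find_fixed_point(h)
print(f"q(h*) = {q(h_star):.2e}")                # ~0 means an (approximate) fixed point

# Linearizing around h* reveals the local piece of the "dynamical skeleton":
f = F(h_star)
J = (1.0 - f ** 2)[:, None] * W
eigvals = np.linalg.eigvals(J)                   # discrete time: stable if all |eig| < 1
print("stable fixed point" if np.all(np.abs(eigvals) < 1) else "saddle/unstable")
```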

Timestamps:
0:00 - Intro
5:41 - Best scientific moment
9:37 - Why do you do what you do?
13:21 - Computation via dynamics
19:12 - Evolution of thinking about RNNs and brains
26:22 - RNNs vs. minds
31:43 - Classical computational modeling vs. machine learning modeling approach
35:46 - What are models good for?
43:08 - Ecological task validity with respect to using RNNs as models
46:27 - Optimization vs. learning
49:11 - Universality
1:00:47 - Solutions dictated by tasks
1:04:51 - Multiple solutions to the same task
1:11:43 - Direct fit (Uri Hasson)
1:19:09 - Thinking about the bigger picture

Links:
  • Barak Lab
  • Twitter: @SussilloDavid

The papers we discuss or mention:
  • Sussillo, D. & Barak, O. (2013). Opening the Black Box: Low-dimensional dynamics in high-dimensional recurrent neural networks.
  • Computation Through Neural Population Dynamics.
  • Implementing inductive bias for different navigation tasks through diverse RNN attractors.
  • Dynamics of random recurrent networks with correlated low-rank structure.
  • Quality of internal representation shapes learning performance in feedback neural networks.
  • Feigenbaum's universality constant, original paper: Feigenbaum, M. J. (1976). "Universality in complex discrete dynamics," Los Alamos Theoretical Division Annual Report 1975-1976 (see the sketch below).

Talks:
  • Universality and individuality in neural dynamics across large populations of recurrent networks.
  • World Wide Theoretical Neuroscience Seminar: Omri Barak, January 6, 2021.
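
For the Feigenbaum reference above, here is a short, self-contained Python sketch (an illustration added here, not material from the episode) of the universality it describes: the parameter gaps between successive period-doublings of the logistic map x_{n+1} = r * x * (1 - x) shrink geometrically, and the ratio of successive gaps converges to Feigenbaum's constant delta ~= 4.6692 regardless of the map's details. The code locates the superstable parameter values R_n (where x = 1/2 lies on the period-2^n orbit) and prints the converging ratios.

```python
# Illustrative sketch: estimating Feigenbaum's constant from the logistic map.
# The superstable parameters R_n satisfy f^(2^n)(1/2; R_n) = 1/2, and the
# ratios of successive gaps (R_{n-1} - R_{n-2}) / (R_n - R_{n-1}) converge
# to delta ~= 4.6692, the universality Feigenbaum discovered.

def iterate(r, x, n):
    """Apply the logistic map n times starting from x."""
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

def superstable_r(period, r_guess):
    """Solve iterate(r, 0.5, period) = 0.5 for r via the secant method."""
    g = lambda r: iterate(r, 0.5, period) - 0.5
    r0, r1 = r_guess, r_guess + 1e-4
    for _ in range(100):
        g0, g1 = g(r0), g(r1)
        if g1 == g0:                       # flat step: stop before dividing by zero
            break
        r0, r1 = r1, r1 - g1 * (r1 - r0) / (g1 - g0)
        if abs(r1 - r0) < 1e-13:
            break
    return r1

# R_0 = 2 (superstable fixed point); extrapolate each next guess with delta.
R = [2.0, superstable_r(2, 3.2)]
for n in range(2, 10):
    guess = R[-1] + (R[-1] - R[-2]) / 4.669   # predict the next superstable value
    R.append(superstable_r(2 ** n, guess))

for n in range(2, len(R)):
    delta = (R[n - 1] - R[n - 2]) / (R[n] - R[n - 1])
    print(f"n = {n}: delta ~= {delta:.6f}")   # converges toward 4.669201...
```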
