BI 141 Carina Curto: From Structure to Dynamics
Podcast |
Brain Inspired
Publisher |
Paul Middlebrooks
Media Type |
audio
Categories Via RSS |
Education
Natural Sciences
Science
Technology
Publication Date |
Jul 12, 2022
Episode Duration |
01:31:40

Check out my free video series about what's missing in AI and Neuroscience

Support the show to get full episodes and join the Discord community.

Carina Curto is a professor in the Department of Mathematics at The Pennsylvania State University. She uses her background in mathematical physics and string theory to study networks of neurons. On this episode, we discuss the world of topology in neuroscience - the study of the geometric structures mapped out by active populations of neurons. We also discuss her work on "combinatorial threshold-linear networks" (CTLNs). Unlike the large deep learning models popular today as models of brain activity, the CTLNs Carina builds are relatively simple, abstracted graphical models. This simplicity is important to Carina, whose goal is to develop mathematically tractable neural network models. Carina has worked out how the structure of many CTLNs can be used to predict their allowable dynamics, how motifs of network structure can be embedded in larger networks while retaining their dynamical features, and more. The hope is that these elegant models can tell us more about the principles our messy brains employ to generate the robust and beautiful dynamics underlying our cognition.

0:00 - Intro
4:25 - Background: Physics and math to study brains
20:45 - Beautiful and ugly models
35:40 - Topology
43:14 - Topology in hippocampal navigation
56:04 - Topology vs. dynamical systems theory
59:10 - Combinatorial threshold-linear networks
1:25:26 - How much more math do we need to invent?

Carina's website.
The Mathematical Neuroscience Lab.
Related papers:
A major obstacle impeding progress in brain science is the lack of beautiful models.
What can topology tell us about the neural code?
Predicting neural network dynamics via graphical analysis
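The episode stays conceptual, but for a sense of what these models look like, here is a minimal illustrative sketch of a small threshold-linear network whose weights are read directly off a directed graph, in the spirit of the CTLN papers linked above. The parameter values and the 3-cycle example graph are assumptions chosen for illustration, not details taken from the episode.

```python
# Illustrative sketch (assumptions noted below), not code from the episode:
# a small combinatorial threshold-linear network (CTLN) whose weight matrix
# is determined entirely by a directed graph.
import numpy as np
from scipy.integrate import solve_ivp

def ctln_weights(adj, eps=0.25, delta=0.5):
    """Build a CTLN weight matrix from a directed graph.

    adj[i, j] = 1 means there is an edge j -> i.
    W[i, j] = -1 + eps if j -> i, -1 - delta otherwise, and 0 on the diagonal.
    (eps = 0.25, delta = 0.5 are illustrative parameter choices.)
    """
    W = np.where(adj == 1, -1.0 + eps, -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    return W

def ctln_rhs(t, x, W, theta=1.0):
    """Threshold-linear dynamics: dx/dt = -x + [W x + theta]_+."""
    return -x + np.maximum(W @ x + theta, 0.0)

# Hypothetical example graph: a directed 3-cycle (0 -> 1 -> 2 -> 0),
# a small motif whose structure already constrains the possible dynamics.
adj = np.array([[0, 0, 1],
                [1, 0, 0],
                [0, 1, 0]])
W = ctln_weights(adj)

x0 = np.array([0.1, 0.0, 0.0])      # initial firing rates
sol = solve_ivp(ctln_rhs, (0.0, 50.0), x0, args=(W,), max_step=0.05)
print(sol.y[:, -1])                 # rates at the end of the run
```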
