Randy and I discuss his LEABRA cognitive architecture, which aims to simulate the human brain, plus his current theory of how a loop between cortical regions and the thalamus could implement predictive learning and thus explain how we learn from so few examples. We also discuss what Randy thinks is the next big thing neuroscience can contribute to AI (thanks to a guest question from Anna Schapiro), and much more.
Timestamps:
0:00 - Intro
3:54 - Skip Intro
6:20 - Being in awe
18:57 - How current AI can inform neuro
21:56 - Anna Schapiro question - how current neuro can inform AI
29:20 - Learned vs. innate cognition
33:43 - LEABRA
38:33 - Developing Leabra
40:30 - Macroscale
42:33 - Thalamus as microscale
43:22 - Thalamocortical circuitry
47:25 - Deep predictive learning
56:18 - Deep predictive learning vs. backprop
1:01:56 - 10 Hz learning cycle
1:04:58 - Better theory vs. more data
1:08:59 - Leabra vs. Spaun
1:13:59 - Biological realism
1:21:54 - Bottom-up inspiration
1:27:26 - Biggest mistake in Leabra
1:32:14 - AI consciousness
1:34:45 - How would Randy begin again?