Thomas Parr and his collaborators wrote a book titled "Active Inference: The Free Energy Principle in Mind, Brain and Behavior" which introduces Active Inference from both a high-level conceptual perspective and a low-level mechanistic, mathematical perspective.
Active inference, developed by the legendary neuroscientist Prof. Karl Friston, is a unifying mathematical framework that frames living systems as agents which minimize surprise and free energy in order to resist entropy and persist over time. It unifies perspectives from physics, biology, statistics, and psychology, and allows us to explore deep questions about agency, biology, causality, modelling, and consciousness.
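As a rough sketch of the mathematics (the standard variational formulation, stated from general knowledge rather than quoted from the book): for observations o, hidden states s, a generative model p(o, s) and approximate posterior beliefs q(s), the variational free energy is

```latex
F[q] \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
     \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s \mid o)\,\right]}_{\ge\, 0} \;-\; \ln p(o)
```

Because the KL term is non-negative, F upper-bounds surprise (-ln p(o)): minimising free energy simultaneously improves the agent's beliefs q(s) and keeps its observations unsurprising.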
Buy Active Inference: The Free Energy Principle in Mind, Brain, and Behavior
YT version: https://youtu.be/lbb-Si5wa_o
Please support us on Patreon to get access to the private Discord server, bi-weekly calls, early access and ad-free listening.
Chapters should be embedded in the mp3; let me know if there are any issues.
Connor is the CEO of Conjecture and one of the most famous names in the AI alignment movement. This is the "behind the scenes footage" and bonus Patreon interviews from the day of the Beff Jezos debate, including an interview with Daniel Clothiaux. It's a great insight into Connor's philosophy. At the end there is an unreleased additional interview with Beff.
Support MLST:
Please support us on Patreon. We are entirely funded from Patreon donations right now. Patreon supporters get private Discord access, biweekly calls, very early access + exclusive content and lots more.
Donate: https://www.paypal.com/donate/?hosted_button_id=K2TYRVPBGXVNA
If you would like to sponsor us so we can tell your story, reach out at mlstreettalk at gmail
Topics:
Externalized cognition and the role of society and culture in human intelligence
The potential for AI systems to develop agency and autonomy
The future of AGI as a complex mixture of various components
The concept of agency and its relationship to power
The importance of coherence in AI systems
The balance between coherence and variance in exploring potential upsides
The role of dynamic, competent, and incorruptible institutions in handling risks and developing technology
Concerns about AI widening the gap between the haves and have-nots
The concept of equal access to opportunity and maintaining dynamism in the system
Leahy's perspective on life as a process that "rides entropy"
The importance of distinguishing between epistemological, decision-theoretic, and aesthetic aspects of morality (including a reference to Hume's Guillotine)
The concept of continuous agency and the idea that the first AGI will be a messy admixture of various components
The potential for AI systems to become more physically embedded in the future
The challenges of aligning AI systems and the societal impacts of AI technologies like ChatGPT and Bing
The importance of humility in the face of complexity when considering the future of AI and its societal implications
Disclaimer: this video is not an endorsement of e/acc or AGI agential existential risk from us - the hosts of MLST consider both of these views to be quite extreme. We seek diverse views on the channel.
00:00:00 Intro
00:00:56 Connor's Philosophy
00:03:53 Office Skit
00:05:08 Connor on e/acc and Beff
00:07:28 Intro to Daniel's Philosophy
00:08:35 Connor on Entropy, Life, and Morality
00:19:10 Connor on London
00:20:21 Connor Office Interview
00:20:46 Friston Patreon Preview
00:21:48 Why Are We So Dumb?
00:23:52 The Voice of the People, the Voice of God / Populism
00:26:35 Mimetics
00:30:03 Governance
00:33:19 Agency
00:40:25 Daniel Interview - Externalised Cognition, Bing GPT, AGI
00:56:29 Beff + Connor Bonus Patreons Interview
Professor Chris Bishop is a Technical Fellow and Director at Microsoft Research AI4Science, in Cambridge. He is also Honorary Professor of Computer Science at the University of Edinburgh, and a Fellow of Darwin College, Cambridge. In 2004, he was elected Fellow of the Royal Academy of Engineering, in 2007 he was elected Fellow of the Royal Society of Edinburgh, and in 2017 he was elected Fellow of the Royal Society. Chris was a founding member of the UK AI Council, and in 2019 he was appointed to the Prime Minister’s Council for Science and Technology.
At Microsoft Research, Chris oversees a global portfolio of industrial research and development, with a strong focus on machine learning and the natural sciences.
Chris obtained a BA in Physics from Oxford, and a PhD in Theoretical Physics from the University of Edinburgh, with a thesis on quantum field theory.
Chris's contributions to the field of machine learning have been truly remarkable. He authored what is arguably the definitive textbook in the field, 'Pattern Recognition and Machine Learning' (PRML), which has served as an essential reference for countless students and researchers around the world; it was his second textbook, after his highly acclaimed Neural Networks for Pattern Recognition.
Recently, Chris has co-authored a new book with his son, Hugh, titled 'Deep Learning: Foundations and Concepts.' This book aims to provide a comprehensive understanding of the key ideas and techniques underpinning the rapidly evolving field of deep learning. It covers both the foundational concepts and the latest advances, making it an invaluable resource for newcomers and experienced practitioners alike.
Buy Chris' textbook here:
More about Prof. Chris Bishop:
https://en.wikipedia.org/wiki/Christopher_Bishop
https://www.microsoft.com/en-us/research/people/cmbishop/
Support MLST:
Please support us on Patreon. We are entirely funded from Patreon donations right now. Patreon supporters get private Discord access, biweekly calls, early access + exclusive content and lots more.
Donate: https://www.paypal.com/donate/?hosted_button_id=K2TYRVPBGXVNA
If you would like to sponsor us so we can tell your story, reach out at mlstreettalk at gmail
TOC:
00:00:00 - Intro to Chris
00:06:54 - Changing Landscape of AI
00:08:16 - Symbolism
00:09:32 - PRML
00:11:02 - Bayesian Approach
00:14:49 - Are NNs One Model or Many, Special vs General
00:20:04 - Can Language Models Be Creative
00:22:35 - Sparks of AGI
00:25:52 - Creativity Gap in LLMs
00:35:40 - New Deep Learning Book
00:39:01 - Favourite Chapters
00:44:11 - Probability Theory
00:45:42 - AI4Science
00:48:31 - Inductive Priors
00:58:52 - Drug Discovery
01:05:19 - Foundational Bias Models
01:07:46 - How Fundamental Is Our Physics Knowledge?
01:12:05 - Transformers
01:12:59 - Why Does Deep Learning Work?
01:16:59 - Inscrutability of NNs
01:18:01 - Example of Simulator
01:21:09 - Control
Dr. Philip Ball is a freelance science writer. He just wrote a book called "How Life Works", discussing how the science of biology has advanced in the last 20 years. We focus on the concept of agency in particular.
He trained as a chemist at the University of Oxford, and as a physicist at the University of Bristol. He worked previously at Nature for over 20 years, first as an editor for physical sciences and then as a consultant editor. His writings on science for the popular press have covered topical issues ranging from cosmology to the future of molecular biology.
YT: https://www.youtube.com/watch?v=n6nxUiqiz9I
Transcript link on YT description
Philip is the author of many popular books on science, including H2O: A Biography of Water, Bright Earth: The Invention of Colour, The Music Instinct and Curiosity: How Science Became Interested in Everything. His book Critical Mass won the 2005 Aventis Prize for Science Books, while Serving the Reich was shortlisted for the Royal Society Winton Science Book Prize in 2014.
This is one of Tim's personal favourite MLST shows, so we have designated it a special edition. Enjoy!
Buy Philip's book "How Life Works" here: https://amzn.to/3vSmNqp
Support MLST:
Please support us on Patreon. We are entirely funded from Patreon donations right now. Patreon supporters get private Discord access, biweekly calls, early access + exclusive content and lots more. https://patreon.com/mlst
Donate: https://www.paypal.com/donate/?hosted...
If you would like to sponsor us so we can tell your story, reach out at mlstreettalk at gmail
Dr. Paul Lessard and his collaborators have written a paper called "Categorical Deep Learning: An Algebraic Theory of Architectures". They aim to make neural networks more interpretable, composable and amenable to formal reasoning. The key is mathematical abstraction, as exemplified by category theory: using monads to develop a more principled, algebraic approach to structuring neural networks.
We also discussed the limitations of current neural network architectures in terms of their ability to generalise and reason in a human-like way; in particular, their inability to perform unbounded computation, as a Turing machine can. Paul expressed optimism that this is not a fundamental limitation, but an artefact of current architectures and training procedures.
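To make the "unbounded computation" point concrete, here is a toy Python contrast (our illustration, not code from the paper): a fixed-depth network performs the same bounded amount of work on every input, whereas a Turing-machine-style program can loop for an input-dependent, unbounded number of steps.

```python
# Toy contrast (illustration only, not code from the paper) between a
# fixed-depth network and unbounded, Turing-machine-style computation.

def fixed_depth_model(x, layers):
    """Apply a fixed, finite stack of layers: compute per input is
    bounded at architecture-definition time, before any input is seen."""
    for layer in layers:
        x = layer(x)
    return x

def collatz_steps(n):
    """A loop whose iteration count depends on the input and has no known
    fixed bound: the kind of open-ended computation a fixed-depth
    forward pass cannot express."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

layers = [lambda x: 2 * x, lambda x: x + 1]  # depth fixed at exactly 2
print(fixed_depth_model(3, layers))  # always 2 steps of work -> 7
print(collatz_steps(27))             # 111 steps; varies wildly with input
```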
We also covered the power of abstraction: it allows us to focus on the essential structure while ignoring extraneous details, which can make certain problems more tractable to reason about. Paul sees category theory as providing a powerful "Lego set" for productively thinking about many practical problems.
Towards the end, Paul gave an accessible introduction to some core concepts in category theory, such as categories, morphisms, functors and monads, and explained how these abstract constructs can capture essential patterns that arise across different domains of mathematics.
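For a flavour of what a monad looks like in code, here is a minimal Python sketch of a simplified Maybe monad (our illustrative example, not code from the paper): `unit` lifts a value into the monadic context, and `bind` chains possibly-failing functions while threading the failure case through automatically.

```python
# Minimal sketch of a monad in Python: a simplified Maybe monad where
# None models failure. Illustration only, not code from the paper.

def unit(x):
    """Lift a value into the Maybe context (here just x; None = failure)."""
    return x

def bind(mx, f):
    """Apply f to the wrapped value, short-circuiting if mx is a failure."""
    return None if mx is None else f(mx)

def safe_div(x):
    return None if x == 0 else 10.0 / x

def safe_sqrt(x):
    return None if x < 0 else x ** 0.5

# Possibly-failing steps compose without any explicit None checks:
print(bind(bind(unit(5), safe_div), safe_sqrt))  # ~1.414
print(bind(bind(unit(0), safe_div), safe_sqrt))  # None: failure propagates
```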
Paul is optimistic about the potential of category theory and related mathematical abstractions to put AI and neural networks on a more robust conceptual foundation to enable interpretability and reasoning. However, significant theoretical and engineering challenges remain in realising this vision.
Please support us on Patreon. We are entirely funded from Patreon donations right now.
If you would like to sponsor us so we can tell your story, reach out at mlstreettalk at gmail
Links:
Categorical Deep Learning: An Algebraic Theory of Architectures
Bruno Gavranović, Paul Lessard, Andrew Dudzik,
Tamara von Glehn, João G. M. Araújo, Petar Veličković
Paper: https://categoricaldeeplearning.com/
Symbolica:
Dr. Paul Lessard (Principal Scientist - Symbolica)
https://www.linkedin.com/in/paul-roy-lessard/
Interviewer: Dr. Tim Scarfe
TOC:
00:00:00 - Intro
00:05:07 - What is the category paper all about
00:07:19 - Composition
00:10:42 - Abstract Algebra
00:23:01 - DSLs for machine learning
00:24:10 - Inscrutability
00:29:04 - Limitations with current NNs
00:30:41 - Generative code / NNs don't recurse
00:34:34 - NNs are not Turing machines (special edition)
00:53:09 - Abstraction
00:55:11 - Category theory objects
00:58:06 - Cat theory vs number theory
00:59:43 - Data and Code are one and the same
01:08:05 - Syntax and semantics
01:14:32 - Category DL elevator pitch
01:17:05 - Abstraction again
01:20:25 - Lego set for the universe
01:23:04 - Reasoning
01:28:05 - Category theory 101
01:37:42 - Monads
01:45:59 - Where to learn more cat theory
Dr. Minqi Jiang and Dr. Marc Rigter explain an innovative new method to make the intelligence of agents more general-purpose: training them to learn many worlds before their usual goal-directed reinforcement learning. Their new paper is called "Reward-free curricula for training robust world models".
Paper: https://arxiv.org/pdf/2306.09205.pdf
https://twitter.com/MinqiJiang
https://twitter.com/MarcRigter
Interviewer: Dr. Tim Scarfe
Please support us on Patreon; Tim is now doing MLST full-time and taking a massive financial hit. If you love MLST and want this to continue, please show your support! In return you get access to shows very early, plus the private Discord and networking. https://patreon.com/mlst
We are also looking for show sponsors - please get in touch if interested: mlstreettalk at gmail.
MLST Discord: https://discord.gg/machine-learning-street-talk-mlst-937356144060530778
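To give a flavour of the core idea in Jiang and Rigter's paper - a curriculum over environments driven purely by world-model error, with no rewards anywhere - here is a toy, self-contained Python sketch. It is our illustration of the general recipe, not the paper's actual algorithm; the scalar "dynamics" and the error-greedy environment choice are stand-ins.

```python
# Toy sketch (ours, not the paper's algorithm) of a reward-free curriculum:
# train a world model by repeatedly sampling whichever environment it
# currently predicts worst, with no rewards involved anywhere.

class ToyWorldModel:
    """One scalar dynamics parameter per environment: next = w * state."""
    def __init__(self, env_ids):
        self.w = {e: 0.0 for e in env_ids}

    def update(self, env_id, true_w, lr=0.5):
        """Move the estimate toward the true dynamics; return the error
        observed before the update (a stand-in for prediction loss)."""
        error = abs(true_w - self.w[env_id])
        self.w[env_id] += lr * (true_w - self.w[env_id])
        return error

def reward_free_curriculum(true_dynamics, rounds=50):
    model = ToyWorldModel(true_dynamics)
    # Optimistic initialisation ensures every environment is visited once.
    errors = {e: float("inf") for e in true_dynamics}
    for _ in range(rounds):
        env = max(errors, key=errors.get)      # hardest environment first
        errors[env] = model.update(env, true_dynamics[env])
    return model, errors

# Three "environments" with different dynamics; rewards appear nowhere.
model, errors = reward_free_curriculum({"easy": 1.0, "medium": 2.5, "hard": 9.0})
print({e: round(err, 4) for e, err in errors.items()})  # all driven toward 0
```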
Nick Chater is Professor of Behavioural Science at Warwick Business School; he works on rationality and language using a range of theoretical and experimental approaches. We discuss his books The Mind is Flat and The Language Game.
Please support me on Patreon (this is now my main job!) - https://patreon.com/mlst - Access the private Discord, networking, and early access to content.
MLST Discord: https://discord.gg/machine-learning-street-talk-mlst-937356144060530778
https://twitter.com/MLStreetTalk
Buy The Language Game:
Buy The Mind is Flat:
YT version: https://youtu.be/5cBS6COzLN4
See what Sam Altman advised Kenneth when he left OpenAI! Professor Kenneth Stanley has just launched a brand new type of social network, which he calls a "Serendipity network". The idea is that you follow interests, NOT people. It's a social network without the popularity contest. We discuss the philosophy and technology behind the venture in great detail; the main ideas came from Kenneth's famous book "Why Greatness Cannot Be Planned".
YT version: https://www.youtube.com/watch?v=pWIrXN-yy8g
Chapters should be baked into the MP3 file now
MLST public Discord: https://discord.gg/machine-learning-street-talk-mlst-937356144060530778
Please support our work on Patreon - get access to interviews months early, private Patreon, networking, exclusive content and regular calls with Tim and Keith. https://patreon.com/mlst
Get Maven here: https://www.heymaven.com/
Kenneth: https://twitter.com/kenneth0stanley
https://www.kenstanley.net/home
Host - Tim Scarfe: https://www.linkedin.com/in/ecsquizor/
https://www.mlst.ai/
Original MLST show with Kenneth: https://www.youtube.com/watch?v=lhYGXYeMq_E
Tim explains the book more here:
Brandon Rohrer, who obtained his Ph.D. from MIT, is driven by understanding algorithms ALL the way down to their nuts and bolts, so he can make them accessible to everyone by explaining them the way HE himself would have wanted to learn them!
Please support us on Patreon for loads of exclusive content and private Discord:
https://patreon.com/mlst (public discord)
https://twitter.com/MLStreetTalk
Brandon Rohrer is a seasoned data science leader and educator with a rich background in creating robust, efficient machine learning algorithms and tools. With a Ph.D. in Mechanical Engineering from MIT, his expertise encompasses a broad spectrum of AI applications, from computer vision and natural language processing to reinforcement learning and robotics. Brandon's career has seen him in Principal-level roles at Microsoft and Facebook. An educator at heart, he also shares his knowledge through detailed tutorials, courses, and his forthcoming book, "How to Train Your Robot."
YT version: https://www.youtube.com/watch?v=4Ps7ahonRCY
Brandon's links:
https://www.youtube.com/channel/UCsBKTrp45lTfHa_p49I2AEQ
https://www.linkedin.com/in/brohrer/
How transformers work:
https://e2eml.school/transformers
Brandon's End-to-End Machine Learning school courses, posts, and tutorials
Free course:
https://end-to-end-machine-learning.teachable.com/p/complete-course-library-full-end-to-end-machine-learning-catalog
Blog: https://e2eml.school/blog.html
Ziptie: Learning Useful Features [Brandon Rohrer]
https://www.brandonrohrer.com/ziptie
TOC should be baked into the MP3 file now
00:00:00 - Intro to Brandon
00:00:36 - RLHF
00:01:09 - Limitations of transformers
00:07:23 - Agency - we are all GPTs
00:09:07 - BPE / representation bias
00:12:00 - LLM true believers
00:16:42 - Brandon's style of teaching
00:19:50 - ML vs real world = Robotics
00:29:59 - Reward shaping
00:37:08 - No true Scotsman - when do we accept capabilities as real
00:38:50 - Externalism
00:43:03 - Building flexible robots
00:45:37 - Is reward enough
00:54:30 - Optimization curse
00:58:15 - Collective intelligence
01:01:51 - Intelligence + creativity
01:13:35 - ChatGPT + Creativity
01:25:19 - Transformers Tutorial
The world's second-most famous AI doomer, Connor Leahy, sits down with Beff Jezos, founder of the e/acc movement, to debate technology, AI policy, and human values. As the two discuss technology, AI safety, civilization advancement, and the future of institutions, they clash over their opposing perspectives on how we steer humanity towards a more optimal path.
Watch behind the scenes, get early access and join the private Discord by supporting us on Patreon. We have some amazing content going up there with Max Bennett and Kenneth Stanley this week!
https://patreon.com/mlst
Public Discord: https://discord.gg/aNPkGUQtc5
https://twitter.com/MLStreetTalk
Post-interview with Beff and Connor: https://www.patreon.com/posts/97905213
Pre-interview with Connor and his colleague Dan Clothiaux: https://www.patreon.com/posts/connor-leahy-and-97631416
Leahy, known for his critical perspectives on AI and technology, challenges Jezos on a variety of assertions related to the accelerationist movement, market dynamics, and the need for regulation in the face of rapid technological advancements. Jezos, on the other hand, provides insights into the e/acc movement's core philosophies, emphasizing growth, adaptability, and the dangers of over-legislation and centralized control in current institutions.
Throughout the discussion, both speakers explore the concept of entropy, the role of competition in fostering innovation, and the balance needed to mediate order and chaos to ensure the prosperity and survival of civilization. They weigh up the risks and rewards of AI, the importance of maintaining a power equilibrium in society, and the significance of cultural and institutional dynamism.
Beff Jezos (Guillaume Verdon): https://twitter.com/BasedBeffJezos https://twitter.com/GillVerd Connor Leahy: https://twitter.com/npcollapse
YT: https://www.youtube.com/watch?v=0zxi0xSBOaQ
TOC:
00:00:00 - Intro
00:03:05 - Society library reference
00:03:35 - Debate starts
00:05:08 - Should any tech be banned?
00:20:39 - Leaded Gasoline
00:28:57 - False vacuum collapse method?
00:34:56 - What if there are dangerous aliens?
00:36:56 - Risk tolerances
00:39:26 - Optimizing for growth vs value
00:52:38 - Is vs ought
01:02:29 - AI discussion
01:07:38 - War / global competition
01:11:02 - Open source F16 designs
01:20:37 - Offense vs defense
01:28:49 - Morality / value
01:43:34 - What would Connor do
01:50:36 - Institutions/regulation
02:26:41 - Competition vs. Regulation Dilemma
02:32:50 - Existential Risks and Future Planning
02:41:46 - Conclusion and Reflection
Note from Tim: I baked the chapter metadata into the mp3 file this time - does that help the chapters show up in your app? Let me know. Also, I accidentally exported a few minutes of dead audio at the end of the file; sorry about that, just skip ahead when the episode finishes.