Support us! https://www.patreon.com/mlst
Hattie Zhou, a PhD student at Université de Montréal and Mila, has set out to understand and explain the performance of modern neural networks, believing this to be a key factor in building better, more trusted models. Having previously worked as a data scientist at Uber, a private equity analyst at Radar Capital, and an economic consultant at Cornerstone Research, she recently released a paper in collaboration with the Google Brain team, titled ‘Teaching Algorithmic Reasoning via In-context Learning’. In this work, Hattie identifies and examines four key stages for successfully teaching algorithmic reasoning to large language models (LLMs): formulating algorithms as skills, teaching multiple skills simultaneously, teaching how to combine skills, and teaching how to use skills as tools. Applying algorithmic prompting, Hattie achieves an order-of-magnitude error reduction on some tasks compared to the best available baselines. These results demonstrate the viability of algorithmic prompting as an approach for teaching algorithmic reasoning to LLMs, and may have implications for other tasks requiring similar reasoning capabilities.
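To make the core idea concrete, below is a minimal sketch (in Python) of what an algorithmic prompt for multi-digit addition might look like. The prompt wording, the `build_prompt` helper, and the example problems are illustrative assumptions, not the exact prompts used in the paper; the point is that every intermediate step, including the carries, is spelled out unambiguously so the model is shown the algorithm rather than just the answer.

```python
# Illustrative sketch of an "algorithmic prompt" for multi-digit addition.
# The wording and format below are hypothetical, not the paper's actual prompt;
# the key idea is to state every intermediate step (digit sums and carries)
# explicitly, leaving no part of the algorithm ambiguous.

ALGORITHMIC_PROMPT = """\
Problem: 128 + 367
Explanation:
The digits of 128 are [1, 2, 8] and the digits of 367 are [3, 6, 7].
Step 1: 8 + 7 = 15. Write down 5, carry 1.
Step 2: 2 + 6 + 1 (carry) = 9. Write down 9, carry 0.
Step 3: 1 + 3 + 0 (carry) = 4. Write down 4, carry 0.
Reading the written digits from last to first gives 495.
Answer: 128 + 367 = 495.

Problem: {question}
Explanation:
"""


def build_prompt(question: str) -> str:
    """Prepend the fully worked example to a new question (hypothetical helper)."""
    return ALGORITHMIC_PROMPT.format(question=question)


if __name__ == "__main__":
    # The resulting string would be sent to an LLM as its in-context prompt.
    print(build_prompt("4025 + 987"))
```

The contrast with standard few-shot or chain-of-thought prompting is that nothing is left implicit: each carry and each digit placement appears in the demonstration, which is what lets the model execute the same procedure on longer, unseen inputs.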
TOC
[00:00:00] Hattie Zhou
[00:19:49] Markus Rabe [Google Brain]
Hattie's Twitter - https://twitter.com/oh_that_hat
Website - http://hattiezhou.com/
Teaching Algorithmic Reasoning via In-context Learning [Hattie Zhou, Azade Nova, Hugo Larochelle, Aaron Courville, Behnam Neyshabur, and Hanie Sedghi]
https://arxiv.org/pdf/2211.09066.pdf
Markus Rabe [Google Brain]:
https://twitter.com/markusnrabe
https://research.google/people/106335/
https://www.linkedin.com/in/markusnrabe
Autoformalization with Large Language Models [Albert Jiang, Charles Edgar Staats, Christian Szegedy, Markus Rabe, Mateja Jamnik, Wenda Li, and Yuhuai Tony Wu]
https://research.google/pubs/pub51691/
Discord: https://discord.gg/aNPkGUQtc5