#73 - YASAMAN RAZEGHI & Prof. SAMEER SINGH - NLP benchmarks
Media Type | audio
Categories Via RSS | Technology
Publication Date | Apr 07, 2022
Episode Duration | 00:55:53

Patreon: https://www.patreon.com/mlst

Discord: https://discord.gg/ESrGqhf5CB

YT version: https://youtu.be/RzGaI7vXrkk

This week we speak with Yasaman Razeghi and Prof. Sameer Singh from UC Irvine. Yasaman recently published a paper called Impact of Pretraining Term Frequencies on Few-Shot Reasoning, in which she demonstrated comprehensively that large language models only perform well on reasoning tasks because they memorise the dataset. For the first time, she showed that accuracy is linearly correlated with a term's occurrence rate in the pretraining corpus, an analysis OpenAI should have done in the first place!

We also speak with Sameer, who has been a pioneering force in machine learning interpretability for many years. He created LIME with Marco Ribeiro and was also heavily involved in the famous CheckList paper, among many others.

We also get into the metric obsession in the NLP world and whether metrics are one of the principal reasons why we are failing to make any progress in NLU.

[00:00:00] Impact of Pretraining Term Frequencies on Few-Shot Reasoning

[00:14:59] Metrics

[00:18:55] Definition of reasoning

[00:25:12] Metrics (again)

[00:28:52] On true believers 

[00:33:04] Sameer's work on model explainability / LIME

[00:36:58] Computational irreducibility

[00:41:07] ML DevOps and Checklist

[00:45:58] Future of ML devops

[00:49:34] Thinking about the future

Prof. Sameer Singh

https://sameersingh.org/

Yasaman Razeghi

https://yasamanrazeghi.com/

References:

Impact of Pretraining Term Frequencies on Few-Shot Reasoning [Razeghi et al. with Singh]

https://arxiv.org/pdf/2202.07206.pdf

Beyond Accuracy: Behavioral Testing of NLP Models with CheckList [Ribeiro et al. with Singh]

https://arxiv.org/pdf/2005.04118.pdf

“Why Should I Trust You?” Explaining the Predictions of Any Classifier (LIME) [Ribeiro et al. with Singh]

https://arxiv.org/abs/1602.04938

Tim interviewing LIME creator Marco Ribeiro in 2019

https://www.youtube.com/watch?v=6aUU-Ob4a8I

Tim's video on LIME/SHAP on his other channel

https://www.youtube.com/watch?v=jhopjN08lTM

Our interview with Christoph Molnar

https://www.youtube.com/watch?v=0LIACHcxpHU

Interpretable Machine Learning book by Christoph Molnar (@ChristophMolnar)

https://christophm.github.io/interpretable-ml-book/

Machine Teaching: A New Paradigm for Building Machine Learning Systems [Simard]

https://arxiv.org/abs/1707.06742

Whimsical notes on machine teaching

https://whimsical.com/machine-teaching-Ntke9EHHSR25yHnsypHnth

Gopher paper (DeepMind)

https://www.deepmind.com/blog/language-modelling-at-scale-gopher-ethical-considerations-and-retrieval

https://arxiv.org/pdf/2112.11446.pdf

EleutherAI

https://www.eleuther.ai/

https://github.com/kingoflolz/mesh-transformer-jax/

https://pile.eleuther.ai/

A Theory of Universal Artificial Intelligence based on Algorithmic Complexity [Hutter]

https://arxiv.org/pdf/cs/0004001.pdf
