#92 - SARA HOOKER - Fairness, Interpretability, Language Models
Media Type | audio
Categories Via RSS | Technology
Publication Date | Dec 23, 2022
Episode Duration | 00:51:31

Support us! https://www.patreon.com/mlst

Sara Hooker is an exceptionally talented and accomplished leader and research scientist in the field of machine learning. She is the founder of Cohere For AI, a non-profit research lab that seeks to solve complex machine learning problems. She is passionate about creating more points of entry into machine learning research and has dedicated her efforts to understanding how progress in the field can be translated into reliable and accessible machine learning in the real world.

Sara is also the co-founder of the Trustworthy ML Initiative, a forum and seminar series on trustworthy machine learning. She is on the advisory board of Patterns and is an active member of the MLC research group, which focuses on making participation in machine learning research more accessible.

Before starting Cohere For AI, Sara worked as a research scientist at Google Brain. She has written several influential research papers, including "The Hardware Lottery", "The Low-Resource Double Bind: An Empirical Study of Pruning for Low-Resource Machine Translation", "Moving Beyond “Algorithmic Bias is a Data Problem”" and "Characterizing and Mitigating Bias in Compact Models". 

In addition to her research work, Sara is also the founder of the local Bay Area non-profit Delta Analytics, which works with non-profits and communities all over the world to build technical capacity and empower others to use data. She regularly gives tutorials on machine learning fundamentals, interpretability, model compression and deep neural networks and is dedicated to collaborating with independent researchers around the world.

Sara Hooker is best known for her paper introducing the concept of the 'hardware lottery', in which the success of a research idea is determined not by its inherent superiority but by its compatibility with available software and hardware. She argued that choices about software and hardware played a substantial role in deciding the outcomes of early computer science history, and that with the increasing heterogeneity of the hardware landscape, gains from advances in computing may become increasingly disparate. Sara proposed that an interim goal should be to create better feedback mechanisms for researchers to understand how their algorithms interact with the hardware they use. She suggested that domain-specific languages, auto-tuning of algorithmic parameters, and better profiling tools could help alleviate this issue, as well as give researchers more informed opinions about how hardware and software should progress. Ultimately, Sara encouraged researchers to be mindful of the implications of the hardware lottery, as it could mean that progress on some research directions is further obstructed. If you want to learn more about that paper, watch our previous interview with Sara.

YT version: https://youtu.be/7oJui4eSCoY

MLST Discord: https://discord.gg/aNPkGUQtc5

TOC:

[00:00:00] Intro

[00:02:53] Interpretability / Fairness

[00:35:29] LLMs

Find Sara:

https://www.sarahooker.me/

https://twitter.com/sarahookr
