Peter Norvig – Singularity Is in the Eye of the Beholder
Podcast | Gradient Dissent
Publisher | Lukas Biewald
Media Type | Audio
Categories Via RSS | Technology
Publication Date | Nov 20, 2020
Episode Duration | 00:47:11
We're thrilled to have Peter Norvig join us to talk about the evolution of deep learning, his industry-defining book, his work at Google, and what he thinks the future holds for machine learning research.

Peter Norvig is a Director of Research at Google; previously he directed Google's core search algorithms group. He is co-author of Artificial Intelligence: A Modern Approach, the leading textbook in the field, and co-teacher of an online Artificial Intelligence class that signed up 160,000 students. Prior to his work at Google, Norvig was NASA's chief computer scientist.

Peter's website: https://norvig.com/

Topics covered:
0:00 Singularity is in the eye of the beholder
0:32 Introduction
1:09 Project Euler
2:42 Advent of Code / pytudes
4:55 New sections in the new edition of his book
10:32 "The Unreasonable Effectiveness of Data" paper, 15 years later
14:44 What advice would you give to a young researcher?
16:03 Computing power in the evolution of deep learning
19:19 What's been surprising in the development of AI?
24:21 From AlphaGo to human-like intelligence
28:46 What in AI has been surprisingly hard or easy?
32:11 Synthetic data and language
35:16 Singularity is in the eye of the beholder
38:43 The future of Python in ML and why he used it in his book
43:00 Underrated topics in ML and bottlenecks in production

Visit our podcast homepage for transcripts and more episodes! https://www.wandb.com/podcast

Get our podcast on Apple, Spotify, and Google!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
Google: https://tiny.cc/GD_Google

We started Weights and Biases to build tools for machine learning practitioners because we care a lot about the impact machine learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they're working on. This process has been so much fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research: https://tiny.cc/wb-salon

Join our community of ML practitioners, where we host AMAs, share interesting projects, and meet other people working in deep learning: https://bit.ly/wb-slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
