Yannic Kilcher is a PhD candidate at ETH Zurich researching deep learning, structured learning, and optimization for large and high-dimensional data. On his enormously popular Youtube channel, he produces videos breaking down recent ML papers.
Follow Yannic on Twitter: https://twitter.com/ykilcher
Check out Yannic's excellent Youtube channel: https://www.youtube.com/channel/UCZHmQk67mSJgfCCTn7xBfew
Listen to the ML Street Talk podcast: https://podcasts.apple.com/us/podcast/machine-learning-street-talk/id1510472996
Every Thursday I send out the most useful things I’ve learned, curated specifically for the busy machine learning engineer. Sign up here: http://bitly.com/mle-newsletter
Follow Charlie on Twitter: https://twitter.com/CharlieYouAI
Subscribe to ML Engineered: https://mlengineered.com/listen
Comments? Questions? Submit them here: http://bit.ly/mle-survey
Take the Giving What We Can Pledge: https://www.givingwhatwecan.org/
Timestamps:
02:40 Yannic Kilcher
07:05 Research for his PhD thesis and plans for the future
12:05 How he produces videos for his enormously popular Youtube channel
21:50 Yannic's research process: choosing what to read and how he reads for understanding
27:30 Why ML conference peer review is broken and what a better solution looks like
45:20 On the field's obsession with state of the art
48:30 Is deep learning the future of AI? Is attention all you need?
56:10 Is AI overhyped right now?
01:01:00 Community Questions
01:13:30 Yannic flips the script and asks me about what I do
01:25:30 Rapid fire questions
Links:
Yannic's amazing Youtube Channel
Yannic's Community Discord Channel
On the Measure of Intelligence: arXiv paper and Yannic's video series
How I Read a Paper: Facebook's DETR (Video Tutorial)
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (Paper Explained)