#039 - Lena Voita - NLP
Media Type | audio
Categories Via RSS | Technology
Publication Date | Jan 23, 2021
Episode Duration | 01:58:21

Lena Voita is a Ph.D. student at the University of Edinburgh and the University of Amsterdam. Previously, she was a research scientist at Yandex Research and worked closely with the Yandex Translate team. She still teaches NLP at the Yandex School of Data Analysis. She has created an exciting new NLP course on her website lena-voita.github.io which you folks need to check out! She has one of the best-presented blogs we have ever seen, where she discusses her research in an easily digestible manner. Lena has been investigating many fascinating topics in machine learning and NLP. Today we are going to talk about three of her papers and the corresponding blog articles:

Source and Target Contributions to NMT Predictions -- Where she quantifies how much the source sentence and the target prefix each contribute to the predictions of neural machine translation models (a rough code sketch follows the links below).

https://arxiv.org/pdf/2010.10907.pdf

https://lena-voita.github.io/posts/source_target_contributions_to_nmt.html
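To make the idea above concrete, here is a minimal sketch of estimating the relative contribution of the source vs. the target prefix to one predicted token. It uses gradient norms with respect to the input embeddings as a simplified stand-in for the layer-wise relevance propagation used in the paper, and the `model.embed_src` / `model.embed_tgt` interface is hypothetical.

import torch

def source_vs_prefix_contribution(model, src_ids, tgt_ids, step):
    # Embed source and target prefix; keep gradients on the embeddings.
    src_emb = model.embed_src(src_ids).detach().requires_grad_(True)
    tgt_emb = model.embed_tgt(tgt_ids[:, :step]).detach().requires_grad_(True)

    logits = model(src_emb, tgt_emb)              # (batch, step, vocab)
    log_p = torch.log_softmax(logits[:, -1], dim=-1)
    score = log_p[0, tgt_ids[0, step]]            # log-prob of the actual next token
    score.backward()

    src_contrib = src_emb.grad.norm(p=1)          # attribution mass from the source
    tgt_contrib = tgt_emb.grad.norm(p=1)          # attribution mass from the prefix
    total = src_contrib + tgt_contrib
    # Normalised so the two contributions sum to 1, as in the paper's analysis.
    return (src_contrib / total).item(), (tgt_contrib / total).item()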

Information-Theoretic Probing with MDL -- Where Lena proposes evaluating probes by the minimum description length (an information-theoretic cousin of Kolmogorov complexity) of the labels given the representations, rather than by something basic like probe accuracy (sketched in code below the links).

https://arxiv.org/pdf/2003.12298.pdf

https://lena-voita.github.io/posts/mdl_probes.html
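A minimal sketch of the online (prequential) coding variant of MDL probing: train a probe on progressively larger portions of the data and sum the cost, in bits, of transmitting each next block of labels with the current probe. The data splits and the use of scikit-learn's logistic regression here are illustrative, not Lena's exact implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression

def online_codelength(X, y, n_classes, fractions=(0.001, 0.01, 0.1, 0.5, 1.0)):
    n = len(y)
    boundaries = [int(f * n) for f in fractions]
    # First block: encoded with a uniform code over the label set.
    # (Assumes the first block is non-empty and contains every class.)
    codelength = boundaries[0] * np.log2(n_classes)
    for start, end in zip(boundaries[:-1], boundaries[1:]):
        probe = LogisticRegression(max_iter=1000).fit(X[:start], y[:start])
        log_probs = probe.predict_log_proba(X[start:end])
        cols = np.searchsorted(probe.classes_, y[start:end])
        # Bits needed to transmit this block's labels given the representations.
        codelength += -log_probs[np.arange(end - start), cols].sum() / np.log(2)
    return codelength  # lower codelength = representations encode the labels better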

Evolution of Representations in the Transformer - Lena investigates how the representations of individual tokens evolve across the layers of Transformers trained with different objectives (MT, LM, MLM); see the sketch after the links below.

https://arxiv.org/abs/1909.01380

https://lena-voita.github.io/posts/emnlp19_evolution.html
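A minimal sketch of the kind of per-layer analysis involved: pull out a token's hidden state at every layer and measure how much it changes from layer to layer. It assumes the HuggingFace transformers library and a BERT-style MLM checkpoint, whereas the paper compares MT, LM and MLM models trained from scratch.

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

inputs = tokenizer("The cat sat on the mat", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).hidden_states   # tuple: (embeddings, layer 1, ..., layer N)

token_idx = 3                                # the token "sat" ([CLS] is index 0)
for layer in range(len(hidden) - 1):
    a, b = hidden[layer][0, token_idx], hidden[layer + 1][0, token_idx]
    sim = torch.cosine_similarity(a, b, dim=0).item()
    print(f"layer {layer} -> {layer + 1}: cosine similarity {sim:.3f}")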

Panel: Dr. Tim Scarfe, Yannic Kilcher, Sayak Paul

00:00:00 Kenneth Stanley / Why Greatness Cannot Be Planned, housekeeping

00:21:09 Kilcher intro

00:28:54 Hello Lena

00:29:21 Tim - Lena's NMT paper

00:35:26 Tim - Minimum Description Length / Probe paper

00:40:12 Tim - Evolution of representations

00:46:40 Lena's NLP course

00:49:18 The peppermint tea situation 

00:49:28 Main Show Kick Off 

00:50:22 Hallucination vs exposure bias 

00:53:04 Lena's focus on explaining the models, not SOTA chasing

00:56:34 Probes paper and NLP interpretability

01:02:18 Why standard probing doesn't work

01:12:12 Evolution of representations paper

01:23:53 BERTScore and the BERT Rediscovers the Classical NLP Pipeline paper

01:25:10 Is the shifting encoding context because of BERT's bidirectionality?

01:26:43 Objective defines which information we lose on input

01:27:59 How influential is the dataset?

01:29:42 Where is the community going wrong?

01:31:55 Thoughts on GOFAI/Understanding in NLP?

01:36:38 Lena's NLP course 

01:47:40 How to foster better learning / understanding

01:52:17 Lena's toolset and languages

01:54:12 Mathematics is all you need

01:56:03 Programming languages

voita.github.io/">https://lena-voita.github.io/

https://www.linkedin.com/in/elena-voita/

https://scholar.google.com/citations?user=EcN9o7kAAAAJ&hl=ja

https://twitter.com/lena_voita
