Learning Visiolinguistic Representations with ViLBERT w/ Stefan Lee - #358
Publisher | Sam Charrington
Media Type | audio
Categories Via RSS | Tech News, Technology
Publication Date | Mar 18, 2020
Episode Duration | 00:27:33
Today we’re joined by Stefan Lee, an assistant professor at Oregon State University. In our conversation, we focus on his paper ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. We discuss the development and training process for this model, how the BERT pretraining setup was adapted to incorporate visual information alongside text, and where this research leads from the perspective of integrating vision and language tasks.
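For listeners curious about the core idea the episode touches on, the sketch below illustrates (in rough, assumed form, not the authors' released code) how ViLBERT-style co-attention lets a text stream and an image-region stream condition on each other: each stream's queries attend over the other stream's keys and values. The module name, dimensions, and region counts here are illustrative assumptions.

```python
# Minimal, illustrative sketch of a ViLBERT-style co-attention block.
# NOT the authors' implementation; names and sizes are assumptions.
import torch
import torch.nn as nn


class CoAttentionBlock(nn.Module):
    def __init__(self, dim: int = 768, num_heads: int = 12):
        super().__init__()
        # Each stream attends to the *other* stream's representations.
        self.txt_attends_img = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.img_attends_txt = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.txt_norm = nn.LayerNorm(dim)
        self.img_norm = nn.LayerNorm(dim)

    def forward(self, txt: torch.Tensor, img: torch.Tensor):
        # txt: (batch, num_tokens, dim)  BERT-style token features
        # img: (batch, num_regions, dim) projected image-region features
        txt_out, _ = self.txt_attends_img(query=txt, key=img, value=img)
        img_out, _ = self.img_attends_txt(query=img, key=txt, value=txt)
        return self.txt_norm(txt + txt_out), self.img_norm(img + img_out)


# Example: 20 text tokens and 36 detected image regions per sample.
block = CoAttentionBlock()
txt = torch.randn(2, 20, 768)
img = torch.randn(2, 36, 768)
txt, img = block(txt, img)
```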
