Watermarking Large Language Models to Fight Plagiarism with Tom Goldstein - 621
Publication Date: Mar 20, 2023
Episode Duration: 00:51:27
Today we’re joined by Tom Goldstein, an associate professor at the University of Maryland. Tom’s research sits at the intersection of ML and optimization, and he was previously featured in The New Yorker for his work on invisibility cloaks, clothing designed to evade object detectors. In our conversation, we focus on his more recent research on watermarking LLM output. We explore the motivations behind adding these watermarks, how they work, and the different ways a watermark could be deployed, as well as the political and economic incentives around the adoption of watermarking and future directions for that line of work. We also discuss Tom’s research into data leakage, particularly in Stable Diffusion models, work that is analogous to recent guest Nicholas Carlini’s research into LLM data extraction.