Jae Lee is the cofounder and CEO of Twelve Labs, where he is building video understanding infrastructure to help developers build programs that can see, hear, and understand the world. He was previously the Lead Data Scientist at the Ministry of National Defense in South Korea. He has a bachelor's degree in computer science from UC Berkeley.

In this episode, we cover a range of topics including:
- What is multimodal video understanding
- State of play in multimodal video
- The founding of Twelve Labs
- The launch of Pegasus-1
- Four core principles: Efficient Long-form Video Processing, Multimodal Understanding, Video-native Embeddings, Deep Alignment between Video and Language Embeddings
- Differences between multimodal and traditional video analysis
- In what ways can malicious actors misuse this technology?
- The future of multimodal video understanding

Jae's favorite books:
- Deep Learning (Authors: Ian Goodfellow, Yoshua Bengio, Aaron Courville)
- The Giving Tree (Author: Shel Silverstein)

--------

Where to find Prateek Joshi:
Newsletter: https://prateekjoshi.substack.com
Website: https://prateekj.com
LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19
Twitter: https://twitter.com/prateekvjoshi