Data Engineering and MLOps for Neural Search with Fernando Rejon Barrera and Jakub Zavrel
Podcast |
MLOps Live
Publisher |
neptune.ai
Media Type |
audio
Categories Via RSS |
Technology
Publication Date |
Jul 06, 2022
Episode Duration |
00:50:51
Today, we’re joined by Fernando Rejon, Senior Infrastructure Engineer at Zeta Alpha Vector, and Jakub Zavrel, Founder and CEO of Zeta Alpha Vector. They discuss data engineering and MLOps for neural search applications, and how this innovation is pushing the boundaries of search engines. In this episode, they explore how they use modern deep learning techniques to build an AI research navigator at Zeta Alpha. They dive into the challenges of setting up MLOps systems for neural search applications, how to evaluate the quality of embedding-based retrieval, and the trade-offs, in both theory and practice, of neural information retrieval compared with classical information retrieval strategies. Additionally, they put into perspective the most important components you would need to build a proof-of-concept (POC) neural search application, and they examine neural search models in both the retrieval and ranking phases from the perspective of scalability and predictability, outlining the conditions under which state-of-the-art results can be obtained. Finally, they discuss the substantial work required to build and deploy neural search applications, which typically demands more compute, such as GPUs rather than CPUs, to get desirable results.
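As a rough illustration of the kind of POC mentioned above, a minimal embedding-based retrieval setup embeds documents and queries with the same bi-encoder and ranks documents by cosine similarity. This sketch is ours, not from the episode; the sentence-transformers library and the all-MiniLM-L6-v2 model are illustrative assumptions, not necessarily what Zeta Alpha uses. At production scale, the brute-force dot product would be replaced by an approximate nearest-neighbor index, typically followed by a heavier re-ranking stage.

```python
# Minimal embedding-based retrieval sketch: embed documents once, embed the
# query at search time, and rank by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Dense retrieval maps queries and documents into a shared vector space.",
    "BM25 is a classical lexical ranking function for information retrieval.",
    "Cross-encoders re-rank a small candidate set with higher precision.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, CPU-friendly bi-encoder

# Normalize embeddings so the dot product equals cosine similarity.
doc_emb = model.encode(documents, normalize_embeddings=True)

def search(query: str, top_k: int = 2):
    query_emb = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_emb @ query_emb           # cosine similarity per document
    best = np.argsort(-scores)[:top_k]     # indices of the top-k documents
    return [(documents[i], float(scores[i])) for i in best]

print(search("how does neural search differ from keyword search?"))
```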
Subscribe to our YouTube channel to watch this episode!
Learn more about Fernando and Jakub:
If you enjoyed this episode, then please either:
Register for the new live event
To learn more, visit Neptune.ai
Previous guests include: Andy McMahon of NatWest Group, Jacopo Tagliabue of Coveo, Adam Sroka of Origami, Amber Roberts of Arize AI, Michal Tadeusiak of deepsense.ai, Danny Leybzon of WhyLabs, Kyle Morris of Banana ML, Federico Bianchi of Università Bocconi, Mateusz Opala of Brainly, Kuba Cieslik of tuul.ai, Adam Becker of Telepath.io and Fernando Rejon & Jakub Zavrel of Zeta Alpha Vector. Check out our three most downloaded episodes:
MLOps Live is handcrafted by our friends over at: fame.so
