32. Bahador Khaleghi - Explainable AI and AI interpretability
Publisher | The TDS team
Media Type | audio
Categories Via RSS | Technology
Publication Date | May 06, 2020
Episode Duration | 00:43:41

If I were to ask you to explain why you’re reading this blog post, you could answer in many different ways.

For example, you could tell me “it’s because I felt like it”, or “because my neurons fired in a specific way that led me to click on the link that was advertised to me”. Or you might go even deeper and relate your answer to the fundamental laws of quantum physics.

The point is, explanations need to be targeted to a certain level of abstraction in order to be effective.

That’s true in life, but it’s also true in machine learning, where explainable AI is receiving more and more attention as a way to ensure that models are working properly and making decisions we can understand. Understanding explainability and how to leverage it is becoming increasingly important, which is why I wanted to speak with Bahador Khaleghi, a data scientist at H2O.ai whose technical focus is on explainability and interpretability in machine learning.
