#110 Explainable AI Methods for Structured Data
Publisher | Felipe Flores
Media Type | audio
Categories Via RSS | Technology
Publication Date | May 15, 2020
Episode Duration | 00:37:06

During this special episode, Felipe gives a presentation on explainable AI methods for structured data. First, Felipe talks about opening the black box. Algorithms can be both sexist and racist, even at massive companies like Google and Amazon. Removing bias in AI is a difficult problem, but there are ways to overcome it. Where does the bias come from? The dirty secret is that the data is biased. The algorithm doesn’t decide to be biased; it learns bias from the data. In reality, AI holds a mirror up to society, which has inherent sexism and racism. AI is a tool that can help us eradicate these underlying issues, so no one should be attacking the people who built the algorithms.

The data is a representation of the world, and we use explainability methods to interpret what is happening inside the algorithms. Models fall into two groups: inherently explainable algorithms and unexplainable, black-box algorithms. When we come across an unexplainable algorithm, we can apply an explainability framework to make it more interpretable. Then, Felipe explains decision trees using the Titanic. Start with the list of everyone who boarded the ship, split them by gender, and keep splitting by clear rules to see which groups of passengers survived. The resulting model summarizes all the data through those rules.
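
Below is a minimal sketch of what that Titanic decision tree might look like in code. The seaborn sample dataset, the chosen features, and the tree depth are assumptions for illustration only; they are not taken from the episode.

```python
# A minimal sketch of the Titanic decision-tree idea described in the episode.
# Assumes seaborn's bundled "titanic" dataset and scikit-learn are available.
import seaborn as sns
from sklearn.tree import DecisionTreeClassifier, export_text

# Load the passenger list and keep a few clear, interpretable features.
titanic = sns.load_dataset("titanic").dropna(subset=["age"])
X = titanic[["age", "fare", "pclass"]].assign(
    is_female=(titanic["sex"] == "female").astype(int)
)
y = titanic["survived"]

# A shallow tree: the first split usually lands on gender, mirroring the
# "separate them by gender" step in the talk.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Print the learned rules as readable if/else statements.
print(export_text(tree, feature_names=list(X.columns)))
```

The printed rules are the "good summary of all the data" in miniature: each branch is a human-readable condition on gender, class, age, or fare.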

Felipe often comes across people who say a predictive algorithm has to be 99% accurate or it is garbage. However, if you are predicting how a person will behave, accuracy will be lower because no one can perfectly predict how someone will act. Then, Felipe explains LIME: Local Interpretable Model-Agnostic Explanations. Because LIME is model-agnostic, you can use it to understand the model's prediction for an individual person regardless of the underlying approach. Stay tuned as Felipe explains the random forest.
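
To make the LIME and random-forest discussion concrete, here is a hedged sketch of explaining a single passenger's prediction. It assumes the open-source lime package (pip install lime); the feature set, class names, and parameters are illustrative choices, not Felipe's exact workflow.

```python
# Sketch: train a random forest on the Titanic data, then use LIME to explain
# the prediction for one individual passenger.
import seaborn as sns
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

titanic = sns.load_dataset("titanic").dropna(subset=["age"])
feature_names = ["age", "fare", "pclass", "is_female"]
X = titanic[["age", "fare", "pclass"]].assign(
    is_female=(titanic["sex"] == "female").astype(int)
).to_numpy()
y = titanic["survived"].to_numpy()

# A random forest is an ensemble of decision trees: each tree is explainable,
# but the aggregate is opaque, which makes it a natural target for LIME.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# LIME fits a simple local model around one passenger to show which features
# pushed the prediction toward "survived" or "died".
explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["died", "survived"],
    mode="classification",
)
explanation = explainer.explain_instance(X[0], forest.predict_proba, num_features=4)
print(explanation.as_list())
```

The output is a short list of feature conditions with signed weights for that one passenger, which is what "understanding the prediction of an individual person" looks like in practice.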

Enjoy the show!

We speak about:

[02:10] About Felipe 

[04:00] Opening the black box 

[07:20] Where does the bias come from? 

[11:20] Making more transparent algorithms  

[17:00] About decision trees 

[19:45] Using interpretable models 

[22:20] About LIME: Local Interpretable Model-Agnostic Explanations

[30:10] How to use a random forest

Resources:

#70 Making Black Box Models Explainable With Christoph Molnar – Interpretable Machine Learning Researcher

Quotes:

“The data represents the way that the world works.”

“With the rise of AI, we can choose how we want the world to be.”

“Sometimes, we have algorithms that are just 52% accurate.”

Thank you to our sponsors:

Fyrebox - Make Your Own Quiz!

We are RUBIX. - one of Australia’s leading pure data consulting companies delivering project outcomes for some of the world’s leading brands.

And as always, we appreciate your Reviews, Follows, Likes, Shares and Ratings. Thank you so much for listening. Enjoy the show!

--- Send in a voice message: https://podcasters.spotify.com/pod/show/datafuturology/message
