Model Interpretation (and Trust Issues)
Publication Date: Apr 25, 2016
Episode Duration: 00:16:57
Machine learning algorithms can be black boxes: inputs go in, outputs come out, and what happens in the middle is anybody's guess. But understanding how a model arrives at an answer is critical for interpreting the model, and for knowing whether it's doing something reasonable (one could even say... trustworthy). We'll talk about a new algorithm called LIME that seeks to make any model more understandable and interpretable.
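For a taste of the idea, here is a minimal sketch of explaining a single prediction with the lime Python package (linked below), using a scikit-learn random forest as the stand-in black-box model. The iris dataset is just an illustrative choice, and the exact API may differ across lime versions.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train an opaque model: a random forest on the iris data.
data = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# LIME perturbs an instance, queries the model on the perturbed copies,
# and fits a simple linear model locally to approximate its behavior.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction. The model is touched only through predict_proba,
# which is what makes LIME model-agnostic.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # (feature, weight) pairs from the local model

The weights tell you which features pushed this one prediction toward or away from a class, locally, without requiring any access to the model's internals.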
Relevant Links:
http://arxiv.org/abs/1602.04938
https://github.com/marcotcr/lime/tree/master/lime