Deep neural networks are undeniably effective. They rely on so many parameters that they are appropriately described as “black boxes”.

While black boxes lack desirable properties like interpretability and explainability, in some cases their accuracy makes them incredibly useful.

But does achieving “usefulness” require a black box? Can we be sure an equally accurate but simpler solution does not exist?
Cynthia Rudin helps us answer that question. We discuss her recent paper with co-author Joanna Radin titled (spoiler warning)…