AI systems have the potential to provide great value, but also to cause great harm. Knowing how to build or use AI systems is simply not enough: you need to know how to build, use, and interact with these systems ethically and responsibly. You also need to understand that Trustworthy AI is a spectrum addressing societal, systemic, and technical concerns.
In this episode of the AI Today podcast, hosts Kathleen Walch and Ron Schmelzer give a high-level overview of the Explainable & Interpretable AI layer. Relying on black-box technology can be dangerous: without understandability, we don't have trust. To trust these systems, humans want accountability and explanation. We discuss what Explainable & Interpretable AI is and why it's important for AI systems. We also cover the main elements that need to be addressed in the Explainable & Interpretable AI layer, and the considerations and questions you need to address as you implement responsible AI.
Show Notes: