101. Ayanna Howard - AI and the trust problem
Publisher | The TDS team
Media Type | audio
Categories Via RSS | Technology
Publication Date | Nov 03, 2021
Episode Duration | 00:53:15

Over the last two years, the capabilities of AI systems have exploded. AlphaFold 2, MuZero, CLIP, DALL·E, GPT-3 and many other models have extended the reach of AI to new problem classes. There’s a lot to be excited about.

But as we’ve seen in other episodes of the podcast, there’s a lot more to getting value from an AI system than ramping up its capabilities. Increasingly, one of the missing ingredients is trust. You can build all the powerful AIs you want, but if no one trusts their output, or if people trust it when they shouldn’t, you can end up doing more harm than good.

That’s why we invited Ayanna Howard on the podcast. Ayanna is a roboticist, entrepreneur and Dean of the College of Engineering at Ohio State University, where she focuses her research on human-machine interactions and the factors that go into building human trust in AI systems. She joined me to talk about her research, its applications in medicine and education, and the future of human-machine trust.

---

Intro music:

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

- Link to Track: https://youtu.be/d8Y2sKIgFWc

---

Chapters:

- 0:00 Intro

- 1:30 Ayanna’s background

- 6:10 The interpretability of neural networks

- 12:40 The domain of human-machine interaction

- 17:00 The issue of preference

- 20:50 Gell-Mann amnesia (newspaper amnesia)

- 26:35 Assessing a person’s persuadability

- 31:40 Doctors and new technology

- 36:00 Responsibility and accountability

- 43:15 The social pressure aspect

- 47:15 Is Ayanna optimistic?

- 53:00 Wrap-up
