Nell Watson: How To Teach AI Human Values
Publisher | David Yakobovitch
Media Type | audio
Categories Via RSS | Courses, Education, News, Tech News, Technology
Publication Date | Nov 17, 2021
Episode Duration | 00:34:44



Nell Watson is an interdisciplinary researcher in emerging technologies such as machine vision and A.I. ethics. Her work focuses on protecting human rights and embedding ethics, safety, and the values of the human spirit into technologies such as Artificial Intelligence. Nell serves as Chair and Vice-Chair, respectively, of the IEEE's ECPAIS Transparency Experts Focus Group and the P7001 Transparency of Autonomous Systems committee on A.I. Ethics & Safety, engineering credit-score-like mechanisms into A.I. to help safeguard algorithmic trust.

She serves as an Executive Consultant on philosophical matters for Apple, as Senior Scientific Advisor to The Future Society, and as Senior Fellow to The Atlantic Council. She also holds Fellowships with the British Computer Society and the Royal Statistical Society, among others. Her public speaking has inspired audiences to work towards a brighter future at venues such as The World Bank, The United Nations General Assembly, and The Royal Society.

Episode Links:  

Nell Watson’s LinkedIn: https://www.linkedin.com/in/nellwatson/ 

Nell Watson’s Twitter: https://twitter.com/NellWatson 

Nell Watson’s Website: https://www.nellwatson.com/ 

Podcast Details: 

Podcast website: https://www.humainpodcast.com 

Apple Podcasts: https://podcasts.apple.com/us/podcast/humain-podcast-artificial-intelligence-data-science/id1452117009 

Spotify: https://open.spotify.com/show/6tXysq5TzHXvttWtJhmRpS 

RSS: https://feeds.redcircle.com/99113f24-2bd1-4332-8cd0-32e0556c8bc9 

YouTube Full Episodes: https://www.youtube.com/channel/UCxvclFvpPvFM9_RxcNg1rag 

YouTube Clips: https://www.youtube.com/channel/UCxvclFvpPvFM9_RxcNg1rag/videos 

Support and Social Media:  

– Check out the sponsors above; it's the best way to support this podcast

– Support on Patreon: https://www.patreon.com/humain/creators 

– Twitter: https://twitter.com/dyakobovitch 

– Instagram: https://www.instagram.com/humainpodcast/ 

– LinkedIn: https://www.linkedin.com/in/davidyakobovitch/ 

– Facebook: https://www.facebook.com/HumainPodcast/ 

– HumAIn Website Articles: https://www.humainpodcast.com/blog/ 

Outline: 

Here are the timestamps for the episode: 

(2:57)- Even though the science of forensics and police work has changed so much in the last two centuries, principles are great, but it's very important that we create something actionable out of them: criteria with defined metrics, so that we can know whether we are achieving those principles and to what degree.

(3:25)- With that in mind, I've been working with teams at the IEEE Standards Association to create standards for transparency, which are a little bit traditional (a big, deep document upfront), working on many different levels for many different use cases and for different people, for example investigators or managers of organizations, et cetera.

(9:04)- Transparency is really the foundation of all other aspects of AI and ethics. We need to understand how an incident occurred, or how a system performs a function, in order to analyze how it might be biased, where there might be some malfunction, what might occur in a certain situation or scenario, or indeed who might be responsible for something having gone wrong. It is really the most basic element of protecting ourselves, our privacy, and our autonomy from these kinds of advanced algorithmic systems. There are many different elements that might influence these kinds of systems.

(26:35)- We're really coming to a Sputnik moment in AI. We've gotten used to the idea of talking to our embodied smart speakers and asking them about sports results or what tomorrow's weather is going to be. But they're not truly conversational.

(32:43)- Fundamentally, technology in a humane society is about putting the human first: putting human needs first, adapting systems to serve those needs and truly better the human condition, not sacrificing everything for the sake of efficiency, leaving a bit of slack, and ensuring that the costs to society or to the environment of a new innovation are properly taken into account.

Advertising Inquiries: https://redcircle.com/brands

Privacy & Opt-Out: https://redcircle.com/privacy
