Why Responsible AI is Needed in Explainable AI Systems with Christoph Lütge of TUM
Publisher |
David Yakobovitch
Media Type |
audio
Categories Via RSS |
Courses
Education
News
Tech News
Technology
Publication Date |
Mar 29, 2020
Episode Duration |
00:32:16


Podcast: Play in new window | Download

Subscribe: Google Podcasts | Spotify | Stitcher | TuneIn | RSS

Christoph Lütge studied business informatics and philosophy in Braunschweig, Paris, Göttingen and Berlin. He was a visiting scholar at the University of Pittsburgh (1997) and research fellow at the University of California, San Diego (1998). After taking his PhD in philosophy in 1999, Lütge held a position as assistant professor at the Chair for Philosophy and Economics of the University of Munich (LMU) from 1999 to 2007, where he also took his habilitation in 2005. He was acting professor at Witten/Herdecke University (2007-2008) and at Braunschweig University of Technology (2008-2010). 

Since 2010, he has held the Peter Löscher Chair in Business Ethics at the Technical University of Munich. In 2019, Lütge was appointed director of the new TUM Institute for Ethics in Artificial Intelligence. He has held visiting positions in Venice (2003), Kyoto (2015), Taipei (2015), at Harvard (2019) and at the University of Stockholm (2020). In 2020, he was appointed Distinguished Visiting Professor at the University of Tokyo. His main areas of interest are the ethics of AI, the ethics of digitization, business ethics, the foundations of ethics, and the philosophy of the social sciences and economics. 

His major publications include "Business Ethics: An Economically Informed Perspective" (Oxford University Press, 2021, with Matthias Uhl), "An Introduction to Ethics in Robotics and AI" (Springer, 2021, with coauthors) and "The Ethics of Competition" (Elgar, 2019; Japanese edition with Keio University Press, 2020).

He has been a member of the Ethics Commission on Automated and Connected Driving of the German Federal Ministry of Transport and Digital Infrastructure (2016-17), as well as of the European AI Ethics initiative AI4People (2018-). He has also done consulting work for the Singapore Economic Development Board and the Canadian Transport Commission.

Episode Links:  

Christoph Lütge’s LinkedIn: https://www.linkedin.com/in/christophluetge/ 

Christoph Lütge’s Twitter: @chluetge 

Christoph Lütge’s Website: https://www.gov.tum.de/en/wirtschaftsethik/start/ 

Podcast Details: 

Podcast website: https://www.humainpodcast.com/ 

Apple Podcasts:  https://podcasts.apple.com/us/podcast/humain-podcast-artificial-intelligence-data-science/id1452117009 

Spotify:  https://open.spotify.com/show/6tXysq5TzHXvttWtJhmRpS 

RSS: https://feeds.redcircle.com/99113f24-2bd1-4332-8cd0-32e0556c8bc9 

YouTube Full Episodes: https://www.youtube.com/channel/UCxvclFvpPvFM9_RxcNg1rag 

YouTube Clips:  https://www.youtube.com/channel/UCxvclFvpPvFM9_RxcNg1rag/videos 

Support and Social Media:  

– Check out the sponsors above; it’s the best way to support this podcast

– Support on Patreon: https://www.patreon.com/humain/creators   

– Twitter:  https://twitter.com/dyakobovitch 

– Instagram: https://www.instagram.com/humainpodcast/ 

– LinkedIn: https://www.linkedin.com/in/davidyakobovitch/  

– Facebook: https://www.facebook.com/HumainPodcast/ 

– HumAIn Website Articles: https://www.humainpodcast.com/blog/ 

Outline: 

Here are the timestamps for the episode: 

(00:00) – Introduction

(02:25) – At the Future Forum, we developed the idea of forming a kind of global network of centers for AI ethics. And at the end of this forum, we launched a concrete project, the Global AI Consortium, which we are now taking forward in order to form a kind of global alliance of centers working in this field.

(04:06) – It's not just an academic thing. It's not just a traditional research institute where you do research behind closed doors. You have to work with industry, with civil society and with politics, and that's the only way to take these issues forward. 

(06:54) – More of these systems are becoming visible to the public, and that's why there's also this discussion about AI and the ethical as well as governance aspects of it. Certainly the trend now, and already for years, has been the machine learning and deep learning aspect of AI, which some of the more conservative countries still refuse to call real AI. For a long time, the idea was that there would be more robot-like systems out there in the world doing certain things. But this is the major trend. And of course the implementation into vehicles, and probably also in the field of health. I would say these are the most important trends for the near future.

(09:33) – AI systems can both speed up a lot of processes and create entirely new ones, or, let's say, connect data. They will provide a lot of new input for doctors. And so we are, and will be more and more, at a point where we can say it's not responsible anymore not to use AI.

(12:10) – We have these different levels of autonomous driving: assisted driving, highly automated driving and fully automated driving. So what we are witnessing now is a progression through these levels. We need to get beyond the current level, to the one where the company is liable during the time the car is in control, not the driver.

(15:33) – We need to have robust software which must be able to drive in difficult, maybe not the most extreme, conditions; if we want to drive under any conditions, that will be difficult. And of course the car must be able to deal with, let's say, rain, with hail, with snow, at least light snow maybe. And that can pose a number of difficulties, different ones around the globe as well.

(17:05) – We presented our first guidelines for the ethics of AI in late 2018 in the European Parliament. And we came up with these five ethical principles for AI, which are beneficence, non-maleficence, justice and autonomy. And while these four are quite standard for ethics, the fifth one is quite interesting: the explainability criterion. Then we presented another paper on AI governance issues just recently, last November; this was about how companies and states can interact on deriving rules and governance for these systems.

(20:48) – There are few people who actually have the expertise in ethics. I'm one of the few in there, and it will be quite interesting to see how this process works out, because, ultimately, we will need to develop international standards for these AVs.

(23:03) – Ethics is quite a fuzzy term. It has lots of connotations and, for some people, it's about personal morality, and that's not really what we mean. We are aiming at standards or guidelines, rules which are not always legal ones, though they might be. So we found it better to use the term responsible AI. [The conference is] not just the typical academic research conference, but one where we plan to interact with other stakeholders from industry, from civil society and from politics as well.

(24:37) – We invite abstracts on many areas of AI and ethics in a general sense; visit our webpage to find a lot of potential topics, whether it's AI in the healthcare sector, AI and the SDGs, AI policy, AI and diversity and education, and many others.

(25:47) – The engineering curriculum should be enriched with elements from the humanities and social sciences, not least of which is ethics. Now, with the focus on AI, it becomes clearer that when working on AI it will not be enough to just look at it from a purely technical point of view. It needs to generate the necessary trust; otherwise people will just not use these systems. And this is something that engineers should be familiar with: engineers and computer scientists, and people from technology.

(28:23) – One of the key challenges will be how we manage to standardize explainability, at least to some extent. Every step within the system must be transparent; it must be clear, and you must be able to track it down. Of course, as anyone familiar with the technology knows, there's no way to do that completely. So we need to find some kind of middle way. And there is this research field of explainable AI in computer science, and the challenge will be to implement these systems. 
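For listeners new to the explainable-AI field Lütge mentions, here is a purely illustrative sketch (not drawn from the episode) of one widely used post-hoc technique, permutation importance: it estimates how much a trained "black box" model relies on each input feature by shuffling that feature and measuring the resulting drop in performance. The dataset and model below are hypothetical stand-ins built with scikit-learn.

```python
# Illustrative sketch only: permutation importance, one common post-hoc
# explainability technique. The synthetic data and model choice here are
# hypothetical stand-ins, not anything discussed in the episode.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular decision problem (e.g., loan screening).
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A "black box" model: accurate, but its internals are hard to inspect.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy;
# large drops flag the features the model actually relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Techniques like this give only a partial, after-the-fact view of a model's behavior, which is exactly the "middle way" tension the quote above describes: full step-by-step transparency is out of reach, but useful accountability is not.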

Advertising Inquiries: https://redcircle.com/brands

Privacy & Opt-Out: https://redcircle.com/privacy
Bias in AI is becoming a serious concern as algorithms produce unfair outcomes in many areas, including hiring, loan applications and autonomous vehicles. People increasingly expect AI to be accountable, and there are growing calls to develop standards and governance systems that create balance. Black-box models illustrate the core flaw: a system that cannot be scrutinized cannot be held to account. Opaque AI can also make consequential decisions in secret, with negative implications for people's lives, which is why responsible AI systems are needed. By integrating explainable AI into their models, businesses can make more accurate decisions, map patterns and optimize operations.

Listen in as I discuss why responsible AI is needed in explainable AI systems.

In this episode: Prof. Christoph Lütge, Director of the TUM Institute for Ethics in Artificial Intelligence (Germany)

This episode is brought to you by For The People. You can grab your copy of For the People on Amazon today, or visit SIMONCHADWICK.US to learn more about Simon.

Learn more about your ad choices at www.humainpodcast.com/advertise

You can support the HumAIn podcast and receive subscriber-only content at http://humainpodcast.com/newsletter

