Forging a Path to Ethical A.I.
Publisher | The Aspen Institute
Media Type | Audio
Podknife tags | Ideas, Society & Culture
Publication Date | Feb 22, 2024
Episode Duration | 00:55:33

It doesn’t look like we’re going to be able to put the generative artificial intelligence genie back in the bottle. But we might still be able to prevent some potential damage. Tools like Bard and ChatGPT are already being used in the workplace, educational settings, health care, scientific research, and all over social media. What kind of guardrails do we need to prevent bad actors from causing the worst imaginable outcomes? And who can put those protections in place and enforce them? A panel of A.I. experts from the 2023 Aspen Ideas Festival shares hopes and fears for this kind of technology, and discusses what can realistically be done by private, public and civil society sectors to keep it in check. Lila Ibrahim, COO of the Google A.I. company DeepMind, joins social science professor Alondra Nelson and IBM’s head of privacy and trust, Christina Montgomery, for a conversation about charting a path to ethical uses of A.I. CNBC tech journalist Deirdre Bosa moderates the conversation and takes audience questions. 

aspenideas.org
