As AI systems have become more powerful, they’ve been deployed to tackle an increasing number of problems.
Take computer vision. Less than a decade ago, one of the most advanced applications of computer vision algorithms was to classify hand-written digits on mail. And yet today, computer vision is being applied to everything from self-driving cars to facial recognition and cancer diagnostics.
Practically useful AI systems have now firmly moved from “what if?” territory to “what now?” territory. And as more and more of our lives are run by algorithms, an increasing number of researchers from domains outside computer science and engineering are starting to take notice. Most notably among these are philosophers, many of whom are concerned about the ethical implications of outsourcing our decision-making to machines whose reasoning we often can’t understand or even interpret.
One of the most important voices in the world of AI ethics has been that of Dr Annette Zimmermann, a Technology & Human Rights Fellow at the Carr Center for Human Rights Policy at Harvard University, and a Lecturer in Philosophy at the University of York. Much of Annette's work explores the overlap between algorithms, society, and governance, and I had the chance to sit down with her to discuss her views on bias in machine learning, algorithmic fairness, and the big picture of AI ethics.