One of the consequences of living in a world where we have every kind of data we could possibly want at our fingertips is that we have far more data available to us than we could possibly review. Wondering which university program you should enter? You could visit any one of a hundred thousand websites that each offer helpful insights, or take a look at ten thousand different program options on hundreds of different universities’ websites. The only snag is that, by the time you finish that review, you probably could have graduated.
Recommender systems allow us to take controlled sips from the information fire hose that’s pointed our way every day of the week, by highlighting a small number of particularly relevant or valuable items from a vast catalog. And while they’re incredibly valuable pieces of technology, they also have some serious ethical failure modes — many of which arise because companies tend to build recommenders to reflect user feedback, without thinking of the broader implications these systems have for society and human civilization.
Those implications are significant, and growing fast. Recommender algorithms deployed by Twitter and Google regularly shape public opinion on the key moral issues of our time — sometimes intentionally, and sometimes even by accident. So rather than allowing society to be reshaped in the image of these powerful algorithms, perhaps it’s time we asked some big questions about the kind of world we want to live in, and worked backward to figure out what our answers would imply for the way we evaluate recommendation engines.
That’s exactly why I wanted to speak with Silvia Milano, my guest for this episode of the podcast. Silvia is an expert on the ethics of recommender systems, and a researcher at Oxford’s Future of Humanity Institute and at the Oxford Internet Institute, where she’s been involved in work aimed at better understanding the hidden impact of recommendation algorithms, and what can be done to mitigate their more negative effects. Our conversation led us to consider complex questions, including the definition of identity, the human right to self-determination, and the interaction of governments with technology companies.