The fields of AI bias and AI fairness are still very young. And just like most young technical fields, they’re dominated by theoretical discussions: researchers argue over what words like “privacy” and “fairness” mean, but don’t do much in the way of applying these definitions to real-world problems.
Slowly but surely, though, this is changing, and government oversight has had a big role to play in that process. Laws like GDPR — passed by the European Union in 2016 — are starting to impose concrete requirements on companies that want to use consumer data, or build AI systems with it. There are pros and cons to legislating machine learning, but one thing’s for sure: there’s no looking back. At this point, it’s clear that government-endorsed definitions of “bias” and “fairness” in AI systems are going to be applied to companies (and therefore to consumers), whether those definitions are well-developed and thoughtful or not.
Keeping up with the philosophy of AI is a full-time job for most, but actually applying that philosophy to real-world corporate data is its own additional challenge. My guest for this episode of the podcast is doing just that: Nicolai Baldin is a former Cambridge machine learning researcher and now the founder and CEO of Synthesized, a startup that specializes in helping companies apply privacy, AI fairness, and bias best practices to their data. Nicolai is one of relatively few people working on concrete problems in these areas, and he has a unique perspective on the space as a result.