Most researchers agree we’ll eventually reach a point where our AI systems begin to exceed human performance at virtually every economically valuable task, including the ability to generalize from what they’ve learned to take on new tasks they haven’t seen before. These artificial general intelligences (AGIs) would in all likelihood have transformative effects on our economies, our societies and even our species.
No one knows what these effects will be, or when AGI systems capable of bringing them about will be developed. But that doesn’t mean these things aren’t worth predicting or estimating. The more we know about how much time we have to develop robust solutions to important AI ethics, safety and policy problems, the more clearly we can think about which problems should be receiving our time and attention today.
That’s the thesis that motivates a lot of work on AI forecasting: the attempt to predict key milestones in AI development on the path to AGI and superhuman artificial intelligence. It’s still early days for this space, but it’s received attention from an increasing number of AI safety and AI capabilities researchers. One of those researchers is Owain Evans, whose work at Oxford University’s Future of Humanity Institute focuses on techniques for learning about human beliefs, preferences and values by observing human behavior or interacting with humans. Owain joined me for this episode of the podcast to talk about AI forecasting, the problem of inferring human values, and the ecosystem of research organizations that supports this type of research.