Behind the DeepSeek Hype, AI is Learning to Reason
Media Type | audio
Publication Date | Feb 20, 2025
Episode Duration | 00:31:34

When Chinese AI company DeepSeek announced they had built a model that could compete with OpenAI at a fraction of the cost, it sent shockwaves through the industry and roiled global markets. But amid all the noise around DeepSeek, there was a clear signal: machine reasoning is here and it's transforming AI.

In this episode, Aza sits down with CHT co-founder Randy Fernando to explore what happens when AI moves beyond pattern matching to actual reasoning. They unpack how these new models can not only learn from human knowledge but discover entirely new strategies we've never seen before – bringing unprecedented problem-solving potential but also unpredictable risks.

These capabilities are a step toward a critical threshold: the point when AI can accelerate its own development. With major labs racing to build self-improving systems, the crucial question isn't how fast we can go, but where we're trying to get to. How do we ensure this transformative technology serves human flourishing rather than undermining it?

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Clarification: In making the point that reasoning models excel at tasks for which there is a right or wrong answer, Randy referred to Chess, Go, and StarCraft as examples of games where a reasoning model would do well. However, this is only true on the basis of individual decisions within those games. None of these games has been “solved” in the game theory sense.

Correction: Aza mispronounced the name of the Go champion Lee Sedol, who was bested by Move 37.

RECOMMENDED MEDIA

Further reading on DeepSeek’s R1 and the market reaction 

Further reading on the debate about the actual cost of DeepSeek’s R1 model

The study that found training AIs to code also made them better writers 

More information on the AI coding company Cursor 

Further reading on Eric Schmidt’s threshold to “pull the plug” on AI

Further reading on Move 37

RECOMMENDED YUA EPISODES

The Self-Preserving Machine: Why AI Learns to Deceive 

This Moment in AI: How We Got Here and Where We’re Going 

Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn 

The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao 

DeepSeek's breakthrough in AI sent markets reeling. But behind the headlines lies a crucial shift: AI that can actually reason and think. As labs race toward self-improving AI, the real question isn't how fast we can go, but how to steer this power for the benefit of us all.
