Future of Computing with John Hennessy Holiday Repeat
Media Type | audio
Categories Via RSS | Technology
Publication Date | Nov 26, 2019
Episode Duration | 00:56:43

Originally published June 7, 2018

Moore’s Law states that the number of transistors in a dense integrated circuit doubles about every two years. Moore’s Law is less like a “law” and more like an observation or a prediction.
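To make the doubling rule concrete, here is a minimal sketch (ours, not from the episode) of how the projection compounds; the Intel 4004 baseline of roughly 2,300 transistors in 1971 is a well-known historical figure used only to anchor the arithmetic:

```python
# Back-of-the-envelope Moore's Law projection: one doubling every two years.
# The Intel 4004 (1971, ~2,300 transistors) is an illustrative baseline.

def moores_law_projection(base_count: int, base_year: int, target_year: int) -> float:
    """Transistor count implied by one doubling every two years."""
    doublings = (target_year - base_year) / 2
    return base_count * 2 ** doublings

for year in (1971, 1991, 2011, 2018):
    # e.g. 2018 is 47 years (23.5 doublings) past 1971: roughly 27 billion
    print(year, f"{moores_law_projection(2300, 1971, year):,.0f}")
```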

Moore’s Law is ending. We can no longer fit an increasing number of transistors into the same amount of space at a highly predictable rate. Dennard scaling is also coming to an end. Dennard scaling is the observation that as transistors get smaller, power density stays constant.
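As an aside (our sketch, not a formula from the episode), the arithmetic behind Dennard scaling follows from the standard dynamic-power model P = C × V² × f. Shrinking linear dimensions by a factor k lets capacitance and voltage fall by k while frequency rises by k, so power per unit area stays flat:

```python
# Dennard scaling sketch: why power density stayed constant as transistors shrank.
# Assumes the classic dynamic-power model P = C * V^2 * f and ideal scaling factors.

def power_density_ratio(k: float) -> float:
    """Power density after a 1/k feature shrink, relative to before (ideally 1.0)."""
    c, v, f = 1 / k, 1 / k, k            # capacitance and voltage fall, frequency rises
    power_per_transistor = c * v**2 * f  # = 1 / k^2
    transistors_per_area = k**2          # the shrink packs k^2 transistors per unit area
    return power_per_transistor * transistors_per_area

print(power_density_ratio(1.4))  # -> 1.0: constant power density
```

While Dennard scaling held, each generation delivered more, faster transistors within the same power budget; as it breaks down, more transistors means more heat, which is why power consumption now dominates hardware design.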

These changes in hardware trends have downstream effects for software engineers. Most importantly, power consumption becomes a first-order concern.

As a software engineer, how does power consumption affect you? It means that inefficient software will run more slowly, or cost more money, than our past expectations would suggest. Software engineers writing code 15 years ago could comfortably assume that hardware advances would make their code significantly cheaper to run over time; today the story is more complicated.

Why is Moore’s Law ending? And what kinds of predictable advances in technology can we still expect?

John Hennessy is the chairman of Alphabet. In 2017, he won the Turing Award (along with David Patterson) for his work on RISC (Reduced Instruction Set Computer) architecture. From 2000 to 2016, he was the president of Stanford University.

John joins the show to explore the future of computing. While we may not have the predictable benefits of Moore’s Law and Dennard scaling, we now have machine learning. It is hard to plot the advances of machine learning on any one chart (as we explored in a recent episode with OpenAI). But we can say empirically that machine learning is working quite well in production.

If machine learning offers us such strong advances in computing, how can we change our hardware design process to make machine learning more efficient?

As machine learning training workloads eat up more resources in a data center, engineers are developing domain-specific chips optimized for those machine learning workloads. The Tensor Processing Unit (TPU) from Google is one such example. John mentioned that chips could become even more specialized within the domain of machine learning. You could imagine a chip that is specifically designed for an LSTM machine learning model.
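To illustrate why such specialization pays off (our sketch, not code from the episode), the NumPy snippet below steps one LSTM cell. Nearly all the work is a handful of dense matrix multiplies, precisely the operation that a matrix-oriented chip like the TPU accelerates; the shapes and names here are our own simplification:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step; W, U, b stack the input/forget/output/cell parameters."""
    z = x @ W + h @ U + b                     # the dominant cost: dense matrix multiplies
    i, f, o, g = np.split(z, 4, axis=-1)      # gate pre-activations
    i, f, o = (1 / (1 + np.exp(-a)) for a in (i, f, o))  # sigmoid gates
    c_new = f * c + i * np.tanh(g)            # updated cell state
    h_new = o * np.tanh(c_new)                # updated hidden state
    return h_new, c_new

# Illustrative shapes: input size 128, hidden size 256.
rng = np.random.default_rng(0)
d_in, d_h = 128, 256
x = rng.standard_normal(d_in)
h, c = np.zeros(d_h), np.zeros(d_h)
W = rng.standard_normal((d_in, 4 * d_h))
U = rng.standard_normal((d_h, 4 * d_h))
b = np.zeros(4 * d_h)
h, c = lstm_step(x, h, c, W, U, b)  # virtually all FLOPs are in x @ W and h @ U
```

A chip that hard-wires these fixed matrix shapes and gate functions can skip the generality, and the energy cost, of a CPU or even a GPU, which is the kind of further specialization John describes.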

There are other domains where we could see specialized chips: drones, self-driving cars, wearable computers. In this episode, John describes his perspective on the future of computing and offers a framework for how engineers can adapt to that future.

