What makes a machine learning algorithm "superhuman"?
Publication Date: Feb 26, 2018
Episode Duration: 00:34:48
A few weeks ago, we podcasted about a neural network that was being touted as "better than doctors" in diagnosing pneumonia from chest x-rays, and how the underlying dataset used to train the algorithm raised some serious questions. We're back again this week, as the author of the original blog post pointed us toward what's happened since. All in all, there's a lot more clarity now around how the authors arrived at their original "better than doctors" claim, and a number of adjustments and improvements as the original result was de/re-constructed.
Anyway, there are a few things that are cool about this. First, it's a worthwhile follow-up to a popular recent episode. Second, it goes *inside* an analysis to see what things like imbalanced classes, outliers, and (possible) signal leakage can do to real science. And last, it raises a really interesting question in an age when computers are often claimed to be better than humans: what do those claims really mean?
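To give a flavor of why imbalanced classes can distort "better than humans" claims, here's a toy sketch. The numbers below are illustrative only, not taken from the CheXNet dataset: with few positive cases, a classifier that never flags pneumonia can still post a high accuracy score.

```python
# Toy illustration of class imbalance (made-up numbers, not the real dataset):
# 10% of x-rays are positive for pneumonia, 90% are negative.
labels = [1] * 10 + [0] * 90

# A trivial classifier that always predicts "no pneumonia".
predictions = [0] * 100

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
recall = sum(p == 1 and y == 1 for p, y in zip(predictions, labels)) / sum(labels)

print(f"accuracy: {accuracy:.0%}")  # high accuracy despite doing nothing useful
print(f"recall:   {recall:.0%}")    # catches none of the actual pneumonia cases
```

This is why metrics like recall, or comparisons on the same test set the doctors saw, matter more than a headline accuracy number.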
Relevant links:
https://lukeoakdenrayner.wordpress.com/2018/01/24/chexnet-an-in-depth-review/