SDS 283: Getting The Most Out of Data With Gradient Boosting
Media Type | audio
Categories Via RSS | Business
Publication Date | Jul 31, 2019
Episode Duration | 01:02:25
In this episode of the SuperDataScience Podcast, I chat with Andreas Mueller, one of the key people behind the Python package scikit-learn. You will learn about gradient boosting algorithms, including XGBoost, LightGBM, and HistGradientBoosting. You will hear Andreas's approach to solving problems: which machine learning algorithms he prefers to apply to a given data science challenge, in which order, and why. You will also hear about problems with Kaggle competitions, and you will find out the four key questions that Andreas recommends asking when you have a data challenge in front of you. You will learn about his 95% rule for creating models and about creating success in business enterprises with the help of machine learning. Finally, you will also learn about the Data Science Institute at Columbia University. If you enjoyed this episode, check out the show notes, resources, and more at www.superdatascience.com/283
