Udemy

Supervised Machine Learning in Python

Enroll Now
  • 236 Students
  • Updated 6/2021
  • Certificate Available
4.4
(29 Ratings)
CTgoodjobs selects quality courses to enhance professionals' competitiveness. By purchasing courses through links on our site, we may receive an affiliate commission.

Course Information

Registration period
Year-round Enrollment
Course Level
Study Mode
Duration
10 Hour(s) 59 Minute(s)
Language
English
Taught by
Gianluca Malato
Certificate
  • Available
  • *The delivery and distribution of the certificate are subject to the policies and arrangements of the course provider.
Rating
4.4
(29 Ratings)

Course Overview

Supervised Machine Learning in Python

A practical course on supervised machine learning using the Python programming language

In this practical course, we focus on supervised machine learning and how to apply it in the Python programming language.

Supervised machine learning is a branch of artificial intelligence whose goal is to create predictive models starting from a dataset. With proper optimization, a model becomes a mathematical representation of our data, letting us extract the information hidden inside the dataset and use it to make inferences and predictions.
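The course builds its examples on scikit-learn, so as an illustrative sketch (not taken from the course material), the core supervised workflow — fit a model on known input/output pairs, then predict on unseen data — might look like:

```python
# Minimal supervised-learning sketch: learn from labelled data,
# then predict on data the model has never seen.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic dataset: 200 samples, 5 features, a mostly linear signal.
X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression().fit(X_train, y_train)  # learn from the data
predictions = model.predict(X_test)               # infer on unseen data
print(round(model.score(X_test, y_test), 3))      # R^2 score, close to 1 here
```

Every estimator in scikit-learn follows this same fit/predict pattern, which is why the course can cover so many model families with one workflow.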

A very powerful use of supervised machine learning is the calculation of feature importance, which helps us better understand the information behind the data and allows us to reduce the dimensionality of our problem by keeping only the relevant features and discarding the useless ones. A common approach for calculating feature importance is the SHAP technique.
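SHAP itself lives in the separate `shap` package; as a simpler hedged sketch of the same idea, a tree ensemble's built-in importances can already expose which features carry signal:

```python
# Feature-importance sketch: only 2 of 6 features carry signal,
# and the fitted model's importances reveal which ones.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = 3 * X[:, 0] + 2 * X[:, 1]  # only features 0 and 1 are informative

forest = RandomForestRegressor(random_state=0).fit(X, y)
ranked = np.argsort(forest.feature_importances_)[::-1]  # most important first
print(sorted(ranked[:2].tolist()))  # -> [0, 1]
```

Features ranked near the bottom are candidates for removal, which is exactly what techniques like Recursive Feature Elimination automate.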

Finally, proper optimization of a model is possible using hyperparameter tuning techniques that rely on cross-validation.
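As a hedged sketch of what this looks like in scikit-learn, a grid search over Ridge regression's regularization strength, scored by 5-fold cross-validation:

```python
# Hyperparameter-tuning sketch: grid search over Ridge's alpha,
# with each candidate evaluated by 5-fold cross-validation.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=10, noise=5.0, random_state=0)

search = GridSearchCV(
    Ridge(),
    param_grid={"alpha": [0.01, 0.1, 1.0, 10.0]},  # candidate values
    cv=5,  # 5-fold cross-validation
    scoring="neg_root_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_)  # the alpha with the best cross-validated score
```

`RandomizedSearchCV` follows the same pattern but samples the grid instead of exhausting it, which is the random-search technique listed below.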

With this course, you are going to learn:

  1. What supervised machine learning is

  2. What overfitting and underfitting are and how to avoid them

  3. The difference between regression and classification models

  4. Linear models

    1. Linear regression

    2. Lasso regression

    3. Ridge regression

    4. Elastic Net regression

    5. Logistic regression

  5. Decision trees

  6. Naive Bayes

  7. K-nearest neighbors

  8. Support Vector Machines

    1. Linear SVM

    2. Non-linear SVM

  9. Feedforward neural networks

  10. Ensemble models

    1. Bias-variance tradeoff

    2. Bagging and Random Forest

    3. Boosting and Gradient Boosting

    4. Voting

    5. Stacking

  11. Performance metrics

    1. Regression

      1. Root Mean Squared Error

      2. Mean Absolute Error

      3. Mean Absolute Percentage Error

    2. Classification

      1. Confusion matrix

      2. Accuracy and balanced accuracy

      3. Precision

      4. Recall

      5. ROC Curve and the area under it

      6. Multi-class metrics

  12. Feature importance

    1. How to calculate feature importance according to a model

    2. SHAP technique for calculating feature importance according to every model

    3. Recursive Feature Elimination for dimensionality reduction

  13. Hyperparameter tuning

    1. k-fold cross-validation

    2. Grid search

    3. Random search
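The performance metrics in the list above all ship with scikit-learn; as a small hand-worked sketch (example values are my own, not from the course):

```python
# Metrics sketch: classification and regression metrics from the list,
# computed on tiny hand-written examples.
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             mean_absolute_error, mean_squared_error,
                             precision_score, recall_score)

# Classification: true vs predicted labels.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]
print(confusion_matrix(y_true, y_pred).tolist())  # [[2, 0], [1, 3]]
print(accuracy_score(y_true, y_pred))             # 5/6 correct
print(precision_score(y_true, y_pred))            # TP / (TP + FP) = 1.0
print(recall_score(y_true, y_pred))               # TP / (TP + FN) = 0.75

# Regression: RMSE and MAE on true vs predicted values.
y_true_r = [3.0, -0.5, 2.0]
y_pred_r = [2.5, 0.0, 2.0]
print(round(mean_squared_error(y_true_r, y_pred_r) ** 0.5, 3))  # RMSE
print(round(mean_absolute_error(y_true_r, y_pred_r), 3))        # MAE
```

Which metric to optimize depends on the problem: precision and recall trade off against each other, which is what the ROC curve in the list visualizes.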

All the lessons of this course start with a brief introduction and end with a practical example in the Python programming language, using its powerful scikit-learn library. The environment used is Jupyter, a standard in the data science industry. All the Jupyter notebooks are downloadable.

Course Content

  • 20 section(s)
  • 79 lecture(s)
  • Section 1 Introduction to supervised machine learning
  • Section 2 The tools used in this course
  • Section 3 Linear models
  • Section 4 Decision trees
  • Section 5 K-nearest neighbors
  • Section 6 Naive Bayes
  • Section 7 Support Vector Machines
  • Section 8 Neural Networks
  • Section 9 Introduction to ensemble models
  • Section 10 Ensemble models: bagging
  • Section 11 Ensemble models: boosting
  • Section 12 Ensemble models: voting
  • Section 13 Ensemble models: stacking
  • Section 14 Performance evaluation
  • Section 15 Cross-Validation and hyperparameter tuning
  • Section 16 Feature importance and model interpretation
  • Section 17 Recursive Feature Elimination
  • Section 18 Practical examples in Python
  • Section 19 Persisting our model
  • Section 20 Practical approaches

What You’ll Learn

  • Regression and classification models
  • Linear models
  • Decision trees
  • Naive Bayes
  • k-nearest neighbors
  • Support Vector Machines
  • Neural networks
  • Random Forest
  • Gradient Boosting
  • XGBoost
  • Voting
  • Stacking
  • Performance metrics (RMSE, MAPE, Accuracy, Precision, ROC Curve...)
  • Feature importance
  • SHAP
  • Recursive Feature Elimination
  • Hyperparameter tuning
  • Cross-validation

Reviews

  • N
    Nurul Najwa Binti Md Yusof
    4.0

    Good. I am from south east asia. I really wish there is a transcript with proper english. I'm having a really hard time to understand because of the pronunciation/accent barrier.

  • S
    Sathyanarayana M
    5.0

    Excellent Course Content and Teaching

  • P
    Pernell Hodges
    4.0

    Concepts were clearly explained.

  • D
    Daniele Rocchi
    5.0

    Great course! Optimal trade-off between theoretical explanations and hands-on sessions

