Course Outline

Introduction

This module offers a comprehensive overview of when to apply machine learning, key considerations in doing so, and fundamental concepts, including its advantages and limitations. Topics include data types (structured, unstructured, static, or streamed), data validity and volume, data-driven versus user-driven analytics, statistical models compared with machine learning models, the challenges of unsupervised learning, the bias-variance tradeoff, iterative evaluation methods, cross-validation techniques, and the distinctions between supervised, unsupervised, and reinforcement learning.

MAJOR TOPICS

1. Understanding naive Bayes

  • Core concepts of Bayesian methods
  • Probability fundamentals
  • Joint probability
  • Conditional probability via Bayes' theorem
  • The naive Bayes algorithm
  • Naive Bayes classification
  • The Laplace estimator
  • Applying numeric features with naive Bayes
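
For illustration, the ideas above can be sketched in a few lines of pure Python. The toy spam/ham documents and the `train`/`predict` helpers below are assumptions for the example, not course code:

```python
from collections import Counter

# Toy training documents (assumed example data), as (words, label) pairs
docs = [
    (["free", "win", "cash"], "spam"),
    (["win", "prize", "free"], "spam"),
    (["meeting", "agenda", "notes"], "ham"),
    (["project", "meeting", "cash"], "ham"),
]

def train(docs):
    class_counts = Counter(label for _, label in docs)
    word_counts = {label: Counter() for label in class_counts}
    vocab = set()
    for words, label in docs:
        word_counts[label].update(words)
        vocab.update(words)
    return class_counts, word_counts, vocab

def predict(words, class_counts, word_counts, vocab, alpha=1):
    total_docs = sum(class_counts.values())
    scores = {}
    for label, n_docs in class_counts.items():
        score = n_docs / total_docs  # prior P(class)
        total_words = sum(word_counts[label].values())
        for w in words:
            # Laplace estimator: add alpha so an unseen word
            # never zeroes out the whole product
            score *= (word_counts[label][w] + alpha) / (total_words + alpha * len(vocab))
        scores[label] = score
    return max(scores, key=scores.get)

model = train(docs)
```

With `alpha = 1` (the Laplace estimator), a word never observed in a class still contributes a small non-zero likelihood rather than forcing the posterior to zero.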

2. Understanding decision trees

  • Divide and conquer strategy
  • The C5.0 decision tree algorithm
  • Selecting the optimal split
  • Pruning the decision tree
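
As a sketch of split selection: information gain (the reduction in entropy) is one common criterion for choosing the best split; the toy label partitions below are assumed examples:

```python
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(parent, children):
    # Gain = parent entropy minus the weighted entropy of the child partitions
    n = len(parent)
    return entropy(parent) - sum(len(c) / n * entropy(c) for c in children)

parent = ["yes"] * 5 + ["no"] * 5
split_a = [["yes"] * 4 + ["no"], ["yes"] + ["no"] * 4]           # fairly pure children
split_b = [["yes"] * 2 + ["no"] * 3, ["yes"] * 3 + ["no"] * 2]   # mixed children
```

The purer split yields the higher gain, so a greedy tree learner would prefer it.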

3. Understanding neural networks

  • From biological to artificial neurons
  • Activation functions
  • Network topology
  • Number of layers
  • Direction of information flow
  • Number of nodes per layer
  • Training neural networks via backpropagation
  • Deep learning
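
A minimal single-neuron sketch of the gradient update at the heart of backpropagation, assuming a sigmoid activation, a squared-error loss, and toy weights and inputs:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def forward(w, b, x):
    # Weighted sum of inputs passed through the activation function
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def train_step(w, b, x, target, lr=0.1):
    y = forward(w, b, x)
    # Chain rule for squared error 0.5 * (y - target)**2 with a sigmoid output
    delta = (y - target) * y * (1 - y)
    w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
    b = b - lr * delta
    return w, b

w, b, x, target = [0.5, -0.5], 0.0, [1.0, 2.0], 1.0
for _ in range(100):
    w, b = train_step(w, b, x, target)
print(forward(w, b, x))  # the output moves toward the target of 1.0
```

In a multi-layer network the same chain-rule step is applied layer by layer, propagating the error backward.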

4. Understanding Support Vector Machines

  • Classification using hyperplanes
  • Finding the maximum margin
  • Handling linearly separable data
  • Handling non-linearly separable data
  • Utilizing kernels for non-linear spaces
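
To illustrate the kernel idea: a kernel computes a dot product in a transformed feature space without constructing the mapping explicitly. A minimal sketch with assumed toy points:

```python
import math

def linear_kernel(a, b):
    # Plain dot product: the feature space is the input space itself
    return sum(x * y for x, y in zip(a, b))

def rbf_kernel(a, b, gamma=0.5):
    # Gaussian (RBF) kernel: implicitly maps points into an
    # infinite-dimensional space, enabling non-linear boundaries
    sq_dist = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-gamma * sq_dist)

p, q = [1.0, 2.0], [2.0, 0.0]
print(linear_kernel(p, q))   # → 2.0
print(rbf_kernel(p, q))
```

Swapping the kernel function is all it takes to move an SVM from a linear to a non-linear decision boundary.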

5. Understanding clustering

  • Clustering as a machine learning task
  • The k-means algorithm for clustering
  • Using distance metrics for cluster assignment and updates
  • Selecting the appropriate number of clusters
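
A compact sketch of the k-means loop (Euclidean distance for assignment, cluster means for updates); the 2-D points are made up for the example:

```python
import math
import random

def kmeans(points, k, iters=10, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins the nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        # Update step: each center moves to the mean of its cluster
        for i, members in enumerate(clusters):
            if members:
                centers[i] = tuple(sum(coord) / len(members)
                                   for coord in zip(*members))
    return centers, clusters

points = [(1, 1), (1.5, 2), (1, 0), (8, 8), (9, 9), (8, 9)]
centers, clusters = kmeans(points, k=2)
```

On these two well-separated blobs the loop converges in a few iterations; choosing `k` itself is the harder problem covered in the last bullet.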

6. Measuring performance for classification

  • Working with classification prediction data
  • Examining confusion matrices
  • Using confusion matrices to assess performance
  • Beyond accuracy – other performance metrics
  • The kappa statistic
  • Sensitivity and specificity
  • Precision and recall
  • The F-measure
  • Visualizing performance tradeoffs
  • ROC curves
  • Estimating future performance
  • The holdout method
  • Cross-validation
  • Bootstrap sampling
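
The metrics listed above all derive from the four cells of a 2x2 confusion matrix; a sketch with assumed toy counts:

```python
# Assumed counts from a toy 2x2 confusion matrix:
# 40 true positives, 10 false positives, 5 false negatives, 45 true negatives
tp, fp, fn, tn = 40, 10, 5, 45
total = tp + fp + fn + tn

accuracy = (tp + tn) / total
precision = tp / (tp + fp)        # of predicted positives, how many are right
recall = tp / (tp + fn)           # a.k.a. sensitivity
specificity = tn / (tn + fp)      # of actual negatives, how many are caught
f_measure = 2 * precision * recall / (precision + recall)

# Kappa: agreement beyond what the class proportions alone would produce
pr_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
kappa = (accuracy - pr_e) / (1 - pr_e)
```

Accuracy alone can mislead on imbalanced classes, which is why the course pairs it with kappa, sensitivity/specificity, precision/recall, and the F-measure.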

7. Tuning stock models for better performance

  • Using caret for automated parameter tuning
  • Creating a simple tuned model
  • Customizing the tuning process
  • Improving model performance with meta-learning
  • Understanding ensembles
  • Bagging
  • Boosting
  • Random forests
  • Training random forests
  • Evaluating random forest performance
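
The course demonstrates automated tuning with R's caret package; purely to illustrate the bagging idea behind ensembles and random forests, here is a pure-Python sketch using bootstrap samples and decision stumps (all data and helpers are assumptions for the example):

```python
import random
from collections import Counter

# Toy 1-D dataset (an assumption): the true label is 1 exactly when x > 5
data = [(x, int(x > 5)) for x in range(11)]

def train_stump(sample):
    # A "decision stump": pick the threshold with the fewest training errors
    best = None
    for t in range(11):
        errors = sum(int(x > t) != y for x, y in sample)
        if best is None or errors < best[1]:
            best = (t, errors)
    return best[0]

def bagged_ensemble(data, n_models=25, seed=0):
    random.seed(seed)
    stumps = []
    for _ in range(n_models):
        # Bootstrap: draw a sample with replacement, the same size as the data
        sample = [random.choice(data) for _ in data]
        stumps.append(train_stump(sample))
    return stumps

def predict(stumps, x):
    # Majority vote across the ensemble
    votes = Counter(int(x > t) for t in stumps)
    return votes.most_common(1)[0][0]

stumps = bagged_ensemble(data)
```

Random forests extend this recipe: full decision trees instead of stumps, plus random feature subsets at each split to decorrelate the ensemble members.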

MINOR TOPICS

8. Understanding classification using nearest neighbors

  • The kNN algorithm
  • Calculating distance
  • Choosing an appropriate k
  • Preparing data for use with kNN
  • Why is the kNN algorithm lazy?
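
The kNN algorithm fits in a few lines, which also shows why it is "lazy": all work happens at prediction time. The training points below are assumed toy data (in practice, features would first be rescaled):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    # Sort the training points by Euclidean distance to the query,
    # then take a majority vote among the k nearest labels
    neighbors = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

train = [((1, 1), "a"), ((2, 1), "a"), ((1, 2), "a"),
         ((8, 8), "b"), ((9, 8), "b"), ((8, 9), "b")]
print(knn_predict(train, (2, 2)))  # prints: a
```

There is no training phase at all: the "model" is simply the stored dataset, hence the term lazy learning.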

9. Understanding classification rules

  • Separate and conquer approach
  • The One Rule algorithm
  • The RIPPER algorithm
  • Rules derived from decision trees
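
A sketch of the One Rule (OneR) idea: build a value-to-majority-class rule for each feature and keep the single feature whose rule errs least. The weather-style rows and labels are an assumed toy example:

```python
from collections import Counter, defaultdict

def one_rule(rows, labels):
    # For each feature, map each value to its majority class,
    # then keep the feature whose rule makes the fewest errors
    best = None
    for feat in rows[0]:
        table = defaultdict(Counter)
        for row, label in zip(rows, labels):
            table[row[feat]][label] += 1
        rule = {v: c.most_common(1)[0][0] for v, c in table.items()}
        errors = sum(rule[row[feat]] != label
                     for row, label in zip(rows, labels))
        if best is None or errors < best[2]:
            best = (feat, rule, errors)
    return best

rows = [{"outlook": "sunny", "windy": "yes"},
        {"outlook": "sunny", "windy": "no"},
        {"outlook": "rainy", "windy": "yes"},
        {"outlook": "rainy", "windy": "no"}]
labels = ["no", "no", "yes", "yes"]
feat, rule, errors = one_rule(rows, labels)
```

RIPPER goes further with a grow-and-prune, separate-and-conquer strategy, but the same "cover examples with a rule" intuition applies.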

10. Understanding regression

  • Simple linear regression
  • Ordinary least squares estimation
  • Correlations
  • Multiple linear regression
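
Simple linear regression has a closed-form ordinary least squares solution; a sketch with assumed data that lies exactly on a line:

```python
def ols_fit(xs, ys):
    # Ordinary least squares for the line y = a + b*x:
    # slope b = covariance(x, y) / variance(x), intercept from the means
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]   # exactly y = 1 + 2x
a, b = ols_fit(xs, ys)
print(a, b)  # → 1.0 2.0
```

The slope formula is the correlation rescaled by the ratio of standard deviations, which is the link to the "Correlations" bullet above; multiple regression generalizes the same least-squares idea to several predictors.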

11. Understanding regression trees and model trees

  • Incorporating regression into trees
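
The key change from classification trees is the split criterion: instead of entropy, a regression tree can minimize the sum of squared errors (SSE), with each leaf predicting the mean of its examples. A sketch over assumed sorted 1-D data:

```python
def sse(ys):
    # Sum of squared errors around the mean (the leaf's prediction)
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def best_split(xs, ys):
    # Try each midpoint between consecutive x values and keep
    # the threshold minimizing the combined SSE of the two sides
    best = None
    for i in range(1, len(xs)):
        t = (xs[i - 1] + xs[i]) / 2
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        cost = sse(left) + sse(right)
        if best is None or cost < best[1]:
            best = (t, cost)
    return best

xs = [1, 2, 3, 10, 11, 12]
ys = [5, 6, 5, 20, 21, 20]
t, cost = best_split(xs, ys)
```

Model trees refine this by fitting a small regression model in each leaf rather than a constant.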

12. Understanding association rules

  • The Apriori algorithm for association rule learning
  • Measuring rule interest – support and confidence
  • Building a set of rules with the Apriori principle
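
Support and confidence, the two interest measures above, are simple ratios over the transaction set; a sketch with an assumed toy basket dataset:

```python
# Toy market-basket transactions (assumed example data)
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk", "butter"},
]

def support(itemset, transactions):
    # Fraction of transactions containing every item in the itemset
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs, transactions):
    # confidence(X → Y) = support(X ∪ Y) / support(X)
    return support(lhs | rhs, transactions) / support(lhs, transactions)

print(support({"bread", "milk"}, transactions))       # → 0.5
print(confidence({"bread"}, {"milk"}, transactions))
```

The Apriori principle prunes the search: if an itemset falls below the support threshold, every superset of it must too, so whole branches of candidates can be skipped.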

Extras

  • Spark/PySpark/MLlib and Multi-armed bandits

Requirements

Proficiency in Python

Duration

21 Hours
