Machine Learning: AdaBoost

Rahul S
Sep 5

AdaBoost (Adaptive Boosting) is an ensemble learning algorithm for binary classification that focuses on the data points misclassified by the current ensemble.

It iteratively trains weak classifiers, reweights the training data, and combines the weak classifiers into a strong one. Its key steps (a code sketch follows the list):

  1. Initialization: Assign equal weights (1/N) to all N training data points.
  2. Iterative Training: In each round, train a weak classifier, often a decision stump, on the weighted data, so it emphasizes the points misclassified in previous rounds.
  3. Classifier Weight Calculation: Compute the weak classifier's weighted error ε and its ensemble weight α = ½ ln((1 − ε)/ε); the lower the error, the higher the weight.
  4. Weight Update: Adjust the data-point weights based on the current weak classifier's predictions. Increase weights for misclassified points, decrease them for correctly classified ones.
  5. Normalization of Weights: Rescale the data-point weights so they again form a probability distribution.
  6. Repeat Iterations: Continue rounds 2–5 until a preset number of classifiers or the desired accuracy is reached.
  7. Ensemble Creation: Combine the weak classifiers into an ensemble, each weighted by its α.
  8. Final Classification: Compute the final prediction as the sign of the weighted sum of predictions from the weak classifiers.
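To make these steps concrete, here is a minimal NumPy sketch of AdaBoost with decision stumps. The function names (adaboost_fit, adaboost_predict) and the brute-force stump search are illustrative choices, not any particular library's API; labels are assumed to be in {-1, +1}.

```python
import numpy as np

def adaboost_fit(X, y, n_rounds=50):
    """Train AdaBoost with decision stumps. Assumes y is in {-1, +1}."""
    n_samples, n_features = X.shape
    w = np.full(n_samples, 1.0 / n_samples)          # step 1: equal weights
    stumps, alphas = [], []

    for _ in range(n_rounds):
        # Step 2: fit the best decision stump on the weighted data
        # (brute force over features, thresholds, and polarity).
        best, best_err = None, float("inf")
        for j in range(n_features):
            for thresh in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] <= thresh, 1, -1)
                    err = w[pred != y].sum()
                    if err < best_err:
                        best_err, best = err, (j, thresh, sign)

        # Step 3: classifier weight; lower error -> larger alpha.
        eps = np.clip(best_err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - eps) / eps)

        j, thresh, sign = best
        pred = sign * np.where(X[:, j] <= thresh, 1, -1)

        # Steps 4-5: up-weight mistakes, down-weight hits, renormalize.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()

        stumps.append(best)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(X, stumps, alphas):
    """Steps 7-8: sign of the alpha-weighted sum of stump predictions."""
    score = np.zeros(X.shape[0])
    for (j, thresh, sign), alpha in zip(stumps, alphas):
        score += alpha * sign * np.where(X[:, j] <= thresh, 1, -1)
    return np.sign(score)
```

The weight update w_i ← w_i · exp(−α yᵢ h(xᵢ)) is where the "adaptive" part happens: a correct prediction (yᵢ h(xᵢ) = +1) shrinks a point's weight, while a mistake grows it, so the next stump is pulled toward the hard cases.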

AdaBoost’s strengths include improved accuracy, flexibility in the choice of weak classifier, resistance to overfitting in many settings, and simple parameter tuning. However, it is sensitive to noisy data and outliers, can overfit when too many weak classifiers are used, can be computationally expensive, and may be biased toward overly complex weak classifiers. It also lacks direct probability estimates.

In practice, AdaBoost adapts to challenging data points and can produce strong classifiers, but it should be applied judiciously based on the nature of the data and the problem; a library-based usage sketch follows.
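For everyday use, scikit-learn's AdaBoostClassifier covers the algorithm above. The sketch below is a minimal example on synthetic data; the dataset and hyperparameter values (n_estimators, learning_rate) are illustrative assumptions meant to show the tuning knobs, not recommendations.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Illustrative synthetic binary-classification data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# n_estimators caps the number of weak classifiers (too many can overfit);
# learning_rate shrinks each classifier's contribution to the ensemble.
clf = AdaBoostClassifier(n_estimators=100, learning_rate=0.5, random_state=0)
clf.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

By default the weak learner is a depth-1 decision tree, i.e. the decision stump described in the steps above.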
