Machine Learning: Balanced Bagging Classifier

Rahul S
2 min read · Sep 6


The Balanced Bagging Classifier is an ensemble technique designed to address the issue of imbalanced data in machine learning.

It combines the principles of Bagging and random under-sampling to balance class distribution.


  1. Like traditional Bagging, Balanced Bagging creates an ensemble of classifiers by training multiple base classifiers on different subsets of the training data.
  2. In addition, it employs random under-sampling: for each subset, it reduces the number of majority-class samples to match the minority class. This levels the class distribution.
  3. The base classifiers learn from the balanced subsets, reducing the bias towards the majority class. The ensemble then combines predictions from all base classifiers, typically by majority voting in classification problems.
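The three steps above can be sketched by hand in a few lines. This is a minimal illustration on synthetic data, not the library's actual implementation:

```python
import numpy as np
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Imbalanced toy data: ~90% class 0 (majority), ~10% class 1 (minority)
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

rng = np.random.RandomState(0)
minority_idx = np.where(y == 1)[0]
majority_idx = np.where(y == 0)[0]

ensemble = []
for _ in range(10):
    # Step 2: randomly under-sample the majority class down to the minority size
    maj_sample = rng.choice(majority_idx, size=len(minority_idx), replace=False)
    idx = np.concatenate([minority_idx, maj_sample])
    subset_counts = Counter(y[idx])  # each subset is class-balanced
    # Steps 1 and 3: train one base classifier per balanced subset
    ensemble.append(DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx]))

# Combine: average the votes across the ensemble and threshold at 0.5
votes = np.mean([clf.predict(X) for clf in ensemble], axis=0)
y_pred = (votes >= 0.5).astype(int)
```

Each tree sees a different balanced view of the data, so no single under-sampled subset has to carry the whole signal of the majority class.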


Balanced Bagging Classifier is robust, less prone to overfitting, and versatile, as it can work with various base classifiers. However, it requires a sufficient number of minority class samples to be effective, and its success depends on the specific problem and data characteristics.

In contrast to other techniques such as oversampling, undersampling, and specialized algorithms like Isolation Forest, Balanced Bagging Classifier offers a strong combination of resampling and ensemble learning.
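To make the contrast concrete: plain oversampling or undersampling resamples the dataset once, before training a single model, rather than resampling per ensemble member. A sketch using `sklearn.utils.resample` on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.utils import resample

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_min, X_maj = X[y == 1], X[y == 0]

# Oversampling: duplicate minority rows (with replacement) until classes match
X_min_up = resample(X_min, n_samples=len(X_maj), replace=True, random_state=0)

# Undersampling: drop majority rows until classes match
X_maj_down = resample(X_maj, n_samples=len(X_min), replace=False, random_state=0)
```

One-shot undersampling discards most of the majority class permanently; Balanced Bagging instead draws a fresh under-sampled subset for each base classifier, so the ensemble as a whole still sees most of the majority data.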


Here’s a Python code example that uses the Balanced Bagging Classifier for anomaly detection on the Credit Card Fraud Detection dataset:

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report, confusion_matrix
from imblearn.ensemble import BalancedBaggingClassifier

# Load the Credit Card Fraud Detection dataset
data = pd.read_csv('creditcard.csv')

# Split the data into features (X) and the target variable (y)
X = data.drop(['Class'], axis=1)
y = data['Class']

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Create the Balanced Bagging Classifier with Decision Tree as the base estimator
base_estimator = DecisionTreeClassifier()
# Note: newer imbalanced-learn versions use `estimator` (the older
# `base_estimator` keyword has been deprecated and removed)
bbc = BalancedBaggingClassifier(estimator=base_estimator, sampling_strategy='auto', replacement=False, random_state=42)

# Fit the model on the training data
bbc.fit(X_train, y_train)

# Make predictions on the test data
y_pred = bbc.predict(X_test)

# Evaluate the performance of the model
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
