End-to-End Project
- This lesson demonstrates a complete machine learning project covering the full workflow from data preparation to model evaluation.
Problem Statement
Business Problem:
A telecom company wants to predict whether a customer will leave the company (churn) or not.
If we predict churn early, the company can:
- Offer targeted discounts
- Improve customer retention
- Reduce revenue loss
Objective:
Build an ANN model that predicts:
- 0 → No Churn
- 1 → Churn
Dataset Preparation
Assume the dataset contains numeric customer features along with the categorical columns Contract and InternetService, and the target column Churn.
🔹 Step 1: Import Libraries
Preparing Customer Churn Dataset for ANN in Python
This Python example demonstrates the data preprocessing steps required to prepare a customer churn dataset for training an Artificial Neural Network (ANN) using TensorFlow Keras. The code includes loading the dataset with Pandas, encoding categorical variables using LabelEncoder, splitting features and target, performing a train-test split, and applying feature scaling with StandardScaler to normalize the data.
import pandas as pd
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler
🔹 Step 2: Load Dataset
df = pd.read_csv("churn_data.csv")
print(df.head())
🔹 Step 3: Handle Categorical Data
# Use a separate encoder per column so each fitted mapping can be inverted later
le_contract = LabelEncoder()
df['Contract'] = le_contract.fit_transform(df['Contract'])

le_internet = LabelEncoder()
df['InternetService'] = le_internet.fit_transform(df['InternetService'])

le_churn = LabelEncoder()
df['Churn'] = le_churn.fit_transform(df['Churn'])
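To see what LabelEncoder actually does to a column, here is a standalone sketch using hypothetical contract values (the real dataset's categories may differ). The encoder sorts the unique labels and assigns each one an integer:

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

# Hypothetical contract values to illustrate the string-to-integer mapping
demo = pd.DataFrame({"Contract": ["Month-to-month", "One year", "Two year", "Month-to-month"]})

le = LabelEncoder()
demo["Contract"] = le.fit_transform(demo["Contract"])

print(list(le.classes_))          # ['Month-to-month', 'One year', 'Two year']
print(demo["Contract"].tolist())  # [0, 1, 2, 0]
```

Because the mapping is stored in `le.classes_`, you can recover the original labels later with `le.inverse_transform()`.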
🔹 Step 4: Split Features & Target
X = df.drop('Churn', axis=1)
y = df['Churn']
🔹 Step 5: Train-Test Split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
🔹 Step 6: Feature Scaling
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
Scaling is important for ANNs: gradient-based training converges faster when all features are on a similar scale. Note that the scaler is fit on the training set only and then applied to the test set, which avoids leaking test-set statistics into training.
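The fit-on-train, transform-on-test pattern can be checked on a toy example with hypothetical numbers (two features on very different scales):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: two columns on very different scales
train_demo = np.array([[1.0, 1000.0], [2.0, 2000.0], [3.0, 3000.0]])
test_demo = np.array([[2.5, 1500.0]])

scaler = StandardScaler()
train_scaled = scaler.fit_transform(train_demo)  # fit on training data only
test_scaled = scaler.transform(test_demo)        # reuse the training statistics

print(train_scaled.mean(axis=0))  # ~[0, 0]
print(train_scaled.std(axis=0))   # ~[1, 1]
```

The training columns end up with mean 0 and standard deviation 1, while the test row is shifted and scaled by the training statistics, not its own.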
Model Building (ANN)
🔹 Step 7: Build Model
Customer Churn Prediction using ANN in Python with TensorFlow Keras
This Python example demonstrates how to build, train, and evaluate an Artificial Neural Network (ANN) for customer churn prediction. The model consists of Dense layers with ReLU activation, a Dropout layer for regularization, and a Sigmoid output layer for binary classification. It is compiled with the Adam optimizer and binary cross-entropy loss, trained with a validation split, and evaluated on the test dataset to measure accuracy.
model = tf.keras.Sequential([
tf.keras.layers.Dense(64, activation='relu', input_shape=(X_train.shape[1],)),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Dense(32, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
🔹 Step 8: Compile Model
model.compile(
optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy']
)
🔹 Step 9: Train Model
history = model.fit(
X_train, y_train,
epochs=30,
batch_size=32,
validation_split=0.2
)
🔹 Step 10: Evaluate Model
loss, accuracy = model.evaluate(X_test, y_test)
print("Test Accuracy:", accuracy)
Sample Output
Test Accuracy: 0.85
An accuracy of about 85% is a solid baseline for churn prediction. Churn datasets are often imbalanced, though, so accuracy alone can be misleading; precision and recall on the churn class are worth checking as well.
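Because churners are usually the minority class, a model can score high accuracy while missing most of them. As a standalone sketch with hypothetical labels and predictions, the snippet below shows how to compute a confusion matrix, precision, and recall with scikit-learn:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# Hypothetical true labels and model predictions for 10 customers (1 = churn)
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1])
y_pred = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 0])

print(confusion_matrix(y_true, y_pred))  # [[6 1]
                                         #  [1 2]]
print(precision_score(y_true, y_pred))   # of predicted churners, how many actually churned
print(recall_score(y_true, y_pred))      # of actual churners, how many were caught
```

Here both precision and recall are 2/3 even though overall accuracy is 80%, illustrating why per-class metrics matter for churn.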
Model Prediction
Making Predictions with a Binary Classification ANN in Python
This Python example demonstrates how to use a trained ANN model to make predictions on new data. The code calls model.predict() on the test dataset and converts the predicted probabilities into binary class labels (True/False) using a threshold of 0.5, suitable for binary classification tasks like customer churn prediction.
predictions = model.predict(X_test)  # probabilities in [0, 1]
predictions = (predictions > 0.5)    # True → churn, False → no churn
Save Model
Saving a Trained ANN Model in Python using TensorFlow Keras
This Python example demonstrates how to save a trained Artificial Neural Network (ANN) model using TensorFlow Keras. The model is saved in the ".keras" format, allowing you to reload it later with load_model() for predictions or further training without retraining from scratch.
model.save("churn_model.keras")
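The saved model can be reloaded with load_model() and used exactly like the original. Here is a self-contained sketch using a tiny stand-in network and a hypothetical file name (the real churn model from Step 7 would be reloaded the same way):

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model; the real churn model is built in Step 7
demo_model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
demo_model.save("demo_model.keras")

# Reload and verify the round trip preserves the weights
loaded = tf.keras.models.load_model("demo_model.keras")
x = np.random.rand(2, 3).astype("float32")
print(np.allclose(demo_model.predict(x), loaded.predict(x)))  # True
```

Because the .keras format stores the architecture, weights, and compile settings together, the reloaded model can also resume training without rebuilding anything.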
Full End-to-End Workflow
Problem Understanding
↓
Data Collection
↓
Data Cleaning & Encoding
↓
Train-Test Split
↓
Feature Scaling
↓
Build ANN
↓
Compile Model
↓
Train Model
↓
Evaluate Model
↓
Save Model