Introduction to Model Evaluation in Classification
- Classification model evaluation measures how accurately a machine learning model predicts categorical outcomes, using a range of performance metrics.
Why Evaluation is Important
- Measure Accuracy: Check how often the model predicts correctly.
- Detect Problems: Identify overfitting, underfitting, or bias.
- Compare Models: Choose the best model among multiple algorithms.
- Guide Improvements: Adjust features, hyperparameters, or algorithms.
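As a minimal sketch of the first point, accuracy is simply the fraction of predictions that match the true labels (the labels below are hypothetical, for illustration only):

```python
# Hypothetical true labels and model predictions.
y_true = ["Spam", "Spam", "Not Spam", "Not Spam", "Spam"]
y_pred = ["Spam", "Not Spam", "Not Spam", "Not Spam", "Spam"]

# Accuracy: fraction of predictions that match the true labels.
correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print(accuracy)  # 0.8
```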
Difference Between Regression & Classification Evaluation
- Regression: predicts continuous values, so evaluation uses error metrics such as MSE, MAE, or R².
- Classification: predicts categorical labels, so evaluation uses metrics such as accuracy, precision, recall, F1-score, and ROC-AUC.
Prediction vs Probability
- Many classifiers can provide a probability for each class in addition to a final label.
- Prediction: the final class label, assigned by applying a threshold to the probability.
  Example: threshold = 0.5 → if P(Spam) > 0.5 → Spam, else Not Spam
- Probability: the likelihood of belonging to each class.
  Useful for ROC curves, AUC, or risk-based decisions.
Example:
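A minimal sketch of threshold-based prediction, assuming hypothetical spam probabilities (the values below are made up for illustration):

```python
# Hypothetical P(Spam) values, e.g. from a classifier's probability output.
probs = [0.92, 0.35, 0.50, 0.71, 0.08]

threshold = 0.5
# Prediction: apply the threshold to each probability to get a final label.
labels = ["Spam" if p > threshold else "Not Spam" for p in probs]
print(labels)  # ['Spam', 'Not Spam', 'Not Spam', 'Spam', 'Not Spam']
```

Note that the probabilities themselves carry more information than the labels: changing the threshold (e.g. to 0.3 for a stricter spam filter) changes the predictions without retraining the model.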