Introduction to Model Evaluation in Classification

  • Classification model evaluation measures how well a machine learning model predicts categorical outcomes, using performance metrics designed for discrete labels.
  • Why Evaluation is Important

    1. Measure Accuracy: Check how often the model predicts correctly.
    2. Detect Problems: Identify overfitting, underfitting, or bias.
    3. Compare Models: Choose the best model among multiple algorithms.
    4. Guide Improvements: Adjust features, hyperparameters, or algorithms.

    Difference Between Regression & Classification Evaluation

    | Aspect | Regression | Classification |
    |---|---|---|
    | Output Type | Continuous (e.g., house price) | Categorical (e.g., spam/not spam) |
    | Metrics | MAE, MSE, RMSE, R² | Accuracy, Precision, Recall, F1-score, Confusion Matrix |
    | Goal | Minimize prediction error | Maximize correct classification |
    | Error Concept | Difference between predicted & actual values | Misclassification of categories |

    Prediction vs Probability

    • Many classifiers can provide probabilities for each class.

    • Prediction: Final class label assigned based on probability threshold.

      • Example: Threshold = 0.5 → if P(Spam) > 0.5 → Spam, else Not Spam

    • Probability: Gives likelihood of belonging to each class.

      • Useful for ROC curve, AUC, or risk-based decisions

    Example:

    | Email | P(Spam) | Predicted Label |
    |---|---|---|
    | Email1 | 0.8 | Spam |
    | Email2 | 0.4 | Not Spam |