Confusion Matrix in ML

A Confusion Matrix is a performance-measurement tool for evaluating a classification model. It is especially useful for understanding how well a model performs on a binary or multi-class classification problem, beyond what a single accuracy number can show.

The matrix compares the predicted class labels with the actual class labels, helping us understand the types of errors our model is making. It is represented as a square matrix in which each row corresponds to an actual class and each column to a predicted class.

Structure of a Confusion Matrix

For a binary classification problem, the confusion matrix looks like this:

|                 | Predicted Positive | Predicted Negative  |
|-----------------|--------------------|---------------------|
| Actual Positive | True Positive (TP) | False Negative (FN) |
| Actual Negative | False Positive (FP)| True Negative (TN)  |
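These four counts are simple tallies over (actual, predicted) label pairs. A minimal sketch of that tally, assuming the usual 1 = positive / 0 = negative encoding (the encoding and the sample labels are illustrative, not from the text above):

```python
def confusion_counts(actual, predicted):
    """Return (TP, FN, FP, TN) for binary labels encoded as 1/0."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    return tp, fn, fp, tn

# Hypothetical labels for illustration.
actual    = [1, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 0, 1, 0, 1, 0, 1, 0]
print(confusion_counts(actual, predicted))  # → (3, 1, 1, 3)
```

Libraries such as scikit-learn provide the same tally via `sklearn.metrics.confusion_matrix`; note that its rows/columns are ordered by sorted label value, so for 0/1 labels the layout differs from the table above.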

Example of a Confusion Matrix

Suppose we have a model predicting whether an email is spam (positive) or not spam (negative). Here's a sample confusion matrix:

|                 | Predicted Spam | Predicted Not Spam |
|-----------------|----------------|--------------------|
| Actual Spam     | 50 (TP)        | 10 (FN)            |
| Actual Not Spam | 5 (FP)         | 100 (TN)           |
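As a quick sanity check, these counts plug directly into the standard accuracy, precision, and recall formulas (the formulas are the usual textbook definitions, not stated in the table itself):

```python
# Counts taken from the spam-filter example table above.
TP, FN, FP, TN = 50, 10, 5, 100
total = TP + FN + FP + TN            # 165 emails in total

accuracy  = (TP + TN) / total        # fraction of all predictions that are correct
precision = TP / (TP + FP)           # of emails flagged as spam, how many really are
recall    = TP / (TP + FN)           # of actual spam emails, how many were caught

print(f"accuracy={accuracy:.3f} precision={precision:.3f} recall={recall:.3f}")
# → accuracy=0.909 precision=0.909 recall=0.833
```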

From this matrix: