Confusion Matrices

peterwashington · Nov 01 2021
Another useful visual tool for evaluating classifiers is the confusion matrix. A confusion matrix shows, for each true category, how the classifier's predictions are distributed across all of the possible categories. Below is an example of a confusion matrix:
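To make the idea concrete, here is a minimal sketch of how a 3-class confusion matrix can be built from a list of true labels and predicted labels. The labels below are illustrative, not the data behind the post's figure; entry `[i][j]` counts how often true class `i` was predicted as class `j`.

```python
def confusion_matrix(y_true, y_pred, num_classes):
    """Return matrix where matrix[i][j] counts samples of true class i
    that the classifier predicted as class j."""
    matrix = [[0] * num_classes for _ in range(num_classes)]
    for true_label, pred_label in zip(y_true, y_pred):
        matrix[true_label][pred_label] += 1
    return matrix

# Illustrative labels (not the post's actual data):
y_true = [0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0, 2]

for row in confusion_matrix(y_true, y_pred, 3):
    print(row)
# → [1, 1, 0]
#   [0, 2, 0]
#   [1, 0, 2]
```

Libraries such as scikit-learn provide the same computation (plus plotting helpers), but the underlying tally is just this nested count.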
Confusion matrices convey a lot of information in a single visualization. The diagonal entries of the matrix convey correct predictions by the classifier. In general, the darker the diagonal is and the lighter the rest of the matrix is, the better the classifier performed. We can see in the top row of the above matrix that category 1 was predicted correctly 8 times; the middle row shows that category 2 was predicted correctly 10 times; the bottom row shows that category 3 was predicted correctly 8 times.
We can also read more fine-grained information from the confusion matrix. For example, we see that when category 1 is not predicted correctly, it tends to be "confused" for category 2 more often than category 3. This is how confusion matrices got their name. This level of detail is incredibly useful when debugging a classifier or characterizing the types of data the classifier may struggle with.
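This kind of off-diagonal analysis can also be done programmatically. The sketch below finds, for each true category, which other category it is most often confused with (the largest off-diagonal entry in its row). The diagonal counts match the 8/10/8 values described above, but the off-diagonal values are assumed for illustration:

```python
# Illustrative matrix: row i holds predictions for true category i + 1.
# Diagonal (8, 10, 8) matches the text; off-diagonal counts are assumed.
matrix = [
    [8, 3, 1],   # true category 1: confused with category 2 three times
    [1, 10, 2],  # true category 2
    [2, 1, 8],   # true category 3
]

def most_confused_with(matrix):
    """For each true class i, return the class j (j != i) with the
    largest count in row i, i.e. the most common misprediction."""
    result = {}
    for i, row in enumerate(matrix):
        off_diagonal = [(count, j) for j, count in enumerate(row) if j != i]
        _, j = max(off_diagonal)
        result[i] = j
    return result

print(most_confused_with(matrix))  # → {0: 1, 1: 2, 2: 0}
```

Here class 0 (category 1) is most often confused with class 1 (category 2), matching the observation in the text.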