
Explainable and Interpretable AI

peterwashington Nov 01 2021 7 min read
Bias and Fairness

In order to verify that a model is not using biased assumptions to make its predictions, it is crucial to understand why a model is making a particular prediction. The models we have covered so far are all relatively explainable, but some of the methods we will learn later on (like neural networks) are less interpretable.

Let’s think for a bit about how each of the methods we have learned so far is understandable. This will also serve as a quick review of those methods.

Linear regression learns the parameters m and b of the equation y = mx + b. If there are multiple input variables, then a slope parameter is learned for each one: y = m1x1 + m2x2 + … + mNxN + b. If we learn the equation y = 3x1 - 4x2 + 80x3 + b, then we can reason that whenever the regression model makes a prediction, it relies on the input value of x3 much more than on the other input variables. In particular, assuming the inputs are on comparable scales, it relies on x3 exactly 20 times more than on x2, since the absolute value of the coefficient for x3 (80) is 20 times larger than the absolute value of the coefficient for x2 (4). Because we can tell exactly why linear regression made a particular prediction, we can claim that it is an explainable method.
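To make this concrete, here is a minimal sketch (with made-up data generated to match the coefficients above) showing how you could fit a linear regression in scikit-learn and read off the learned coefficients as the explanation:

```python
# A minimal sketch (hypothetical data) of reading a fitted linear model's
# coefficients to see which inputs drive its predictions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # three input variables x1, x2, x3
y = 3 * X[:, 0] - 4 * X[:, 1] + 80 * X[:, 2] + 5   # y = 3*x1 - 4*x2 + 80*x3 + b

model = LinearRegression().fit(X, y)
print(model.coef_)        # approximately [ 3. -4. 80.]
print(model.intercept_)   # approximately 5.0
```

The coefficient for x3 dominates, so we know the model's predictions are driven mostly by x3.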

Logistic regression, a method used for classification instead of regression, learns the same parameters as linear regression, except that a sigmoid activation (for binary classification) or a softmax activation (for more than two output classes) is applied to the output. This turns the output into a probability between 0 and 1 in the binary case, or into a vector of per-class probabilities when there are more than two classes. Because the same m and b parameters are learned, logistic regression is just as explainable as linear regression.
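As a quick sketch (using hypothetical linear scores), the only extra step relative to linear regression is squashing the linear output into probabilities:

```python
# A small sketch of how logistic regression turns the linear score
# into a probability: sigmoid for two classes, softmax for more than two.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    exp_z = np.exp(z - np.max(z))   # subtract max for numerical stability
    return exp_z / exp_z.sum()

# Hypothetical linear score m1*x1 + m2*x2 + ... + b for one example:
score = 2.3
print(sigmoid(score))               # probability of the positive class, ~0.91

# Hypothetical scores for a 3-class problem:
scores = np.array([2.3, 0.1, -1.4])
print(softmax(scores))              # per-class probabilities that sum to 1
```

The explanation for a prediction is still the same set of learned coefficients; the sigmoid or softmax only rescales the score into a probability.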

K-nearest neighbors is also explainable. Recall from Chapter 2 that k-nearest neighbors classification identifies the k closest training examples in the space of the input variables. For example, using 3-nearest neighbors classification, the model predicts bird type A for the test data point marked with a question mark below:

K-nearest neighbors is explainable because, for each point in the test set, we can point to exactly which k nearest training points determined the prediction: the classifier simply takes a majority vote over their classes.
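Here is a short sketch (with toy 2-D data standing in for the bird example) of how you can recover that explanation directly from a fitted classifier:

```python
# A minimal sketch (toy data, hypothetical class labels) showing that a
# k-nearest-neighbors prediction can be explained by listing the neighbors
# that voted for it.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X_train = np.array([[1.0, 1.0], [1.2, 0.9], [0.8, 1.1],   # bird type A
                    [4.0, 4.2], [4.1, 3.9], [3.8, 4.0]])  # bird type B
y_train = np.array(["A", "A", "A", "B", "B", "B"])

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

query = np.array([[1.1, 1.0]])                 # the "question mark" test point
print(knn.predict(query))                      # -> ["A"]

# The explanation: which 3 training points were closest, and their labels.
distances, indices = knn.kneighbors(query)
print(indices, y_train[indices[0]])            # the three nearest type-A examples
```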

Decision trees are pretty explainable as well. We can simply look at the decision tree itself to see the exact decision path it followed for any particular prediction. For example, with the following decision tree (from Chapter 2), we would know exactly when it would choose to suggest pulling over or speeding up:
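To illustrate (with made-up features loosely modeled on the Chapter 2 example), scikit-learn can print a fitted tree's rules directly, which spell out every decision path:

```python
# A minimal sketch (hypothetical driving-related features and labels) of training
# a small decision tree and printing its rules as human-readable if/else logic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Columns: [speed_over_limit_mph, police_car_nearby (0/1)]; labels are made up.
X = np.array([[5, 0], [15, 1], [20, 1], [2, 0], [25, 0], [10, 1]])
y = np.array(["speed up", "pull over", "pull over",
              "speed up", "speed up", "pull over"])

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["speed_over_limit", "police_car_nearby"]))
# The printed rules show exactly which conditions lead to each predicted label.
```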

Fortunately, the models we have learned about so far are all fairly interpretable (albeit in different ways). In future chapters, we will revisit this notion of explainable and interpretable AI. The methods we will learn about next are more complex and much more powerful than what we have covered so far, but this power comes at the cost of explainability. In those chapters, we will also learn about methods for making these more complex models explainable.