9.5: Conclusion
Machine learning models should be both highly accurate and easy to
explain. This is especially true for models that affect people's daily
lives. Consider, for example, a model used to approve or reject loan
applications: its decisions can have moral, ethical, and legal
consequences, so it is essential to provide a suitable explanation
whenever a loan is denied.
Similarly, if a computer vision model is deployed for home monitoring,
we expect it to work without fail. If it fails to detect an intrusion,
there may be legal consequences for the company that licensed the
model. The company might be required to explain why the intrusion went
undetected, and in some cases it might be liable for damages. In such a
situation, the company could use the model's explanations in its
defense.
Model explainability is also important for making model predictions
trustworthy. When a model predicts correctly and those predictions are
accompanied by proper explanations, it instills confidence in the
intended users. When the model makes a wrong prediction, explanations
help establish accountability by identifying the features and values
that contributed most to that prediction.
There are many methods for explaining a model globally, and many others
for explaining individual predictions. We should apply multiple methods
before drawing final conclusions about the model. Likewise, when
explaining a single prediction, we should try several methods and
compare their results. If more than one method points to the same
conclusion, we should include it in our findings; if different
explanation methods contradict one another, we should prefer the
conclusions that are closest to domain knowledge.
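As a minimal sketch of this kind of cross-checking, the listing below
compares two global explanation methods on the same model using
scikit-learn: impurity-based feature importances and permutation
importance. The dataset, model, and top-5 cutoff are illustrative
choices, not prescriptions from this chapter.

    # Compare two global explanation methods on the same model and
    # report the features on which they agree. Dataset, model, and the
    # top-5 cutoff are illustrative assumptions.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, random_state=0
    )

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Method 1: impurity-based importances (computed from training).
    impurity_top = X.columns[
        model.feature_importances_.argsort()[::-1][:5]
    ]

    # Method 2: permutation importance (computed on held-out data).
    perm = permutation_importance(
        model, X_test, y_test, n_repeats=10, random_state=0
    )
    permutation_top = X.columns[
        perm.importances_mean.argsort()[::-1][:5]
    ]

    # Features flagged by both methods are safer conclusions to report;
    # disagreements should be resolved with domain knowledge.
    agreed = set(impurity_top) & set(permutation_top)
    print("Top-5 (impurity):   ", list(impurity_top))
    print("Top-5 (permutation):", list(permutation_top))
    print("Agreed by both:     ", sorted(agreed))

The same comparison strategy applies to local explanations of a single
prediction: run more than one method on the same instance and report
the features that the methods agree on.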