In this post, we will focus on interpretability to assess what the ACL tear detector we trained in the previous article actually learnt.
To do this, we'll explore a popular interpretability technique called Class Activation Mapping (CAM), which applies to convolutional neural networks with a particular architecture. With this method, we'll highlight the discriminative image regions the network focuses on before making a prediction, thus explaining its decision process and building trust.
CAM is also a generic method that can be applied to a wide range of computer vision projects. So if you're looking for a way to make your CNNs interpretable, read on and adapt the source code to your own models.
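To make the idea concrete before diving in, here is a minimal NumPy sketch of the core CAM computation: for a network whose last convolutional layer is followed by global average pooling and a single linear layer, the activation map for a class is just the weighted sum of the final feature maps, using that class's row of the linear layer's weights. The function and variable names below are illustrative, not taken from the tutorial's source code.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Weighted sum of the last conv layer's feature maps.

    feature_maps: (C, H, W) activations from the final conv layer
    fc_weights:   (num_classes, C) weights of the linear layer
                  that follows global average pooling
    class_idx:    index of the class to explain
    """
    weights = fc_weights[class_idx]                     # (C,)
    cam = np.tensordot(weights, feature_maps, axes=1)   # (H, W)
    cam = np.maximum(cam, 0)                            # keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                           # normalise to [0, 1]
    return cam

# Toy example: 4 feature maps of size 7x7, a 2-class linear head.
rng = np.random.default_rng(0)
fmaps = rng.random((4, 7, 7))
w = rng.random((2, 4))
cam = class_activation_map(fmaps, w, class_idx=1)
print(cam.shape)
```

In practice the resulting low-resolution map is upsampled to the input image size and overlaid as a heatmap, which is what the rest of this post walks through.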