Master Explainable AI: Interpreting Image Classifier Decisions
Learn about popular Explainable AI algorithms such as Integrated Gradients, LIME, class activation maps, and counterfactual explanations.
Explainable AI is a set of tools and frameworks that help you understand and interpret the internal logic behind the predictions of a deep learning network. These insights into the model's behavior let you identify and mitigate issues during the development phase.
In this course, you will be introduced to popular Explainable AI algorithms, including SmoothGrad, Integrated Gradients, LIME, class activation maps, counterfactual explanations, and feature attributions, applied to image classification networks such as MobileNet-V2 trained on large-scale datasets like ImageNet-1K.
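To give a flavor of the kind of method covered, here is a minimal sketch of a vanilla gradient saliency map, assuming PyTorch and a torchvision MobileNetV2 pretrained on ImageNet-1K; the input file name cat.jpg is a placeholder:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load a pretrained MobileNetV2 (ImageNet-1K weights) in eval mode.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "cat.jpg" is a placeholder input image.
image = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_()

# Gradient of the top class score with respect to the input pixels.
logits = model(image)
top_class = logits.argmax(dim=1)
logits[0, top_class].backward()

# Saliency: maximum absolute gradient across the three color channels.
saliency = image.grad.abs().max(dim=1)[0].squeeze()  # shape (224, 224)
```

Visualizing `saliency` as a heatmap over the input highlights which pixels most influenced the predicted class; this is the simplest of the attribution methods taught in the course.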
By the end of this course, you will understand the need for Explainable AI and be able to design and implement popular explanation algorithms such as saliency maps, class activation maps, and counterfactual explanations. You will also be able to evaluate and quantify the quality of neural network explanations using several interpretability metrics, along the lines of the sketch below.
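As an illustration of quantitative evaluation, here is a minimal sketch of a deletion-style faithfulness metric, assuming the `model`, `image`, and `saliency` tensors from the previous snippet; `deletion_score` is a hypothetical helper name, not a function from any library:

```python
import torch

def deletion_score(model, image, saliency, steps=10):
    """Deletion metric: zero out the most salient pixels first and track
    how quickly the predicted class probability drops. A faithful
    explanation should produce a steep drop."""
    with torch.no_grad():
        target = model(image).argmax(dim=1)
    order = saliency.flatten().argsort(descending=True)
    masked = image.detach().clone()
    chunk = order.numel() // steps
    probs = []
    for i in range(steps):
        idx = order[i * chunk:(i + 1) * chunk]
        # Remove the next chunk of most-salient pixels across all channels.
        masked.view(masked.size(0), masked.size(1), -1)[:, :, idx] = 0.0
        with torch.no_grad():
            p = model(masked).softmax(dim=1)[0, target]
        probs.append(p.item())
    return probs  # the area under this curve scores the saliency map

curve = deletion_score(model, image, saliency)
```

A lower area under the resulting probability curve indicates that the saliency map genuinely identified the pixels the model relies on.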