Explainable AI
Learn how to interpret the decision-making process of deep neural networks.
About this course
The inner workings of many deep learning systems are difficult, if not impossible, for the human mind to comprehend. Explainable Artificial Intelligence (XAI) aims to provide AI experts with transparency into these systems. In this course, you’ll describe what Explainable AI is, how to use it, and the data structures behind XAI’s preferred algorithms. Next, you’ll explore the interpretability problem and today’s state-of-the-art solutions to it. You’ll identify XAI regulations, define the “right to explanation”, and illustrate real-world examples where it has applied. You’ll move on to recognize both the Counterfactual and Axiomatic methods, weighing their pros and cons. You’ll investigate the intelligible models method, along with the concepts of monotonicity and rationalization. Finally, you’ll learn how to use a Generative Adversarial Network.
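To give a flavor of the Counterfactual method mentioned above, here is a minimal sketch. The model, feature names, and threshold are all hypothetical, invented for illustration: a counterfactual explanation answers "what is the smallest change to this input that would flip the model's decision?"

```python
# Hypothetical toy loan-scoring model (not from any real system):
# approve when income outweighs weighted debt by a fixed margin.
def approve(income, debt):
    """Toy classifier: approve a loan when income - 0.5*debt exceeds 40."""
    return income - 0.5 * debt > 40

def counterfactual_income(income, debt, step=1.0, max_steps=1000):
    """Smallest income increase that flips a rejection into an approval.

    A simple linear search stands in for the optimization that real
    counterfactual methods perform over many features at once.
    """
    if approve(income, debt):
        return 0.0  # already approved; no change needed
    for i in range(1, max_steps + 1):
        if approve(income + i * step, debt):
            return i * step
    return None  # no counterfactual found within the search budget

# An applicant rejected at income=50, debt=30 learns exactly how much
# more income would change the outcome.
delta = counterfactual_income(50, 30)
print(delta)  # prints 6.0
```

Counterfactuals are attractive for the "right to explanation" because they tell the affected person what to change, without exposing the model's internals.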
Learning objectives
discover the key concepts covered in this course
recognize what Explainable AI is and its significance
define the interpretability problem and its importance