Machine Learning Explainability
Understand the importance of machine learning explainability. Learn to interpret and explain complex ML models using techniques such as SHAP and LIME to ensure transparency and trust in AI predictions.
At a Glance
In this Guided Project, we will walk through explainability techniques for several types of machine learning models, including linear regression, LightGBM (light gradient boosting machine), CNNs, and pre-trained transformers.
Explainability refers to understanding why a model makes a certain prediction. This typically comes in the form of knowing the relationship between a model's prediction and the inputs (text, pixels, tabular features, etc.) used to generate it. Linear models like linear regression, and simple tree-based models like decision trees, are known to be easily interpretable. Deep learning models, by contrast, are largely black boxes, which makes it much harder to understand how they arrive at their predictions. In this Guided Project, we will use SHAP, a widely used explainability framework, to calculate the contribution of each feature to the prediction for various types of models.
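As a preview, the core SHAP property is that each prediction decomposes into a base value plus per-feature contributions. The sketch below shows this on a toy linear model; it is illustrative only (the data, variable names, and setup are assumptions, not taken from the course notebook):

```python
# Minimal illustrative sketch: SHAP feature contributions for a linear model.
# The toy dataset and names here are assumptions, not the course's notebook.
import numpy as np
import shap
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # 200 samples, 3 features
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)

# LinearExplainer attributes each prediction to the input features,
# measured against a background (reference) distribution.
explainer = shap.LinearExplainer(model, X)
shap_values = explainer.shap_values(X)

# Core SHAP property: base value + per-feature contributions = prediction.
pred = model.predict(X[:1])[0]
print(np.isclose(explainer.expected_value + shap_values[0].sum(), pred))
```

For a linear model these contributions recover the familiar coefficient-times-feature intuition, which is why LinearExplainer is both fast and exact.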
A Look at the Project Ahead
After completing this Guided Project, you will be able to:
- Use LinearExplainer to explain linear models like linear regression
- Use TreeExplainer to explain ensemble models like LightGBM
- Use GradientExplainer to explain CNN models
- Use the generic SHAP Explainer to explain pre-trained transformer models (illustrative sketches of two of these workflows follow below)
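To give a flavour of these workflows before you start, here is a hedged sketch of TreeExplainer applied to a LightGBM regressor. The dataset and plotting choices are assumptions for illustration and may differ from the course notebook:

```python
# Illustrative sketch: SHAP TreeExplainer with a LightGBM model.
# Assumes shap, lightgbm, and scikit-learn are installed; the dataset
# choice is an assumption, not necessarily what the course uses.
import shap
import lightgbm as lgb
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = lgb.LGBMRegressor(n_estimators=200).fit(X_train, y_train)

# TreeExplainer computes SHAP values quickly and exactly for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view: which features drive the model's predictions, and how.
shap.summary_plot(shap_values, X_test)
```

For text models, the generic shap.Explainer can wrap a Hugging Face pipeline directly. Again, this is a sketch under assumptions (the pipeline downloads a default sentiment model at first run, and the input sentence is just an example):

```python
# Illustrative sketch: explaining a pre-trained transformer with shap.Explainer.
# Assumes the transformers and shap packages are installed.
import shap
from transformers import pipeline

# top_k=None returns scores for all classes, which SHAP needs to attribute.
classifier = pipeline("sentiment-analysis", top_k=None)

# The generic Explainer picks a suitable algorithm and masker for the pipeline.
explainer = shap.Explainer(classifier)
shap_values = explainer(["This Guided Project made SHAP click for me."])

# Token-level contributions to each predicted class, rendered as text.
shap.plots.text(shap_values[0])
```

In both cases the output is the same kind of object: per-input attributions that sum to the model's prediction, which is what makes SHAP usable across model families.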
This course mainly uses Python and JupyterLab. Familiarity with these tools is recommended but not required, as this Guided Project is designed for complete beginners.
Your Instructor
Kopal Garg
I am a Data Scientist Intern at IBM and a Master's student in Computer Science at the University of Toronto. I am passionate about building AI-based solutions that improve various aspects of human life.