This course is your perfect entry point into the exciting field of Reinforcement Learning, where Artificial Intelligence
agents learn to make sequential decisions automatically through trial and error.
Specifically, this course focuses on Multi-Armed Bandit problems and the practical, hands-on implementation of various
algorithmic strategies for balancing exploration and exploitation. Whenever you want to consistently make the best
choice out of a limited number of options over time, you are dealing with a Multi-Armed Bandit problem, and this course teaches
you everything you need to know to build realistic business agents that handle such situations.
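To give a flavour of the kind of problem involved, here is a minimal sketch (not taken from the course materials) of a simple bandit in Python: each arm pays out with a hidden probability, and the agent must discover the best arm purely from the rewards it observes. The class name and probabilities are illustrative assumptions.

import random

class BernoulliBandit:
    """A simple multi-armed bandit: each arm pays 1 with a fixed,
    unknown probability and 0 otherwise (hypothetical example values)."""

    def __init__(self, arm_probabilities):
        self.arm_probabilities = arm_probabilities

    def pull(self, arm):
        """Pull one arm and return its reward (1 or 0)."""
        return 1 if random.random() < self.arm_probabilities[arm] else 0

# Three options with different hidden success rates -- the agent's job
# is to learn which arm is best just by pulling them over time.
bandit = BernoulliBandit([0.2, 0.5, 0.7])
print(bandit.pull(2))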
With concise explanations, this course teaches you how to confidently and painlessly translate seemingly scary mathematical formulas into
Python code. We understand that not everyone is technically adept at mathematics, so this course
intentionally stays away from maths unless it is necessary. And even when mathematics does become necessary,
the approach taken in this course is such that anyone with basic algebra skills can understand it and, most importantly, easily
translate the maths into code, building useful intuitions in the process.
Some of the algorithmic strategies taught in this course are Epsilon-Greedy, Softmax Exploration, Optimistic Initialization,
Upper Confidence Bounds, and Thompson Sampling. With these tools under your belt, you are well equipped to
build and deploy AI agents that can handle critical business operations under uncertainty.
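As a taste of the first strategy on that list, the following is a minimal, illustrative sketch of Epsilon-Greedy, not the course's own code: with probability epsilon the agent tries a random arm (exploration), otherwise it picks the arm with the highest estimated reward (exploitation). The arm probabilities, step count, and function name are assumed for the example.

import random

def epsilon_greedy(arm_probabilities, epsilon=0.1, steps=1000):
    """Illustrative Epsilon-Greedy: explore with probability epsilon,
    otherwise exploit the arm with the highest estimated mean reward."""
    n_arms = len(arm_probabilities)
    counts = [0] * n_arms        # how many times each arm was pulled
    estimates = [0.0] * n_arms   # running estimate of each arm's mean reward

    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(n_arms)          # explore a random arm
        else:
            arm = estimates.index(max(estimates))   # exploit the best-looking arm
        reward = 1 if random.random() < arm_probabilities[arm] else 0
        counts[arm] += 1
        # Incremental update of the arm's estimated mean reward
        estimates[arm] += (reward - estimates[arm]) / counts[arm]

    return estimates

# Hypothetical three-armed bandit with hidden success rates of 0.2, 0.5, 0.7;
# the estimates returned should roughly recover those values for arms it exploits.
print(epsilon_greedy([0.2, 0.5, 0.7]))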