Many real-world problems are not fully observable and involve some degree of uncertainty, which makes them challenging for AI systems to solve. In this course, you will learn how agents can handle uncertainty and make the best possible decisions.
describe uncertainty and how it applies to AI
describe how probability theory is used to represent knowledge and help an intelligent agent make decisions
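As a taste of how probability theory can represent an agent's knowledge, the sketch below applies Bayes' rule to update a belief about a hidden state from an uncertain observation. The states ("rain"/"no_rain") and all probability values are invented for illustration, not taken from the course materials.

```python
# Bayes' rule: P(state | obs) is proportional to P(obs | state) * P(state).
# Hypothetical scenario: the agent observes wet grass and updates its
# belief about whether it rained.

prior = {"rain": 0.3, "no_rain": 0.7}   # agent's prior belief (assumed)
likelihood = {                           # P(wet_grass | state) (assumed)
    "rain": 0.9,
    "no_rain": 0.2,
}

# Unnormalized posterior, then normalize so beliefs sum to 1.
unnorm = {s: likelihood[s] * prior[s] for s in prior}
evidence = sum(unnorm.values())
posterior = {s: p / evidence for s, p in unnorm.items()}

print(posterior)  # belief after seeing wet grass; "rain" is now more likely
```

Note that the evidence term is just a normalizer: the agent only needs the relative weights of the hypotheses to update its belief.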
Understanding Utility Theory
describe utility theory and how an agent can calculate expected utility of decisions
describe how preferences are involved in decision making and how the same problem can have different utility functions with different agents
describe how risks are taken into consideration when calculating utility and how an agent's attitude toward risk can change the utility function
describe the utility of information gain and how information gain can influence decisions
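The objectives above can be made concrete with a small sketch: computing the expected utility of a lottery under two different utility functions, showing how a risk-averse (concave) utility function changes the evaluation. The lottery, payoffs, and square-root utility are illustrative assumptions, not part of the course content.

```python
import math

# Hypothetical lottery: 50% chance of $100, 50% chance of $0.
outcomes = [(0.5, 100.0), (0.5, 0.0)]

def expected_utility(lottery, utility):
    """EU(L) = sum over outcomes of probability * utility(payoff)."""
    return sum(p * utility(x) for p, x in lottery)

linear = lambda x: x              # risk-neutral agent
concave = lambda x: math.sqrt(x)  # risk-averse agent (diminishing returns)

eu_neutral = expected_utility(outcomes, linear)   # values the gamble at 50
eu_averse = expected_utility(outcomes, concave)   # utility units, not dollars

# Certainty equivalent for the risk-averse agent: the sure amount whose
# utility equals the gamble's expected utility, i.e. sqrt(ce) = eu_averse.
ce = eu_averse ** 2
print(eu_neutral, eu_averse, ce)
```

The same lottery yields different decisions for different agents: the risk-averse agent would trade the gamble for any sure payment above its certainty equivalent of $25, while the risk-neutral agent demands $50.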
Examining the Markov Decision Process
define Markov chains
define the Markov Decision Process and describe how it applies to AI
describe the value iteration algorithm to decide on an optimal policy for a Markov Decision Process
define the partially observable Markov Decision Process and contrast it with a regular Markov Decision Process
describe how the value iteration algorithm is used with the partially observable Markov Decision Process
describe how a partially observable Markov Decision Process can be implemented with an intelligent agent
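To illustrate the value iteration objective above, here is a minimal sketch on a two-state MDP. The state names, actions, transition probabilities, rewards, and discount factor are all invented for illustration.

```python
# transitions[state][action] = list of (probability, next_state, reward).
# Hypothetical MDP: a machine that can run "slow" (safe) or "fast" (more
# reward, but risks overheating).
transitions = {
    "cool": {
        "slow": [(1.0, "cool", 1.0)],
        "fast": [(0.5, "cool", 2.0), (0.5, "warm", 2.0)],
    },
    "warm": {
        "slow": [(0.5, "cool", 1.0), (0.5, "warm", 1.0)],
        "fast": [(1.0, "warm", -10.0)],
    },
}
gamma = 0.9  # discount factor (assumed)

# Value iteration: repeatedly apply the Bellman optimality update
#   V(s) <- max over actions of sum p * (r + gamma * V(s')).
V = {s: 0.0 for s in transitions}
for _ in range(1000):  # more than enough iterations to converge here
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in transitions[s].values()
        )
        for s in transitions
    }

# Extract the optimal policy greedily from the converged values.
policy = {
    s: max(
        transitions[s],
        key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in transitions[s][a]),
    )
    for s in transitions
}
print(V, policy)
```

With these numbers the optimal policy runs fast while cool and slows down while warm: the short-term reward of "fast" in the warm state is outweighed by its large penalty. A POMDP solver applies the same idea over belief states (probability distributions over states) rather than over states directly.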
Practice: Markov Decision Process
describe the Markov Decision Process and how it can be used by an intelligent agent
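One way an intelligent agent uses an MDP in practice: once a policy is fixed, the MDP induces a plain Markov chain over states, which the agent can simulate to estimate its long-run reward. The chain below and its numbers are illustrative assumptions.

```python
import random

# chain[state] = list of (probability, next_state, reward) under the
# agent's chosen action in that state (assumed, illustrative numbers).
chain = {
    "cool": [(0.5, "cool", 2.0), (0.5, "warm", 2.0)],
    "warm": [(0.5, "cool", 1.0), (0.5, "warm", 1.0)],
}

def step(state):
    """Sample the next state and reward from the chain's distribution."""
    r = random.random()
    acc = 0.0
    for p, nxt, reward in chain[state]:
        acc += p
        if r < acc:
            return nxt, reward
    return chain[state][-1][1], chain[state][-1][2]  # guard against rounding

random.seed(0)
state, total, n_steps = "cool", 0.0, 10_000
for _ in range(n_steps):
    state, reward = step(state)
    total += reward

avg_reward = total / n_steps
print(avg_reward)  # long-run average reward per step
```

Since both states are visited about equally often in the long run under this chain, the average reward per step settles near 1.5; Monte Carlo estimates like this are one way an agent can evaluate a policy without solving the MDP analytically.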
The materials within this course focus on the Knowledge, Skills, and Abilities (KSAs) identified within the Specialty Areas of the National Cybersecurity Workforce Framework.