Explainability in AI


Trustworthiness is a key concept in artificial intelligence and machine learning. Trustworthy AI systems can be relied upon to make robust, ethical and legal decisions. However, efforts to address the ethical and societal concerns around the use of AI have not kept pace with the rapid evolution of AI technologies.

Due to these concerns, a number of socio-technical principles have emerged to help guide the development of trustworthy AI systems. In this course, we take a closer look at the socio-technical principle of explainability, which means providing clear and coherent explanations for specific model predictions or decisions.
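
As a simple illustration of what such an explanation can look like, the sketch below attributes a single model prediction to its most influential input features using a linear model. It uses the scikit-learn library purely for illustration; the code and library are not part of the course material.

```python
# A minimal sketch of a per-prediction explanation (scikit-learn is assumed here;
# it is not required for this course). For a linear model, each feature's
# contribution to one prediction can be read off as coefficient * feature value.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
y = data.target

model = LogisticRegression(max_iter=1000).fit(X, y)

# Explain one prediction by listing the features that contributed most to it.
sample = X[0]
contributions = model.coef_[0] * sample  # per-feature contribution to the logit
top = sorted(zip(data.feature_names, contributions),
             key=lambda pair: abs(pair[1]), reverse=True)[:5]

print("Predicted class:", data.target_names[model.predict([sample])[0]])
for name, value in top:
    print(f"  {name}: {value:+.3f}")
```

Even this basic example shows the idea behind explainability: a specific decision is accompanied by a human-readable account of why the model made it.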

This course is targeted at people whose organisations use, or are considering using, AI in their in-house systems and processes. It is primarily for people who need to understand basic concepts and terminology relating to explainable AI systems.

A basic familiarity with AI and machine learning principles is helpful, but no prior knowledge is assumed.

Learning Outcomes

At the end of this course, learners will be able to:

  • Describe the metrics used to assess trustworthy and safe AI systems
  • Explain the importance of explainability in AI systems, and its benefits
  • Describe approaches to evaluating and measuring explainability
  • Identify the risks and trade-offs associated with the different metrics for trustworthy and safe AI systems