Risk Management Frameworks for Trustworthy AI

A key concept in artificial intelligence and machine learning is trustworthiness. Trustworthy AI systems can be relied upon to make decisions that are robust, ethical, and lawful.

If done right, artificial intelligence has the potential to benefit societies, organisations, and individuals by fundamentally changing how we collect and process data. However, efforts to address the ethical and societal concerns raised by AI have not kept pace with the rapid evolution of AI technologies.

In this short course, we will discuss:

  • The definition of risk in the context of AI systems
  • The general approach to risk management
  • The unique risks and challenges posed by AI technologies
  • The scope and purpose of risk management frameworks for AI
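A common way to frame the general approach to risk management is to score each risk by its likelihood and impact. The sketch below illustrates this idea; the 1–5 scales, thresholds, and example AI risks are hypothetical illustrations, not taken from any specific framework.

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Score a risk by multiplying likelihood and impact (each 1-5)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be between 1 and 5")
    return likelihood * impact

def risk_level(score: int) -> str:
    """Map a raw score (1-25) onto a coarse level (thresholds are illustrative)."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Hypothetical AI-related risks: (name, likelihood, impact)
risks = [
    ("biased training data", 4, 4),
    ("model drift in production", 3, 3),
    ("adversarial input", 2, 5),
]

for name, likelihood, impact in risks:
    score = risk_score(likelihood, impact)
    print(f"{name}: score={score}, level={risk_level(score)}")
```

In practice, frameworks differ in how they define, scale, and prioritise risks; the multiplication here simply captures the intuition that a risk grows with both how likely it is and how severe its consequences would be.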

Learning Outcomes

By the end of this course, you will be able to:

  • Describe risk in the context of AI systems
  • Outline the general approach to risk management
  • Identify unique risks and challenges posed by AI technologies
  • Summarise the scope and purpose of risk management frameworks for AI