
Trustworthiness is a key concept in artificial intelligence and machine learning: trustworthy AI systems can be relied upon to make robust, ethical and lawful decisions. However, efforts to address the ethical and societal concerns around the use of AI have not kept pace with the rapid evolution of the technology.
In response to these concerns, a number of socio-technical principles have emerged to guide the development of trustworthy AI systems. In this course, we take a closer look at one of these principles, explainability: the ability to provide clear and coherent explanations for specific model predictions or decisions.
This course is targeted at people whose organisations use, or are considering using, AI in their in-house systems and processes. It is primarily for people who need to understand basic concepts and terminology relating to explainable AI systems.
A basic understanding of AI and machine learning principles is recommended but not required; no prior knowledge of explainable AI is assumed.
At the end of this course, learners will be able to: