Explaining AI decisions

Context

As artificial intelligence (AI) becomes more pervasive and the consequences of its decisions become more significant, people want to understand how and why AI systems make decisions so they can assess the correctness and fairness of those decisions.


Solution

Truly explainable AI requires integrating the human and technical challenges: understanding how people interpret and evaluate explanations (the cognitive and social side), and automatically generating and evaluating explanations that meet those expectations (the technical side).

In research funded by the Defence Science and Technology Group's Next Generation Technologies Fund, we used existing theories of how people understand complex situations to devise an explainability model. The model allows the decision-making processes of an AI system to be extracted and presented as explanatory information to military commanders.
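For illustration only, the sketch below shows one way a planning agent's choice might be surfaced as a contrastive "why this and not that" explanation for a human operator. The class names, allocation options, and utility scores are hypothetical and are not the project's actual model or code; they stand in for whatever internal decision record the planner keeps.

```python
"""Hypothetical sketch: exposing a planner's rationale as a contrastive explanation."""
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Allocation:
    """One candidate resource-allocation plan considered by the agent."""
    name: str
    assignments: Dict[str, str]   # task -> assigned resource
    expected_utility: float       # planner's estimate of the plan's value


def explain_choice(chosen: Allocation, alternatives: List[Allocation]) -> str:
    """Explain why the chosen plan was preferred over each rejected alternative."""
    lines = [f"Selected '{chosen.name}' (expected utility {chosen.expected_utility:.2f})."]
    for alt in alternatives:
        gap = chosen.expected_utility - alt.expected_utility
        lines.append(
            f"Preferred over '{alt.name}' because it scores {gap:.2f} higher on expected utility."
        )
    return "\n".join(lines)


if __name__ == "__main__":
    chosen = Allocation("Plan A", {"patrol": "UAV-1", "resupply": "Truck-2"}, 0.82)
    rejected = [Allocation("Plan B", {"patrol": "Truck-2", "resupply": "UAV-1"}, 0.64)]
    print(explain_choice(chosen, rejected))
```

The point of the sketch is the design choice, not the arithmetic: the agent records the alternatives it rejected and the basis for rejecting them, so that an explanation can be generated in the contrastive form people naturally ask for ("why A rather than B?").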

We assessed the model in a series of experiments. Participants performed better on a resource allocation task assisted by an AI planning agent, and trusted the agent more, when given our AI decision-making explanations than with an existing baseline in which no explanations were provided.

Impact

This work has highlighted the importance of explainability for improving human performance and trust in autonomous systems. It has led to a further three-year project with the Defence Science and Technology Group investigating explainability in complex maritime scenarios, where the reasons behind decisions are key to helping intelligence analysts assess the trustworthiness of a decision and the information on which it is based.

Lead researcher

Assoc Prof Tim Miller
tmiller@unimelb.edu.au