The project examines how the timing of a computer agent's decisions can lead humans to misjudge the agent's intentions and mispredict its actions. Humans prefer interacting with, and show increased trust in, automated agents that display human-like variability. Introducing human-like timing into driverless vehicles could also improve road safety in situations where humans must predict and react to the actions of autonomous cars. The goals of this project are to determine how sensitive people are to human-like timing, and how deviations from that timing affect prediction. Specifically, we aim to establish whether making a simulated driverless car's timing more human-like improves people's ability to predict its actions.
- Daniel Little, Associate Professor, Melbourne School of Psychological Sciences