Body Form Modulates the Prediction of Human and Artificial Behaviour from Gaze Observation

Michele Scandola, Emily S. Cross, Nathan Caruana, Emmanuele Tidoni

Research output: Contribution to journal › Article › peer-review


Abstract

The future of human–robot collaboration relies on people's ability to understand and predict robots' actions. The machine-like appearance of robots, as well as contextual information, may influence people's ability to anticipate robot behaviour. We conducted six experiments to investigate how spatial cues and task instructions modulate people's ability to understand what a robot is doing. Participants observed goal-directed and non-goal-directed gaze shifts made by human and robot agents, as well as directional cues displayed by a triangle. We report that biasing an observer's attention by showing just one object an agent can interact with improves people's ability to understand what humanoid robots will do. Crucially, this cue had no impact on people's ability to predict the upcoming behaviour of the triangle. Moreover, task instructions that focused on the visual and motor consequences of the observed gaze influenced mentalising abilities. We suggest that an agent's human-like shape and physical capabilities facilitate the prediction of its upcoming actions. These findings expand current models of gaze perception and may have important implications for human–human and human–robot collaboration.

Original language: English
Pages (from-to): 1365–1385
Number of pages: 21
Journal: International Journal of Social Robotics
Volume: 15
Issue number: 8
Early online date: 24 Jan 2023
DOIs
Publication status: Published - Aug 2023
Externally published: Yes

Keywords

  • Action prediction
  • Body perception
  • Gaze perception
  • Human–robot interaction
  • Mentalising
