Abstract
The future of human–robot collaboration relies on people’s ability to understand and predict robots’ actions. The machine-like appearance of robots, as well as contextual information, may influence people’s ability to anticipate the behaviour of robots. We conducted six separate experiments to investigate how spatial cues and task instructions modulate people’s ability to understand what a robot is doing. Participants observed goal-directed and non-goal-directed gaze shifts made by human and robot agents, as well as directional cues displayed by a triangle. We report that biasing an observer’s attention, by showing just one object an agent can interact with, can improve people’s ability to understand what humanoid robots will do. Crucially, this cue had no impact on people’s ability to predict the upcoming behaviour of the triangle. Moreover, task instructions that focus on the visual and motor consequences of the observed gaze were found to influence mentalising abilities. We suggest that the human-like shape of an agent and its physical capabilities facilitate the prediction of an upcoming action. The reported findings expand current models of gaze perception and may have important implications for human–human and human–robot collaboration.
| Original language | English |
| --- | --- |
| Pages (from-to) | 1365-1385 |
| Number of pages | 21 |
| Journal | International Journal of Social Robotics |
| Volume | 15 |
| Issue number | 8 |
| Early online date | 24 Jan 2023 |
| DOIs | |
| Publication status | Published - Aug 2023 |
| Externally published | Yes |
Keywords
- Action prediction
- Body perception
- Gaze perception
- Human–robot interaction
- Mentalising