Abstract
Human-Robot Interaction (HRI) is an emerging subfield of service robotics. While most existing approaches rely on explicit signals (e.g. voice, gesture) to engage, the current literature lacks solutions that address implicit user needs. In this paper, we present an architecture to (a) detect a user's implicit need for help and (b) generate a set of assistive actions without prior learning. Task (a) will be performed using state-of-the-art solutions for Scene Graph Generation coupled with commonsense knowledge, whereas task (b) will be performed using additional commonsense knowledge as well as sentiment analysis of the graph structure. Finally, we propose an evaluation of our solution using established benchmarks (e.g. the ActionGenome dataset) along with human experiments. The main motivation of our approach is the embedding of the perception-decision-action loop in a single architecture.
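As a rough illustration of the loop the abstract describes, the Python sketch below strings together the three stages: scene-graph perception, a commonsense check for an implicit need of help, and selection of an assistive action by score. Every name here (`generate_scene_graph`, `NEEDS_HELP`, `CANDIDATE_ACTIONS`, the toy triples and scores) is our own placeholder, not the paper's API or data; a real system would plug in an SGG model and a knowledge base such as ConceptNet.

```python
# Minimal sketch of a perception-decision-action loop.
# All component names and values are hypothetical placeholders.

from dataclasses import dataclass


@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str


def generate_scene_graph(frame) -> list[Triple]:
    # (a) Perception: a scene-graph generator would emit triples like
    # these from a camera frame; hard-coded here for illustration.
    return [
        Triple("person", "reaching_for", "cup"),
        Triple("cup", "on", "high_shelf"),
    ]


# Commonsense lookup: a toy table mapping (activity, location) pairs
# to an implicit "needs help" signal; a real system would query a
# commonsense knowledge base instead.
NEEDS_HELP = {("reaching_for", "high_shelf"): True}


def implicit_need_detected(graph: list[Triple]) -> bool:
    predicates = {t.predicate for t in graph}
    locations = {t.obj for t in graph if t.predicate == "on"}
    return any(
        NEEDS_HELP.get((p, loc), False)
        for p in predicates
        for loc in locations
    )


# (b) Decision: candidate assistive actions with illustrative
# sentiment-like scores; in the paper these would be derived from
# commonsense knowledge and the graph structure.
CANDIDATE_ACTIONS = {
    "fetch_object": 0.9,
    "ask_if_help_needed": 0.6,
    "do_nothing": 0.1,
}


def select_action() -> str:
    # Pick the highest-scoring assistive action.
    return max(CANDIDATE_ACTIONS, key=CANDIDATE_ACTIONS.get)


if __name__ == "__main__":
    graph = generate_scene_graph(frame=None)
    if implicit_need_detected(graph):
        print("Assistive action:", select_action())
```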
Original language | English |
---|---|
Number of pages | 9 |
Journal | CEUR Workshop Proceedings |
Volume | 3121 |
Publication status | Published - Mar 2022 |
Event | AAAI 2022 Spring Symposium on Machine Learning and Knowledge Engineering for Hybrid Intelligence, AAAI-MAKE 2022, Stanford University, Palo Alto, United States; Duration: 21 Mar 2022 → 23 Mar 2022 |
Keywords
- Cognitive Robotics
- Commonsense Reasoning
- Knowledge Graph
- Vision-to-Language