Description
The ability to perceive space and reason about spatio-temporal relations is effortless for humans, but it has proved to be a challenge for computational systems, which struggle to process the various nuances of our conceptualisation of the world. This talk presents two results on the development of qualitative spatial reasoning tools applied to multi-robot systems, aiming to bridge the gap between the human and the machine way of interpreting and acting upon the external world.

The first is a novel algorithm for Qualitative Case-Based Reasoning and Learning (QCBRL), a case-based reasoning system that uses qualitative spatial representations to retrieve and reuse cases by means of relations between objects in the environment. Combined with reinforcement learning, QCBRL allows the agent to learn new qualitative cases at runtime, without assuming a pre-processing step. Experimental evaluation of QCBRL was conducted in a simulated robot-soccer environment and on a real humanoid robot. Results show that QCBRL outperforms traditional reinforcement-learning methods and state-of-the-art case-based reasoning systems.

The second result is an algorithm for combining the information obtained from multiple (distinct and egocentric) viewpoints to infer the pose, the route and the actions that guide a sensory-deprived agent toward a goal destination. The information from the multiple observers was fused into a set of qualitative directions that can be easily interpreted by a human agent, but can also be easily translated into low-level robot actions. Experimental evaluation was again conducted in simulated and real robot-soccer environments.

Period | 1 Oct 2022 |
---|---|
Held at | IEEE Computational Intelligence Society (IEEE CIS) |
Degree of Recognition | International |
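To make the QCBRL idea concrete, the following is a minimal sketch of case retrieval and runtime case learning over qualitative spatial relations. Everything here is an illustrative assumption rather than the talk's actual implementation: the relation triples, the Jaccard similarity measure, the `SIM_THRESHOLD` and `ALPHA` constants, and the quality-update rule are all hypothetical stand-ins for the mechanisms the abstract describes.

```python
# Hypothetical sketch of qualitative case-based reasoning with an
# RL-style case-quality update, in the spirit of (but not taken from)
# QCBRL. All names, thresholds, and rules are illustrative assumptions.

ALPHA = 0.5          # learning rate for case-quality updates (assumed)
SIM_THRESHOLD = 0.6  # minimum similarity required to reuse a case (assumed)


class Case:
    """A case: a set of qualitative relations paired with an action."""

    def __init__(self, relations, action):
        self.relations = frozenset(relations)  # e.g. ("ball", "left_of", "robot")
        self.action = action                   # action this case proposes
        self.q = 0.0                           # learned quality estimate


def similarity(rels_a, rels_b):
    """Jaccard overlap between two sets of qualitative relations."""
    union = rels_a | rels_b
    return len(rels_a & rels_b) / len(union) if union else 1.0


class QCBRLAgent:
    def __init__(self):
        self.case_base = []

    def retrieve(self, relations):
        """Return the most similar stored case above threshold, or None."""
        best, best_sim = None, SIM_THRESHOLD
        for case in self.case_base:
            s = similarity(case.relations, frozenset(relations))
            if s >= best_sim:
                best, best_sim = case, s
        return best

    def act(self, relations, default_action):
        """Reuse a retrieved case, or learn a new one at runtime."""
        case = self.retrieve(relations)
        if case is None:
            # No pre-processing step: unseen situations become new cases.
            case = Case(relations, default_action)
            self.case_base.append(case)
        return case

    def update(self, case, reward):
        """RL-style update of the case's quality after acting."""
        case.q += ALPHA * (reward - case.q)
```

A repeated situation (the same relation set) retrieves the stored case instead of creating a duplicate, while a sufficiently different situation grows the case base — the runtime learning behaviour the abstract highlights.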
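The second result can likewise be sketched in a few lines: several external observers each report a qualitative direction toward the goal, the reports are fused, and the fused direction maps to a low-level command for the sensory-deprived agent. The eight-direction vocabulary, the majority-vote fusion rule, and the command names below are illustrative assumptions, not the algorithm presented in the talk.

```python
# Hypothetical sketch of fusing qualitative directions reported by
# multiple observers into one low-level command for a sensory-deprived
# agent. Vocabulary, voting rule, and command mapping are assumptions.

from collections import Counter

# Mapping from a fused qualitative direction (human-interpretable) to a
# low-level robot action (machine-executable) -- assumed names.
COMMANDS = {
    "front": "walk_forward",
    "front-left": "turn_slightly_left",
    "left": "turn_left",
    "back-left": "turn_around_left",
    "back": "turn_around",
    "back-right": "turn_around_right",
    "right": "turn_right",
    "front-right": "turn_slightly_right",
}


def fuse_directions(reports):
    """Fuse the observers' reports by simple majority vote (assumed rule)."""
    if not reports:
        raise ValueError("need at least one observer report")
    return Counter(reports).most_common(1)[0][0]


def to_action(direction):
    """Translate the fused qualitative direction into a robot action."""
    return COMMANDS[direction]
```

The same fused symbol serves both audiences the abstract mentions: a human can read "front-left" directly, while the robot executes the mapped command.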