The present work addresses the challenge of integrating low-level information with high-level knowledge (known as the semantic gap) in content-based image retrieval by introducing an approach that describes images by means of spatial relations. The proposed approach, called Image Retrieval using Region Analysis (IRRA), relies on decomposing images into pairs of objects. This method generates a representation composed of n triples, each containing a noun, a preposition, and another noun. This representation paves the way for image retrieval based on spatial relations. Results for an indoor/outdoor classifier show that a neural network alone achieves 88% precision and recall, and that combining it with an ontology improves this result by 10 percentage points, reaching 98% precision and recall.
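The triple representation described above can be sketched in a few lines. The names, data structures, and query function below are illustrative assumptions, not the paper's actual implementation; IRRA's real pipeline derives the nouns and prepositions from region analysis of the image.

```python
from dataclasses import dataclass

# Hypothetical sketch of the (noun, preposition, noun) representation
# described in the abstract. Each detected object pair yields one triple.

@dataclass(frozen=True)
class SpatialTriple:
    subject: str      # noun naming the first region, e.g. "lamp"
    preposition: str  # spatial relation, e.g. "above", "beside"
    obj: str          # noun naming the second region, e.g. "table"

def index_image(triples):
    """Collect an image's triples into a searchable set."""
    return {(t.subject, t.preposition, t.obj) for t in triples}

def matches(index, subject, preposition, obj):
    """Check whether the image satisfies a spatial-relation query."""
    return (subject, preposition, obj) in index

# Example: an image decomposed into two object pairs.
img = index_image([
    SpatialTriple("lamp", "above", "table"),
    SpatialTriple("chair", "beside", "table"),
])
print(matches(img, "lamp", "above", "table"))   # True
print(matches(img, "table", "above", "lamp"))   # False
```

A query such as "lamp above table" then reduces to set membership over the indexed triples, which is what makes retrieval by spatial relation tractable in this sketch.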
| Journal | CEUR Workshop Proceedings |
| Publication status | Published - 1 Jan 2019 |
| Event | 2019 AAAI Spring Symposium on Combining Machine Learning with Knowledge Engineering, AAAI-MAKE 2019 - Palo Alto, United States |
| Duration | 25 Mar 2019 → 27 Mar 2019 |
- Image Retrieval using Region Analysis (IRRA)
- spatial relations