Representation and retrieval of images by means of spatial relations between objects

Danilo Nunes, Leonardo Anjoletto Ferreira, Paulo Eduardo Santos, Adam Pease

Research output: Contribution to journal › Conference article › peer-review


Abstract

The present work addresses the challenge of integrating low-level information with high-level knowledge (known as the semantic gap) that exists in content-based image retrieval by introducing an approach that describes images by means of spatial relations. The proposed approach, called Image Retrieval using Region Analysis (IRRA), relies on decomposing images into pairs of objects. This method generates a representation composed of n triples, each containing a noun, a preposition and another noun, which paves the way for image retrieval based on spatial relations. Results for an indoor/outdoor classifier show that neural networks alone achieve 88% precision and recall, but when combined with an ontology this result increases by 10 percentage points, reaching 98% precision and recall.
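The triple representation described above can be sketched as a small index that maps (noun, preposition, noun) triples to images and answers spatial-relation queries. This is an illustrative sketch only, not the authors' IRRA implementation; all class and variable names here are hypothetical.

```python
# Minimal sketch of indexing images by (noun, preposition, noun)
# spatial-relation triples. Hypothetical names; not the IRRA code.

from collections import defaultdict

class TripleIndex:
    """Maps spatial-relation triples to the images that contain them."""

    def __init__(self):
        self._index = defaultdict(set)  # triple -> set of image ids

    def add_image(self, image_id, triples):
        """Register an image described by a list of (noun, prep, noun) triples."""
        for triple in triples:
            self._index[triple].add(image_id)

    def query(self, noun1, preposition, noun2):
        """Return ids of images containing the given spatial relation."""
        return self._index.get((noun1, preposition, noun2), set())

index = TripleIndex()
index.add_image("img1", [("cup", "on", "table"), ("lamp", "above", "desk")])
index.add_image("img2", [("cup", "under", "table")])
print(index.query("cup", "on", "table"))  # → {'img1'}
```

A query such as `("cup", "on", "table")` then retrieves exactly the images whose decomposition produced that triple, which is the retrieval-by-spatial-relation capability the abstract describes.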

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 2350
Publication status: Published - 1 Jan 2019
Externally published: Yes
Event: 2019 AAAI Spring Symposium on Combining Machine Learning with Knowledge Engineering, AAAI-MAKE 2019 - Palo Alto, United States
Duration: 25 Mar 2019 - 27 Mar 2019

Keywords

  • Image Retrieval using Region Analysis (IRRA)
  • spatial relations
