Contrastive Visual and Language Learning for Visual Relationship Detection

Thanh Tran, Maëlic Neau, Paulo E. Santos, David Powers

Research output: Contribution to conference › Paper › peer-review

Abstract


Visual Relationship Detection aims to understand interactions between real-world objects by grounding visual concepts in compositional visual relation triples of the form (subject, predicate, object). Previous works have explored contrastive learning to implicitly predict predicates from the relevant image regions. However, these models often directly exploit in-distribution spatial and language co-occurrence biases during training, which prevents them from generalizing to out-of-distribution compositions. In this work, we examine whether contrastive vision and language models pre-trained on large-scale external image and text datasets can assist the detection of compositional visual relationships. To this end, we propose a semi-supervised contrastive fine-tuning approach for the visual relationship detection task. The results show that fine-tuned models pre-trained on larger datasets do not yield better performance on visual relationship detection, and that larger models can perform worse than their smaller counterparts.
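The contrastive formulation described in the abstract can be illustrated with a minimal, hypothetical sketch: a pre-trained CLIP-style vision and language model scores verbalised (subject, predicate, object) candidates against the cropped union region of the two objects. This is not the paper's code; the model checkpoint, prompt template, and helper function below are assumptions for illustration only, and the paper's semi-supervised fine-tuning step is not shown.

```python
# Minimal sketch (assumptions, not the paper's pipeline): rank candidate predicates
# for one subject-object pair with a pre-trained CLIP model, given a crop of the
# union region of the two bounding boxes.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def score_predicates(union_crop: Image.Image, subject: str, obj: str, predicates):
    """Zero-shot ranking of candidate predicates for one (subject, ?, object) pair."""
    # Verbalise each candidate triple as a short caption, e.g. "a person riding a horse".
    prompts = [f"a {subject} {pred} a {obj}" for pred in predicates]
    inputs = processor(text=prompts, images=union_crop, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image holds scaled image-text similarities; softmax gives a
    # distribution over the candidate predicates for this crop.
    probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
    return sorted(zip(predicates, probs.tolist()), key=lambda p: -p[1])

# Hypothetical usage with Visual Genome-style predicates:
# crop = Image.open("union_region.jpg")
# print(score_predicates(crop, "person", "horse", ["riding", "holding", "near"]))
```

A fine-tuning variant would update the image and text encoders with a contrastive loss over annotated triples rather than relying on the frozen zero-shot scores shown here.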
Original language: English
Number of pages: 8
Publication status: Published - Dec 2022
Event: 20th Annual Workshop of the Australasian Language Technology Association - Adelaide, Australia
Duration: 14 Dec 2022 – 16 Dec 2022

Conference

Conference: 20th Annual Workshop of the Australasian Language Technology Association
Abbreviated title: ALTA 2022
Country/Territory: Australia
City: Adelaide
Period: 14/12/22 – 16/12/22

Keywords

  • Contrastive Visual and Language Learning
  • Visual Relationship Detection
  • Visual Question Answering (VQA)
