Abstract
Visual Relationship Detection aims to understand interactions between real-world objects by grounding visual concepts to compositional visual relation triples, written in the form (subject, predicate, object). Previous works have explored the use of contrastive learning to implicitly predict predicates from the relevant image regions. However, these models often directly exploit in-distribution spatial and language co-occurrence biases during training, which prevents them from generalizing to out-of-distribution compositions. In this work, we examine whether contrastive vision and language models pre-trained on large-scale external image and text datasets can assist the detection of compositional visual relationships. To this end, we propose a semi-supervised contrastive fine-tuning approach for the visual relationship detection task. The results show that fine-tuned models pre-trained on larger datasets do not yield better performance on visual relationship detection, and larger models can perform worse than their smaller counterparts.
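The abstract describes contrastive fine-tuning of a pre-trained vision-language model so that image regions are aligned with text renderings of relation triples. The sketch below illustrates one plausible form of such a step, assuming a CLIP-style model from the Hugging Face `transformers` library; the prompt template, loss, and model choice are illustrative assumptions and not necessarily the paper's exact setup (the semi-supervised component is omitted).

```python
# A minimal sketch of contrastive fine-tuning for visual relationship detection,
# assuming a CLIP-style vision-language model. Prompt template, loss, and model
# are assumptions for illustration, not the paper's exact configuration.
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


def relation_prompt(subj, pred, obj):
    # Render a (subject, predicate, object) triple as a text prompt.
    return f"a photo of a {subj} {pred} a {obj}"


def contrastive_step(region_images, triples, optimizer, temperature=0.07):
    """One fine-tuning step: align cropped relation regions with text
    renderings of their ground-truth triples via a symmetric InfoNCE loss."""
    texts = [relation_prompt(*t) for t in triples]
    inputs = processor(text=texts, images=region_images,
                       return_tensors="pt", padding=True)
    outputs = model(**inputs)
    img = F.normalize(outputs.image_embeds, dim=-1)
    txt = F.normalize(outputs.text_embeds, dim=-1)
    logits = img @ txt.t() / temperature
    labels = torch.arange(len(triples))
    # Symmetric contrastive loss over image-to-text and text-to-image matching.
    loss = (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading, each mini-batch pairs cropped relation regions with their ground-truth triple prompts, and the model is trained to score matching pairs above all in-batch mismatches; at inference, a predicate could be predicted by ranking candidate triple prompts against a region embedding.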
Original language | English |
---|---|
Number of pages | 8 |
Publication status | Published - Dec 2022 |
Event | 20th Annual Workshop of the Australasian Language Technology Association, Adelaide, Australia (14 Dec 2022 → 16 Dec 2022) |
Conference
Conference | 20th Annual Workshop of the Australasian Language Technology Association |
---|---|
Abbreviated title | ALTA 2022 |
Country/Territory | Australia |
City | Adelaide |
Period | 14/12/22 → 16/12/22 |
Keywords
- Contrastive Visual and Language Learning
- Visual Relationship Detection
- Visual Question Answering (VQA)