Exploring the Semantics for Visual Relationship Detection

Wentong Liao (Corresponding Author), Cuiling Lan, Wenjun Zeng, Michael Ying Yang, Bodo Rosenhahn

Research output: Contribution to journal › Article › Academic


Abstract

Scene graph construction / visual relationship detection from an image aims to give a precise structural description of the objects (nodes) and their relationships (edges). The mutual promotion of object detection and relationship detection is important for enhancing their individual performance. In this work, we propose a new framework, called semantics guided graph relation neural network (SGRN), for effective visual relationship detection. First, to boost the object detection accuracy, we introduce a source-target class cognoscitive transformation that transforms the features of co-occurrent objects to the target object domain to refine the visual features. Similarly, source-target cognoscitive transformations are used to refine features of objects from features of relations, and vice versa. Second, to boost the relation detection accuracy, besides the visual features of the paired objects, we embed the class probabilities of the object and subject separately to provide high-level semantic information. In addition, to reduce the search space of relationships, we design a semantics-aware relationship filter to exclude object pairs that have no relation. We evaluate our approach on the Visual Genome dataset, where it achieves state-of-the-art performance for visual relationship detection. Additionally, our approach significantly improves object detection performance (i.e., 4.2% in mAP).
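The semantics-aware relationship filter described in the abstract prunes object pairs that are unlikely to be related before relation classification. The following is a minimal, hypothetical sketch of that idea, assuming the filter can be approximated by a class-pair prior with a threshold; the function name, prior values, and threshold are illustrative and are not taken from the paper:

```python
def filter_pairs(detections, prior, threshold=0.2):
    """Keep (subject, object) index pairs whose class pair has a
    sufficiently high prior probability of being related.

    detections: list of (class_label, box) tuples from a detector.
    prior: dict mapping (subject_class, object_class) -> probability
           that some relation exists between the two classes.
    """
    kept = []
    for i, (cls_i, _) in enumerate(detections):
        for j, (cls_j, _) in enumerate(detections):
            if i == j:
                continue  # an object has no relation with itself
            # Unseen class pairs default to probability 0 and are pruned.
            if prior.get((cls_i, cls_j), 0.0) >= threshold:
                kept.append((i, j))
    return kept

# Toy prior over class pairs (illustrative values only).
prior = {
    ("person", "horse"): 0.9,   # e.g. "riding"
    ("person", "hat"): 0.8,     # e.g. "wearing"
    ("hat", "horse"): 0.05,     # rarely related
}

# Detections as (class label, box) tuples; boxes are placeholders here.
detections = [("person", None), ("horse", None), ("hat", None)]

pairs = filter_pairs(detections, prior)  # [(0, 1), (0, 2)]
```

Of the six directed pairs among the three detections, only the two semantically plausible ones survive, so the downstream relation classifier runs on a much smaller candidate set.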
Original language: English
Pages (from-to): 1-13
Number of pages: 13
Journal: Arxiv.org
Publication status: Published - 3 Apr 2019


Keywords

  • cs.CV
  • ITC-GOLD

Cite this

Liao, W., Lan, C., Zeng, W., Yang, M. Y., & Rosenhahn, B. (2019). Exploring the Semantics for Visual Relationship Detection. Arxiv.org, 1-13.
Full text: https://ezproxy2.utwente.nl/login?url=https://library.itc.utwente.nl/login/2019/scie/yang_exp.pdf