TY - GEN
T1 - Target-tailored source-transformation for scene graph generation
AU - Liao, Wentong
AU - Lan, Cuiling
AU - Yang, Michael Ying
AU - Zeng, Wenjun
AU - Rosenhahn, Bodo
N1 - Funding Information:
This work was supported by the Center for Digital Innovations (ZDIN), Federal Ministry of Education and Research (BMBF), Germany under the project LeibnizKILabor (grant no. 01DD20003) and the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122).
Publisher Copyright:
© 2021 IEEE.
PY - 2021/6/19
Y1 - 2021/6/19
N2 - Scene graph generation aims to provide a semantic and structural description of an image, denoting the objects (with nodes) and their relationships (with edges). The best performing works to date are based on exploiting the context surrounding objects or relations, e.g., by passing information among objects. In these approaches, transforming the representation of source objects is a critical step for extracting information for use by target objects. In this paper, we argue that a source object should give what a target object needs, providing different information to different targets rather than contributing the same information to all of them. To achieve this goal, we propose a Target-Tailored Source-Transformation (TTST) method to propagate information among object proposals and relations. In particular, for a source object proposal that contributes information to other target objects, we transform the source object feature into the target object feature domain by taking both the source and the target into account simultaneously. We further explore a more powerful representation by integrating a language prior with the visual context in the transformation for scene graph generation. In this way, the target object is able to extract target-specific information from the source object and source relation to refine its representation. Our framework is validated on the Visual Genome benchmark and demonstrates state-of-the-art performance for scene graph generation. The experimental results show that object detection and visual relationship detection are mutually promoted by our method. The code will be released upon acceptance.
AB - Scene graph generation aims to provide a semantic and structural description of an image, denoting the objects (with nodes) and their relationships (with edges). The best performing works to date are based on exploiting the context surrounding objects or relations, e.g., by passing information among objects. In these approaches, transforming the representation of source objects is a critical step for extracting information for use by target objects. In this paper, we argue that a source object should give what a target object needs, providing different information to different targets rather than contributing the same information to all of them. To achieve this goal, we propose a Target-Tailored Source-Transformation (TTST) method to propagate information among object proposals and relations. In particular, for a source object proposal that contributes information to other target objects, we transform the source object feature into the target object feature domain by taking both the source and the target into account simultaneously. We further explore a more powerful representation by integrating a language prior with the visual context in the transformation for scene graph generation. In this way, the target object is able to extract target-specific information from the source object and source relation to refine its representation. Our framework is validated on the Visual Genome benchmark and demonstrates state-of-the-art performance for scene graph generation. The experimental results show that object detection and visual relationship detection are mutually promoted by our method. The code will be released upon acceptance.
UR - https://openaccess.thecvf.com/content/CVPR2021W/MULA/html/Liao_Target-Tailored_Source-Transformation_for_Scene_Graph_Generation_CVPRW_2021_paper.html
U2 - 10.1109/CVPRW53098.2021.00182
DO - 10.1109/CVPRW53098.2021.00182
M3 - Conference contribution
SP - 1663
EP - 1671
BT - IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
PB - IEEE
T2 - IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021
Y2 - 19 June 2021 through 25 June 2021
ER -