On support relations and semantic scene graphs

Michael Ying Yang, Wentong Liao, Hanno Ackermann, Bodo Rosenhahn

Research output: Contribution to journal › Article › Academic › peer-review

34 Citations (Scopus)
28 Downloads (Pure)

Abstract

Scene understanding is one of the essential and challenging topics in computer vision and photogrammetry. A scene graph provides valuable information for such scene understanding. This paper proposes a novel framework for the automatic generation of semantic scene graphs that interpret indoor environments. First, a Convolutional Neural Network is used to detect objects of interest in the given image. Then, the precise support relations between objects are inferred by exploiting two important sources of auxiliary information in indoor environments: the physical stability of the objects and the prior support knowledge between object categories. Finally, a semantic scene graph describing the contextual relations within a cluttered indoor scene is constructed. In contrast to previous methods for extracting support relations, our approach provides more accurate results. Furthermore, we do not rely on pixel-wise segmentation to obtain objects, which is computationally costly. We also propose several methods to evaluate the generated scene graphs, an evaluation that has been lacking in this community. Our experiments are carried out on the NYUv2 dataset. The experimental results demonstrate that our approach outperforms state-of-the-art methods in inferring support relations, and that the estimated scene graphs match the ground truth accurately.
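
The pipeline the abstract describes (detect objects with a CNN, infer pairwise support relations from physical stability combined with a category-level support prior, then assemble the relations into a scene graph) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the DetectedObject and SceneGraph types, the additive scoring of stability and prior, and the threshold are assumptions made for the example, and the CNN detector is replaced by hardcoded detections.

# Illustrative sketch only, not the authors' implementation: the types, the
# additive scoring, and the threshold are assumptions; CNN detections are hardcoded.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class DetectedObject:
    label: str                       # object category, e.g. "cup"
    box: Tuple[int, int, int, int]   # 2D bounding box (x1, y1, x2, y2)

@dataclass
class SceneGraph:
    objects: List[DetectedObject]
    supports: List[Tuple[int, int]] = field(default_factory=list)  # (supported, supporter)

def infer_support(objects: List[DetectedObject],
                  stability: Dict[Tuple[int, int], float],
                  prior: Dict[Tuple[str, str], float],
                  threshold: float = 0.5) -> List[Tuple[int, int]]:
    """Pick, for each object, the supporter that maximises the sum of a
    physical-stability score and a category-level support prior; objects
    whose best score stays below the threshold get no support edge."""
    edges = []
    for i, obj in enumerate(objects):
        scored = [(stability.get((i, j), 0.0) + prior.get((obj.label, sup.label), 0.0), j)
                  for j, sup in enumerate(objects) if j != i]
        if scored:
            best_score, best_j = max(scored)
            if best_score > threshold:
                edges.append((i, best_j))
    return edges

# Toy usage: a cup standing on a table, with hand-set stability and prior scores.
objects = [DetectedObject("table", (0, 100, 300, 400)),
           DetectedObject("cup", (120, 60, 160, 110))]
stability = {(1, 0): 0.9, (0, 1): 0.1}                # cup-on-table is physically stable
prior = {("cup", "table"): 0.8, ("table", "cup"): 0.0}
graph = SceneGraph(objects, infer_support(objects, stability, prior))
print(graph.supports)  # [(1, 0)]: the cup is supported by the table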
Original language: English
Pages (from-to): 15-25
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Volume: 131
DOIs
Publication status: Published - 1 Sept 2017

Keywords

  • ITC-ISI-JOURNAL-ARTICLE
  • 2023 OA procedure
