On support relations and semantic scene graphs

Michael Ying Yang, Wentong Liao, Hanno Ackermann, Bodo Rosenhahn

Research output: Contribution to journal › Article › Academic › peer-review

5 Citations (Scopus)

Abstract

Scene understanding is one of the essential and challenging topics in computer vision and photogrammetry. A scene graph provides valuable information for such scene understanding. This paper proposes a novel framework for the automatic generation of semantic scene graphs that interpret indoor environments. First, a Convolutional Neural Network is used to detect objects of interest in the given image. Then, the precise support relations between objects are inferred using two important sources of auxiliary information in indoor environments: physical stability and prior knowledge of support relations between object categories. Finally, a semantic scene graph describing the contextual relations within a cluttered indoor scene is constructed. In contrast to previous methods for extracting support relations, our approach provides more accurate results. Furthermore, we do not use pixel-wise segmentation to obtain objects, which is computationally costly. We also propose different methods to evaluate the generated scene graphs, which this community lacks. Our experiments are carried out on the NYUv2 dataset. The experimental results demonstrate that our approach outperforms state-of-the-art methods in inferring support relations, and the estimated scene graphs agree closely with the ground truth.
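The pipeline described in the abstract (detect objects, score candidate support relations by combining a physical-stability cue with a category-level support prior, then assemble the relations into a scene graph) can be sketched as follows. This is a minimal illustrative sketch only: the prior table, the box-based stability score, and all thresholds and weights are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of the support-relation pipeline from the abstract.
# The detector output, SUPPORT_PRIOR values, and stability heuristic
# are illustrative assumptions, not the paper's actual method.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str
    box: tuple  # (x1, y1, x2, y2) in image coordinates; y grows downward


# Hypothetical category-level prior: P(object is supported by category).
SUPPORT_PRIOR = {
    ("cup", "table"): 0.9,
    ("table", "floor"): 0.95,
    ("cup", "floor"): 0.1,
}


def stability_score(obj, support):
    """Toy physical-stability cue: an object resting on a supporter should
    sit directly above it, with a small vertical gap between the object's
    bottom edge and the supporter's top edge."""
    ox1, oy1, ox2, oy2 = obj.box
    sx1, sy1, sx2, sy2 = support.box
    overlap = max(0.0, min(ox2, sx2) - max(ox1, sx1))  # horizontal overlap
    width = max(ox2 - ox1, 1e-6)
    gap = abs(sy1 - oy2)  # vertical gap (20 px tolerance is an assumption)
    return (overlap / width) * max(0.0, 1.0 - gap / 20.0)


def infer_support(detections):
    """For each object, pick the supporter maximizing stability * prior;
    the resulting triples form the edges of the semantic scene graph."""
    relations = []
    for obj in detections:
        best, best_score = None, 0.0
        for cand in detections:
            if cand is obj:
                continue
            prior = SUPPORT_PRIOR.get((obj.label, cand.label), 0.05)
            score = stability_score(obj, cand) * prior
            if score > best_score:
                best, best_score = cand, score
        if best is not None:
            relations.append((obj.label, "supported_by", best.label))
    return relations


dets = [
    Detection("cup", (40, 30, 60, 50)),     # cup sits on the table
    Detection("table", (10, 50, 100, 90)),  # table stands on the floor
    Detection("floor", (0, 90, 200, 120)),
]
print(infer_support(dets))
# → [('cup', 'supported_by', 'table'), ('table', 'supported_by', 'floor')]
```

Combining the two cues multiplicatively means a relation needs both geometric plausibility and prior support between the categories; the floor, having no plausible supporter, correctly ends up as the root of the graph.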
Original language: English
Pages (from-to): 15-25
Journal: ISPRS journal of photogrammetry and remote sensing
Volume: 131
DOI: 10.1016/j.isprsjprs.2017.07.010
Publication status: Published - 1 Sep 2017

Keywords

  • ITC-ISI-JOURNAL-ARTICLE

Cite this

Yang, Michael Ying ; Liao, Wentong ; Ackermann, Hanno ; Rosenhahn, Bodo. / On support relations and semantic scene graphs. In: ISPRS journal of photogrammetry and remote sensing. 2017 ; Vol. 131. pp. 15-25.
@article{f8c079f5a8e54a9c8abe04690be41d55,
title = "On support relations and semantic scene graphs",
abstract = "Scene understanding is one of the essential and challenging topics in computer vision and photogrammetry. A scene graph provides valuable information for such scene understanding. This paper proposes a novel framework for the automatic generation of semantic scene graphs that interpret indoor environments. First, a Convolutional Neural Network is used to detect objects of interest in the given image. Then, the precise support relations between objects are inferred using two important sources of auxiliary information in indoor environments: physical stability and prior knowledge of support relations between object categories. Finally, a semantic scene graph describing the contextual relations within a cluttered indoor scene is constructed. In contrast to previous methods for extracting support relations, our approach provides more accurate results. Furthermore, we do not use pixel-wise segmentation to obtain objects, which is computationally costly. We also propose different methods to evaluate the generated scene graphs, which this community lacks. Our experiments are carried out on the NYUv2 dataset. The experimental results demonstrate that our approach outperforms state-of-the-art methods in inferring support relations, and the estimated scene graphs agree closely with the ground truth.",
keywords = "ITC-ISI-JOURNAL-ARTICLE",
author = "Yang, {Michael Ying} and Wentong Liao and Hanno Ackermann and Bodo Rosenhahn",
year = "2017",
month = "9",
day = "1",
doi = "10.1016/j.isprsjprs.2017.07.010",
language = "English",
volume = "131",
pages = "15--25",
journal = "ISPRS journal of photogrammetry and remote sensing",
issn = "0924-2716",
publisher = "Elsevier",
}

On support relations and semantic scene graphs. / Yang, Michael Ying; Liao, Wentong; Ackermann, Hanno; Rosenhahn, Bodo.

In: ISPRS journal of photogrammetry and remote sensing, Vol. 131, 01.09.2017, p. 15-25.

TY - JOUR

T1 - On support relations and semantic scene graphs

AU - Yang, Michael Ying

AU - Liao, Wentong

AU - Ackermann, Hanno

AU - Rosenhahn, Bodo

PY - 2017/9/1

Y1 - 2017/9/1

N2 - Scene understanding is one of the essential and challenging topics in computer vision and photogrammetry. A scene graph provides valuable information for such scene understanding. This paper proposes a novel framework for the automatic generation of semantic scene graphs that interpret indoor environments. First, a Convolutional Neural Network is used to detect objects of interest in the given image. Then, the precise support relations between objects are inferred using two important sources of auxiliary information in indoor environments: physical stability and prior knowledge of support relations between object categories. Finally, a semantic scene graph describing the contextual relations within a cluttered indoor scene is constructed. In contrast to previous methods for extracting support relations, our approach provides more accurate results. Furthermore, we do not use pixel-wise segmentation to obtain objects, which is computationally costly. We also propose different methods to evaluate the generated scene graphs, which this community lacks. Our experiments are carried out on the NYUv2 dataset. The experimental results demonstrate that our approach outperforms state-of-the-art methods in inferring support relations, and the estimated scene graphs agree closely with the ground truth.

AB - Scene understanding is one of the essential and challenging topics in computer vision and photogrammetry. A scene graph provides valuable information for such scene understanding. This paper proposes a novel framework for the automatic generation of semantic scene graphs that interpret indoor environments. First, a Convolutional Neural Network is used to detect objects of interest in the given image. Then, the precise support relations between objects are inferred using two important sources of auxiliary information in indoor environments: physical stability and prior knowledge of support relations between object categories. Finally, a semantic scene graph describing the contextual relations within a cluttered indoor scene is constructed. In contrast to previous methods for extracting support relations, our approach provides more accurate results. Furthermore, we do not use pixel-wise segmentation to obtain objects, which is computationally costly. We also propose different methods to evaluate the generated scene graphs, which this community lacks. Our experiments are carried out on the NYUv2 dataset. The experimental results demonstrate that our approach outperforms state-of-the-art methods in inferring support relations, and the estimated scene graphs agree closely with the ground truth.

KW - ITC-ISI-JOURNAL-ARTICLE

UR - https://ezproxy2.utwente.nl/login?url=https://webapps.itc.utwente.nl/library/2017/isi/yang_sup.pdf

U2 - 10.1016/j.isprsjprs.2017.07.010

DO - 10.1016/j.isprsjprs.2017.07.010

M3 - Article

VL - 131

SP - 15

EP - 25

JO - ISPRS journal of photogrammetry and remote sensing

JF - ISPRS journal of photogrammetry and remote sensing

SN - 0924-2716

ER -