Estimating layout of cluttered indoor scenes using trajectory-based priors

Muhammad Shoaib (Corresponding Author), Michael Ying Yang, Bodo Rosenhahn, Joern Ostermann

Research output: Contribution to journal › Article › Academic › peer-review


Given a surveillance video of a moving person, we present a novel method for estimating the layout of a cluttered indoor scene. We propose that the trajectories of a moving person can be used to generate features for segmenting an indoor scene into different areas of interest. We assume a static, uncalibrated camera. Using pixel-level color and perspective cues of the scene, each pixel is assigned to a particular class: a sitting place, the ground floor, or a static background area such as walls and the ceiling. The pixel-level cues are integrated locally into a conditional random field with an ordering constraint that enforces the global topological order of the classes, e.g., sitting objects and background areas lie above the ground floor. The proposed method yields very accurate segmentation results on challenging real-world scenes. We focus on videos of people walking in the scene and show the effectiveness of our approach through quantitative and qualitative results. The proposed method outperforms state-of-the-art scene layout estimation methods: we correctly segment 90.3% of background, 89.4% of sitting areas, and 74.7% of the ground floor.
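The ordering constraint described above can be illustrated with a simplified per-column labeling sketch. This is not the paper's CRF inference; the function name, the three-label setup, and the per-column dynamic program are illustrative assumptions. It only demonstrates the core idea: given per-pixel class costs, choose labels so that background rows sit above sitting-area rows, which sit above ground-floor rows.

```python
def ordered_column_labels(unary):
    """Assign ordered labels (0=background, 1=sitting, 2=ground) to one image column.

    unary: list of [bg_cost, sit_cost, ground_cost] per pixel row, top to bottom.
    The topological ordering constraint forces all background rows above all
    sitting rows, and all sitting rows above all ground rows, so a labeling
    is fully described by two breakpoints b1 <= b2.
    This is a simplified, hypothetical sketch of the ordering idea only.
    """
    H = len(unary)
    # prefix[r][c] = sum of unary[0:r][c]; makes any row-interval cost O(1).
    prefix = [[0.0, 0.0, 0.0]]
    for row in unary:
        prefix.append([prefix[-1][c] + row[c] for c in range(3)])

    best_cost, best_split = float("inf"), (0, 0)
    for b1 in range(H + 1):              # background occupies rows [0, b1)
        for b2 in range(b1, H + 1):      # sitting occupies rows [b1, b2)
            cost = (prefix[b1][0]                      # background segment
                    + prefix[b2][1] - prefix[b1][1]    # sitting segment
                    + prefix[H][2] - prefix[b2][2])    # ground segment
            if cost < best_cost:
                best_cost, best_split = cost, (b1, b2)

    b1, b2 = best_split
    return [0] * b1 + [1] * (b2 - b1) + [2] * (H - b2)
```

Exhaustively searching the two breakpoints is quadratic in the column height, which is cheap for a single column; the paper's full CRF additionally couples neighboring columns, which this sketch omits.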
Original language: English
Pages (from-to): 870-883
Number of pages: 14
Journal: Image and Vision Computing
Issue number: 11
Early online date: 27 Jul 2014
Publication status: Published - 1 Nov 2014
Externally published: Yes




