Temporally consistent horizon lines

Florian Kluger, Hanno Ackermann, Michael Ying Yang*, Bodo Rosenhahn

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

3 Citations (Scopus)

Abstract

The horizon line is an important geometric feature for many image processing and scene understanding tasks in computer vision. For instance, in navigation of autonomous vehicles or driver assistance, it can be used to improve 3D reconstruction as well as semantic interpretation of dynamic environments. While both algorithms and datasets exist for single images, the problem of horizon line estimation from video sequences has received little attention. In this paper, we show how convolutional neural networks are able to utilise the temporal consistency imposed by video sequences in order to increase the accuracy and reduce the variance of horizon line estimates. A novel CNN architecture with an improved residual convolutional LSTM is presented for temporally consistent horizon line estimation. We propose an adaptive loss function that ensures stable training as well as accurate results. Furthermore, we introduce an extension of the KITTI dataset which contains precise horizon line labels for 43,699 images across 72 video sequences. A comprehensive evaluation shows that the proposed approach consistently achieves superior performance compared with existing methods.
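As context for the task the abstract describes, a horizon line in an image is commonly parameterised by an offset and a slope. The sketch below (an illustrative assumption, not the paper's code or notation) shows how such a pair can be converted into the two points where the line crosses the left and right image borders, e.g. for drawing or evaluation:

```python
def horizon_endpoints(offset, slope, width, height):
    """Return ((x1, y1), (x2, y2)) for a horizon line.

    `offset` is the line's height at the horizontal image centre, as a
    fraction of the image height (0 = top, 1 = bottom); `slope` is dy/dx
    in pixels. Both names are illustrative, not the paper's notation.
    """
    cx = width / 2.0                  # horizontal image centre
    cy = offset * height              # line height at the centre
    y1 = cy - slope * cx              # height at the left border (x = 0)
    y2 = cy + slope * (width - cx)    # height at the right border (x = w)
    return (0.0, y1), (float(width), y2)

# A level horizon at mid-height of a 1242x375 frame (a typical KITTI
# resolution) crosses both borders at y = 187.5:
p1, p2 = horizon_endpoints(0.5, 0.0, 1242, 375)
```

A video-based estimator such as the one the abstract describes would predict one such parameter pair per frame, with the recurrent (ConvLSTM) component encouraging the estimates to vary smoothly across consecutive frames.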

Original language: English
Title of host publication: 2020 IEEE International Conference on Robotics and Automation, ICRA 2020
Place of Publication: Paris
Publisher: IEEE
Pages: 3161-3167
Number of pages: 7
ISBN (Electronic): 9781728173955
ISBN (Print): 978-1-7281-7396-2
DOIs
Publication status: Published - 31 May 2020

Publication series

Name: Proceedings - IEEE International Conference on Robotics and Automation
ISSN (Print): 1050-4729

