Can ground truth label propagation from video help semantic segmentation?

Siva Karthik Mustikovela, Michael Ying Yang, Carsten Rother

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

1 Citation (Scopus)

Abstract

Training convolutional neural networks (CNNs) for state-of-the-art semantic segmentation requires dense pixelwise ground-truth (GT) labeling, which is expensive and involves extensive human effort. In this work, we study the possibility of using auxiliary ground truth, so-called pseudo ground truth (PGT), to improve performance. The PGT is obtained by propagating the labels of a GT frame to its subsequent frames in the video using a simple CRF-based cue-integration framework. Our main contribution is to demonstrate the use of noisy PGT, along with GT, to improve the performance of a CNN. We perform a systematic analysis to find the right kind of PGT to add to the GT when training a CNN. In this regard, we explore three aspects of PGT that influence the learning of a CNN: (i) the PGT labeling has to be of good quality; (ii) the PGT images have to be different from the GT images; (iii) the PGT has to be trusted differently than GT. We conclude that PGT which is diverse from the GT images and has good labeling quality can indeed help improve the performance of a CNN. Also, when the PGT is several folds larger than the GT, weighing down the trust on the PGT helps improve accuracy. Finally, we show that using PGT along with GT, the performance of a Fully Convolutional Network (FCN) on the CamVid dataset is increased by 2.7% in IoU accuracy. We believe such an approach can be used to train CNNs for semantic video segmentation, where sequentially labeled image frames are needed. To this end, we provide recommendations for using PGT strategically for semantic segmentation, bypassing the need for extensive human labeling effort.
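The abstract's key training idea, trusting PGT less than GT, amounts to down-weighting the loss on pixels whose labels were propagated rather than hand-annotated. A minimal NumPy sketch of such a weighted pixel loss follows; the function name and the 0.5 trust factor are illustrative assumptions, not values from the paper:

```python
import numpy as np

def weighted_pixel_loss(log_probs, labels, is_pgt, pgt_weight=0.5):
    """Cross-entropy over pixels, down-weighting pseudo-ground-truth (PGT).

    log_probs:  (N, C) per-pixel log-probabilities from the network
    labels:     (N,) integer class labels (hand-annotated GT or propagated PGT)
    is_pgt:     (N,) bool mask, True where the label came from PGT
    pgt_weight: trust factor < 1 applied to PGT pixels (assumed value)
    """
    # Per-pixel negative log-likelihood of the assigned label
    nll = -log_probs[np.arange(len(labels)), labels]
    # GT pixels keep full weight; PGT pixels are trusted less
    weights = np.where(is_pgt, pgt_weight, 1.0)
    return float(np.sum(weights * nll) / np.sum(weights))
```

Down-weighting matters most when PGT far outnumbers GT, since otherwise the noisy propagated labels dominate the gradient signal.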

Original language: English
Title of host publication: Computer Vision – ECCV 2016 Workshops, Proceedings
Editors: Gang Hua, Herve Jegou
Publisher: Springer
Pages: 804-820
Number of pages: 17
ISBN (Print): 9783319494081
DOI: 10.1007/978-3-319-49409-8_66
Publication status: Published - 1 Jan 2016
Event: 14th European Conference on Computer Vision, ECCV 2016 - Amsterdam, Netherlands
Duration: 8 Oct 2016 – 16 Oct 2016
Conference number: 14

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9915 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 14th European Conference on Computer Vision, ECCV 2016
Abbreviated title: ECCV 2016
Country: Netherlands
City: Amsterdam
Period: 8/10/16 – 16/10/16


Cite this

Mustikovela, S. K., Yang, M. Y., & Rother, C. (2016). Can ground truth label propagation from video help semantic segmentation? In G. Hua, & H. Jegou (Eds.), Computer Vision – ECCV 2016 Workshops, Proceedings (pp. 804-820). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 9915 LNCS). Springer. https://doi.org/10.1007/978-3-319-49409-8_66
@inproceedings{e50471ed5bc540bfb431d5803fb4da67,
title = "Can ground truth label propagation from video help semantic segmentation?",
abstract = "For state-of-the-art semantic segmentation task, training convolutional neural networks (CNNs) requires dense pixelwise ground truth (GT) labeling, which is expensive and involves extensive human effort. In this work, we study the possibility of using auxiliary ground truth, so-called pseudo ground truth (PGT) to improve the performance. The PGT is obtained by propagating the labels of a GT frame to its subsequent frames in the video using a simple CRF-based, cue integration framework. Our main contribution is to demonstrate the use of noisy PGT along with GT to improve the performance of a CNN. We perform a systematic analysis to find the right kind of PGT that needs to be added along with the GT for training a CNN. In this regard, we explore three aspects of PGT which influence the learning of a CNN: (i) the PGT labeling has to be of good quality; (ii) the PGT images have to be different compared to the GT images; (iii) the PGT has to be trusted differently than GT. We conclude that PGT which is diverse from GT images and has good quality of labeling can indeed help improve the performance of a CNN. Also, when PGT is multiple folds larger than GT, weighing down the trust on PGT helps in improving the accuracy. Finally, We show that using PGT along with GT, the performance of Fully Convolutional Network (FCN) on Camvid data is increased by 2.7{\%} on IoU accuracy. We believe such an approach can be used to train CNNs for semantic video segmentation where sequentially labeled image frames are needed. To this end, we provide recommendations for using PGT strategically for semantic segmentation and hence bypass the need for extensive human efforts in labeling.",
author = "Mustikovela, {Siva Karthik} and Yang, {Michael Ying} and Carsten Rother",
year = "2016",
month = "1",
day = "1",
doi = "10.1007/978-3-319-49409-8_66",
language = "English",
isbn = "9783319494081",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
volume = "9915 LNCS",
publisher = "Springer",
pages = "804--820",
editor = "Gang Hua and Herve Jegou",
booktitle = "Computer Vision – ECCV 2016 Workshops, Proceedings",
url = "https://ezproxy2.utwente.nl/login?url=https://library.itc.utwente.nl/login/2016/chap/yang_can.pdf",
}


