Tea Garden Detection from High-Resolution Imagery Using a Scene-Based Framework

Xin Huang (Corresponding Author), Zerun Zhu, Yansheng Li, Bo Wu, Michael Yang

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Tea cultivation has a long history in China, and it is one of the pillar industries of the Chinese agricultural economy. It is therefore necessary to map tea gardens for their ongoing management. However, previous studies have relied on fieldwork to achieve this task, which is time-consuming. In this paper, we propose a framework to map tea gardens from high-resolution remotely sensed imagery, comprising three scene-based methods: the bag-of-visual-words (BOVW) model, supervised latent Dirichlet allocation (sLDA), and the unsupervised convolutional neural network (UCNN). These methods develop direct and holistic semantic representations for tea garden scenes composed of multiple sub-objects, making them more suitable than traditional pixel-based or object-based methods, which focus on the local characteristics of pixels or objects. In the experiments undertaken in this study, the three methods were tested on four datasets from Longyan (Oolong tea), Hangzhou (Longjing tea), and Puer (Puer tea). All the methods achieved good performance, both quantitatively and visually, and the UCNN outperformed the other methods. Moreover, the addition of textural features improved the accuracy of the BOVW and sLDA models, but had no effect on the UCNN.
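
For readers unfamiliar with scene-based classification, the minimal Python sketch below illustrates the general idea behind one of the three methods, the bag-of-visual-words model: local descriptors are extracted from each image tile, quantized against a learned visual vocabulary, and each tile is then represented holistically as a word histogram and classified as a whole scene. This is an illustration only, not the authors' implementation; the tile format (H x W x bands NumPy arrays in tiles, binary labels in labels), the simple mean/std patch descriptors, and the scikit-learn k-means codebook and linear SVM are all assumptions made for the example.

# Illustrative bag-of-visual-words (BOVW) scene classification sketch.
# Assumptions (not from the paper): scene tiles are H x W x bands NumPy arrays
# in tiles, with binary labels in labels (1 = tea garden, 0 = other).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def dense_descriptors(tile, patch=8, step=8):
    """Simple local descriptors: per-band mean and std of small patches."""
    h, w, _ = tile.shape
    feats = []
    for i in range(0, h - patch + 1, step):
        for j in range(0, w - patch + 1, step):
            p = tile[i:i + patch, j:j + patch, :]
            feats.append(np.concatenate([p.mean(axis=(0, 1)), p.std(axis=(0, 1))]))
    return np.asarray(feats)

def bovw_histograms(tiles, codebook):
    """Encode each tile holistically as a normalized histogram of visual words."""
    hists = []
    for tile in tiles:
        words = codebook.predict(dense_descriptors(tile))
        counts, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
        hists.append(counts / max(counts.sum(), 1))
    return np.asarray(hists)

def train_bovw_classifier(tiles, labels, n_words=100):
    # 1) Learn the visual vocabulary by clustering local descriptors from all tiles.
    all_desc = np.vstack([dense_descriptors(t) for t in tiles])
    codebook = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_desc)
    # 2) Train a linear SVM on the per-tile word histograms.
    clf = LinearSVC().fit(bovw_histograms(tiles, codebook), labels)
    return codebook, clf

# A new tile is labelled as a whole scene:
#   codebook, clf = train_bovw_classifier(train_tiles, train_labels)
#   prediction = clf.predict(bovw_histograms([new_tile], codebook))
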
Original language: English
Pages (from-to): 723-731
Number of pages: 9
Journal: Photogrammetric Engineering and Remote Sensing (PE&RS)
Volume: 84
Issue number: 11
DOI: 10.14358/PERS.84.11.723
Publication status: Published - 1 Nov 2018

Keywords

  • ITC-ISI-JOURNAL-ARTICLE

Cite this

@article{2cc4e36ea28b4a8899497ccedb213db9,
title = "Tea Garden Detection from High-Resolution Imagery Using a Scene-Based Framework",
abstract = "Tea cultivation has a long history in China, and it is one of the pillar industries of the Chinese agricultural economy. It is therefore necessary to map tea gardens for their ongoing management. However, the previous studies have relied on fieldwork to achieve this task, which is time-consuming. In this paper, we propose a framework to map tea gardens using high-resolution remotely sensed imagery, including three scene-based methods: the bag-of-visual-words (BOVW) model, supervised latent Dirichlet allocation (sLDA), and the unsupervised convolutional neural network (UCNN). These methods can develop direct and holistic semantic representations for tea garden scenes composed of multiple sub-objects, thus they are more suitable than the traditional pixel-based or object-based methods, which focus on the local characteristics of pixels or objects. In the experiments undertaken in this study, the three different methods were tested on four datasets from Longyan (Oolong tea), Hangzhou (Longjing tea), and Puer (Puer tea). All the methods achieved a good performance, both quantitatively and visually, and the UCNN outperformed the other methods. Moreover, it was found that the addition of textural features improved the accuracy of the BOVW and sLDA models, but had no effect on the UCNN.",
keywords = "ITC-ISI-JOURNAL-ARTICLE",
author = "Xin Huang and Zerun Zhu and Yansheng Li and Bo Wu and Michael Yang",
year = "2018",
month = "11",
day = "1",
doi = "10.14358/PERS.84.11.723",
language = "English",
volume = "84",
pages = "723--731",
journal = "Photogrammetric engineering and remote sensing : PE&RS",
issn = "0099-1112",
publisher = "American Society for Photogrammetry and Remote Sensing",
number = "11",

}

Tea Garden Detection from High-Resolution Imagery Using a Scene-Based Framework. / Huang, Xin (Corresponding Author); Zhu, Zerun; Li, Yansheng; Wu, Bo; Yang, Michael.

In: Photogrammetric Engineering and Remote Sensing (PE&RS), Vol. 84, No. 11, 01.11.2018, p. 723-731.
