Tea Garden Detection from High-Resolution Imagery Using a Scene-Based Framework

Xin Huang (Corresponding Author), Zerun Zhu, Yansheng Li, Bo Wu, Michael Yang

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

Tea cultivation has a long history in China, and it is one of the pillar industries of the Chinese agricultural economy. It is therefore necessary to map tea gardens for their ongoing management. However, previous studies have relied on fieldwork to achieve this task, which is time-consuming. In this paper, we propose a framework to map tea gardens using high-resolution remotely sensed imagery, including three scene-based methods: the bag-of-visual-words (BOVW) model, supervised latent Dirichlet allocation (sLDA), and the unsupervised convolutional neural network (UCNN). These methods develop direct and holistic semantic representations for tea garden scenes composed of multiple sub-objects, making them more suitable than traditional pixel-based or object-based methods, which focus on the local characteristics of pixels or objects. In the experiments undertaken in this study, the three methods were tested on four datasets from Longyan (Oolong tea), Hangzhou (Longjing tea), and Puer (Puer tea). All the methods achieved a good performance, both quantitatively and visually, and the UCNN outperformed the other methods. Moreover, it was found that the addition of textural features improved the accuracy of the BOVW and sLDA models, but had no effect on the UCNN.
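The BOVW pipeline named in the abstract can be illustrated with a minimal sketch: local descriptors are extracted from an image, clustered into a "visual vocabulary", and each scene is then represented as a normalized histogram of visual-word counts. The sketch below is a toy illustration under stated assumptions, not the paper's implementation — the `kmeans` and `bovw_histogram` helpers and the toy 2-D "descriptors" are hypothetical stand-ins for real patch descriptors (e.g., SIFT) and a full clustering library.

```python
# Minimal bag-of-visual-words (BOVW) sketch in pure Python.
# Assumption: each "image" is a list of local feature vectors
# (stand-ins for patch descriptors such as SIFT).
import random


def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))


def kmeans(points, k, iters=10, seed=0):
    """Tiny k-means: learn k centroids (the visual vocabulary)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: dist2(p, centroids[c]))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old centroid if a cluster empties out
                centroids[j] = [sum(v) / len(cl) for v in zip(*cl)]
    return centroids


def bovw_histogram(features, vocab):
    """Assign each local feature to its nearest visual word; count and normalize."""
    hist = [0] * len(vocab)
    for f in features:
        j = min(range(len(vocab)), key=lambda c: dist2(f, vocab[c]))
        hist[j] += 1
    total = sum(hist)
    return [h / total for h in hist]
```

In a real scene-classification setting, the vocabulary is learned from descriptors pooled over all training images, and the resulting histograms are fed to a supervised classifier; the paper's sLDA variant instead models the word histograms with a supervised topic model.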
Original language: English
Pages (from-to): 723-731
Number of pages: 9
Journal: Photogrammetric Engineering and Remote Sensing
Volume: 84
Issue number: 11
Publication status: Published - 1 Nov 2018

Keywords

  • ITC-ISI-JOURNAL-ARTICLE

