Abstract
Tea cultivation has a long history in China, and the tea industry is one of the pillars of the Chinese agricultural economy. Mapping tea gardens is therefore necessary for their ongoing management. However, previous studies have relied on fieldwork to achieve this task, which is time-consuming. In this paper, we propose a framework for mapping tea gardens from high-resolution remotely sensed imagery, comprising three scene-based methods: the bag-of-visual-words (BOVW) model, supervised latent Dirichlet allocation (sLDA), and the unsupervised convolutional neural network (UCNN). These methods develop direct and holistic semantic representations of tea garden scenes composed of multiple sub-objects, making them more suitable than traditional pixel-based or object-based methods, which focus on the local characteristics of pixels or objects. In the experiments undertaken in this study, the three methods were tested on four datasets from Longyan (Oolong tea), Hangzhou (Longjing tea), and Puer (Puer tea). All three methods achieved good performance, both quantitatively and visually, and the UCNN outperformed the others. Moreover, adding textural features improved the accuracy of the BOVW and sLDA models, but had no effect on the UCNN.
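To make the scene-based idea concrete, below is a minimal Python sketch of a BOVW scene classifier: local descriptors from each image scene are quantized against a learned visual vocabulary, and each scene is represented by a word histogram fed to a supervised classifier. This is an illustrative sketch only; the descriptor dimensionality, vocabulary size, and SVM classifier are assumptions for demonstration, not the paper's exact configuration.

```python
# Minimal BOVW scene-classification sketch (illustrative assumptions only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-in: 40 scenes, each with 200 local descriptors of dimension 16
# (in practice these would be spectral/textural features from image patches).
scenes = [rng.normal(size=(200, 16)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)  # 1 = tea garden, 0 = other (toy labels)

# 1. Build the visual vocabulary by clustering all local descriptors.
vocab_size = 32
kmeans = KMeans(n_clusters=vocab_size, n_init=10, random_state=0)
kmeans.fit(np.vstack(scenes))

# 2. Represent each scene as a normalized histogram of visual words.
def bovw_histogram(descriptors):
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=vocab_size).astype(float)
    return hist / hist.sum()

X = np.array([bovw_histogram(s) for s in scenes])

# 3. Train a supervised classifier on the scene-level histograms.
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```

The key design point this illustrates is that the classification unit is the whole scene histogram rather than an individual pixel or segmented object, which is what lets the model capture a composite land-cover pattern such as a tea garden.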
Original language | English |
---|---|
Pages (from-to) | 723-731 |
Number of pages | 9 |
Journal | Photogrammetric Engineering & Remote Sensing |
Volume | 84 |
Issue number | 11 |
DOIs | |
Publication status | Published - 1 Nov 2018 |
Keywords
- ITC-ISI-JOURNAL-ARTICLE