A comparison of point and bounding box annotation methods to detect wild animals using remote sensing and deep learning

Zeyu Xu, Tiejun Wang, A.K. Skidmore, Richard Hugh Lamprey

Research output: Contribution to conference › Abstract › Academic

Abstract

Points and bounding boxes are the two most widely used annotation methods for deep learning-based wild animal detection using remote sensing. However, the impact of these two annotation methods on the performance of deep learning models is still unknown. Here, using the publicly available Aerial Elephant Dataset, we evaluate the effect of the two annotation methods on model accuracy for two commonly used neural networks (YOLO and U-Net). The results show that with YOLO, there is no statistically significant difference between the point-based and bounding box-based annotation methods, with overall F1-scores of 82.7% and 82.8%, respectively (df = 4, P = 0.683, t-test). With U-Net, however, the bounding box-based annotation method (overall F1-score of 82.7%) is significantly more accurate than the point-based annotation method (overall F1-score of 80.0%) (df = 4, P < 0.001, t-test). Our study demonstrates that the effectiveness of the two annotation methods depends on the choice of deep learning model. These results suggest that the deep learning method should be taken into account when choosing an annotation technique for animal detection in remote sensing images.
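The reported df = 4 implies five paired training runs per comparison. As a minimal illustrative sketch (not the authors' code), the snippet below shows how such paired overall F1-scores could be compared with a paired t-test in SciPy; the per-run F1 values and variable names are hypothetical placeholders, not the study's data.

```python
# Sketch of the statistical comparison described in the abstract:
# a paired t-test over five repeated runs (df = n - 1 = 4) per detector.
# All F1 values below are hypothetical placeholders.
from scipy.stats import ttest_rel

# Hypothetical per-run overall F1-scores for one detector (e.g., YOLO).
# A point annotation stores only an animal's location (x, y); a bounding
# box additionally stores its extent (xmin, ymin, xmax, ymax).
f1_point = [0.826, 0.828, 0.827, 0.825, 0.829]  # point annotations
f1_bbox = [0.828, 0.827, 0.829, 0.828, 0.828]   # bounding-box annotations

t_stat, p_value = ttest_rel(f1_point, f1_bbox)  # paired t-test, df = 4
print(f"t = {t_stat:.3f}, P = {p_value:.3f}")
```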
Original language: English
Publication status: Published - 11 Mar 2024
Event: EGU General Assembly 2024 - Vienna, Austria
Duration: 14 Apr 2024 - 19 Apr 2024

Conference

Conference: EGU General Assembly 2024
Country/Territory: Austria
City: Vienna
Period: 14/04/24 - 19/04/24
