How the small object detection via machine learning and UAS-based remote-sensing imagery can support the achievement of SDG2: A case study of vole burrows

Haitham Ezzy, Motti Charter, Antonello Bonfante, Anna Brook

Research output: Contribution to journal › Article › peer-review

Abstract

Small mammals, and particularly rodents, are common inhabitants of farmlands, where they play key roles in the ecosystem; when overabundant, however, they can be major pests, able to reduce crop production and farmers' incomes, with tangible effects on the achievement of Sustainable Development Goal no. 2 (SDG2, Zero Hunger) of the United Nations. Farmers currently lack a standardized, accurate method of detecting the presence, abundance, and locations of rodents in their fields, and hence lack environmentally efficient methods of rodent control that promote sustainable agriculture and reduce the environmental impacts of cultivation. New developments in unmanned aerial system (UAS) platforms and sensor technology facilitate cost-effective, simultaneous multimodal data collection at very high spatial resolutions in environmental and agricultural contexts. Object detection from remote-sensing images has been an active research topic over the last decade. With recent increases in computational resources and data availability, deep learning-based object detection methods are beginning to play an important role in advancing commercial and scientific remote-sensing applications. However, the performance of current detectors on various UAS-based datasets, including multimodal spatial and physical datasets, remains limited for small object detection. In particular, the ability to quickly detect small objects in a large observed scene (at field scale) is still an open question. In this paper, we compare the efficiency of one- and two-stage detector models applied to a single UAS-based image and to a UAS-based orthophoto product processed with the Pix4Dmapper photogrammetric program, with the goal of detecting rodent burrows for agricultural and environmental applications that support farmers' activities toward SDG2. Our results indicate that multimodal data from low-cost UASs, used within a self-training YOLOv3 model, can provide relatively accurate and robust detection of small objects (mAP of 0.86 and an F1-score of 93.39%) and can deliver valuable insights for field management with the high spatial precision needed to reduce the environmental costs of crop production, in line with precision agriculture management.
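To illustrate the pipeline the abstract describes, the sketch below shows two generic building blocks: splitting a field-scale orthophoto into overlapping tiles so that small burrows are not lost to downsampling, and computing the F1-score from detection counts. This is a minimal sketch under stated assumptions, not the authors' code: the tile size (416 px, a common YOLOv3 input resolution), the 64 px overlap, the synthetic mosaic, and the detect_burrows() placeholder are all hypothetical choices for illustration.

```python
import numpy as np

def tile_image(image, tile_size=416, overlap=64):
    """Split a large orthophoto into overlapping tiles.

    Small objects (vole burrows span only a few pixels) vanish when a
    field-scale mosaic is downsampled to the detector's input size, so
    detection is typically run tile by tile. Edge tiles are clipped.
    """
    step = tile_size - overlap
    h, w = image.shape[:2]
    tiles = []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            tiles.append(((x, y), image[y:y + tile_size, x:x + tile_size]))
    return tiles

def f1_score(tp, fp, fn):
    """F1 = 2PR / (P + R), from true/false positives and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Synthetic stand-in for a 2000 x 3000 px RGB orthophoto mosaic.
ortho = np.zeros((2000, 3000, 3), dtype=np.uint8)
tiles = tile_image(ortho)
print(f"{len(tiles)} tiles")  # 54 overlapping 416 x 416 px tiles

# detect_burrows() is a hypothetical name standing in for the trained
# detector; each tile's boxes would be shifted back by its (x, y) offset:
# boxes = [detect_burrows(tile, offset=xy) for xy, tile in tiles]

print(f"F1 = {f1_score(tp=90, fp=10, fn=10):.2%}")  # arbitrary demo counts -> 90.00%
```

The overlap ensures that a burrow straddling a tile boundary appears whole in at least one tile; overlapping detections would then be merged, e.g. by non-maximum suppression, after mapping boxes back to mosaic coordinates.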

Original language: English
Article number: 3191
Journal: Remote Sensing
Volume: 13
Issue number: 16
DOIs
State: Published - 2 Aug 2021

Bibliographical note

Funding Information:
This research was funded by the Ministry of Agriculture and Rural Development, State of Israel, grant number 60-02-0003.

Publisher Copyright:
© 2021 by the authors. Licensee MDPI, Basel, Switzerland.

Keywords

  • EfficientNet
  • Faster R-CNN
  • RetinaNet
  • Small object detection
  • UAS
  • YOLOv3

ASJC Scopus subject areas

  • Earth and Planetary Sciences (all)
