Soybean yield estimation and lodging discrimination based on lightweight UAV and point cloud deep learning

Longyu Zhou, Dezhi Han, Guangyao Sun, Yaling Liu, Xiaofei Yan, Hongchang Jia, Long Yan, Puyu Feng, Yinghui Li, Lijuan Qiu, Yuntao Ma

Research output: Contribution to journal › Article › peer-review

Abstract

The unmanned aerial vehicle (UAV) platform has emerged as a powerful tool in soybean (Glycine max (L.) Merr.) breeding phenotype research due to its high throughput and adaptability. However, previous studies have predominantly relied on statistical features such as vegetation indices and textures, overlooking the crucial structural information embedded in the data. Feature fusion has often been confined to a one-dimensional exponential form, which can decouple spatial and spectral information and neglect their interactions at the data level. In this study, we leverage our team's cross-circling oblique (CCO) route photography and Structure-from-Motion with Multi-View Stereo (SfM-MVS) techniques to reconstruct the three-dimensional (3D) structure of soybean canopies. New point cloud deep learning models, SoyNet and SoyNet-Res, were further created with two novel data-level fusion strategies that integrate spatial structure and color information. Our results reveal that incorporating RGB color and vegetation index (VI) spectral information with spatial structure information leads to a significant reduction in root mean square error (RMSE) for yield estimation (22.55 kg ha−1) and an improvement in F1-score for five-class lodging discrimination (0.06) at the S7 growth stage. The SoyNet-Res model employing multi-task learning achieves better accuracy in yield estimation (RMSE: 349.45 kg ha−1) than H2O-AutoML. Furthermore, our findings indicate that multi-task deep learning outperforms single-task learning in lodging discrimination, achieving a top-2 accuracy of 0.87 and a top-3 accuracy of 0.97 for the five-class task. In conclusion, the point cloud deep learning method exhibits tremendous potential for learning multi-phenotype tasks, laying the foundation for optimizing soybean breeding programs.
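The abstract's two core ideas, data-level fusion of point-cloud structure with RGB/VI color channels and multi-task learning for yield regression plus five-class lodging classification, can be illustrated with a minimal sketch. This is not the paper's SoyNet or SoyNet-Res architecture; the class name, layer widths, and per-point channel layout [x, y, z, r, g, b, vi] are assumptions introduced here purely for illustration, using a generic PointNet-style shared encoder with two task heads.

```python
import torch
import torch.nn as nn

class FusedPointCloudMultiTaskNet(nn.Module):
    """Illustrative sketch only (not SoyNet/SoyNet-Res): data-level fusion of
    spatial structure (XYZ) with RGB and one vegetation-index channel per point,
    followed by a shared encoder and two task-specific heads."""

    def __init__(self, in_channels: int = 7, num_lodging_classes: int = 5):
        super().__init__()
        # Shared per-point encoder (PointNet-style 1x1 convolutions).
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.BatchNorm1d(256), nn.ReLU(),
        )
        # Task heads on the global (max-pooled) canopy feature.
        self.yield_head = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))
        self.lodging_head = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, num_lodging_classes))

    def forward(self, points: torch.Tensor):
        # points: (batch, num_points, 7) -> (batch, 7, num_points) for Conv1d.
        feats = self.encoder(points.transpose(1, 2))
        global_feat = feats.max(dim=2).values  # symmetric pooling over points
        return self.yield_head(global_feat).squeeze(-1), self.lodging_head(global_feat)

# Example: a batch of 4 plot-level canopy clouds, 2048 points each,
# channels = [x, y, z, r, g, b, vi] (fused at the data level).
clouds = torch.rand(4, 2048, 7)
yield_pred, lodging_logits = FusedPointCloudMultiTaskNet()(clouds)
print(yield_pred.shape, lodging_logits.shape)  # torch.Size([4]) torch.Size([4, 5])
```

Under this sketch, yield estimation (a regression head) and five-class lodging discrimination (a classification head) share one point-cloud encoder, which is the multi-task arrangement the abstract reports as outperforming single-task learning.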

Original language: English
Article number: 100028
Journal: Plant Phenomics
Volume: 7
Issue number: 2
DOIs
State: Published - Jun 2025
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2025

Keywords

  • 3D reconstruction
  • Digital image
  • Multi-task learning
  • Point cloud
  • Remote sensing

ASJC Scopus subject areas

  • Agronomy and Crop Science
