Abstract
Robotic advances and developments in sensors and acquisition systems facilitate the collection of survey data in remote and challenging scenarios. Semantic segmentation, which assigns a semantic label to every pixel, is an essential task when processing such data. Recent advances in deep learning have boosted this task's performance. Unfortunately, these methods need large amounts of labeled data, which are difficult to obtain in many domains. In many environmental monitoring instances, such as the coral reef example studied here, data labeling demands expert knowledge and is costly. Therefore, many data sets contain only scarce and sparse image annotations or remain untouched in image libraries. This study proposes and validates an effective approach for learning semantic segmentation models from sparsely labeled data. By augmenting sparse annotations with the proposed adaptive superpixel segmentation propagation, we obtain results similar to training with dense annotations, significantly reducing the labeling effort. We perform an in-depth analysis of our labeling augmentation method as well as of different neural network architectures and loss functions for semantic segmentation. We demonstrate the effectiveness of our approach on publicly available data sets from different real domains, with an emphasis on underwater scenarios, specifically coral reef semantic segmentation. We release new labeled data as well as an encoder trained on half a million coral reef images, which is shown to facilitate generalization to new coral scenarios.
| Original language | English |
|---|---|
| Pages (from-to) | 1456-1477 |
| Number of pages | 22 |
| Journal | Journal of Field Robotics |
| Volume | 36 |
| Issue number | 8 |
| DOIs | |
| State | Published - 1 Dec 2019 |
Bibliographical note
Funding Information: The authors would like to thank NVIDIA Corporation for the donation of the Titan Xp GPUs used in this study. We thank Aviad Avni for fieldwork assistance and the Interuniversity Institute for Marine Sciences in Eilat for making their facilities available to us. This project was partially funded by the Spanish Government project PGC2018-098817-A-I00, Aragón Regional Government (DGA T45_17R/FSE), and the European Union's Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie grant agreement no. 796025 to G. E.; T. T. was supported by the Israel Ministry of National Infrastructures, Energy, and Water Resources (Grant 218-17-008) and the Israel Science Foundation (Grant 680/18); M. Y. was supported by the PADI Foundation application no. 32618 and the Murray Foundation for student research.
Publisher Copyright:
© 2019 Wiley Periodicals, Inc.
Keywords
- coral reefs
- learning
- machine learning
- perception
- underwater robotics
ASJC Scopus subject areas
- Control and Systems Engineering
- Computer Science Applications