Abstract
Semantic segmentation is paramount for autonomous vehicles to gain a deeper understanding of the surrounding traffic environment and enhance safety. Deep neural networks (DNNs) have achieved remarkable performance in semantic segmentation. However, training such a DNN requires a large amount of pixel-level labelled data, and manually annotating dense pixel-level labels is a labour-intensive task. To address the scarcity of labelled data, Deep Domain Adaptation (DDA) methods have recently been developed that exploit synthetic driving scenes to significantly reduce the manual annotation cost. Despite remarkable advances, these methods suffer from a generalisability problem: a single model fails to provide a holistic representation of the mapping from the source image domain to the target image domain. In this paper, we therefore develop a novel ensembled DDA approach that trains models with different upsampling strategies, discrepancy losses and segmentation loss functions. The models are thus complementary to each other and achieve better generalisation in the target image domain. Such a design not only improves the adapted semantic segmentation performance, but also strengthens the model's reliability and robustness. Extensive experimental results demonstrate the superiority of our approach over several state-of-the-art methods.
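The abstract does not specify how the ensemble members are fused at inference time, so the following is only a minimal sketch of the general idea: segmentation models that differ in their upsampling strategy produce complementary predictions, and their per-pixel class probabilities are combined (here, by simple averaging, which is an assumption) to yield the final label map. The `UpsampleHead` wrapper and the use of `deeplabv3_resnet50` as the backbone are illustrative choices, not the paper's architecture.

```python
# Hypothetical sketch of ensembled segmentation inference.
# Assumptions (not from the paper): DeepLabV3-ResNet50 backbones,
# fusion by averaging softmax probabilities, 19 Cityscapes-style classes.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models.segmentation import deeplabv3_resnet50


class UpsampleHead(nn.Module):
    """Wraps a segmentation backbone and upsamples its logits to the
    input resolution with a chosen interpolation mode."""

    def __init__(self, num_classes: int, mode: str = "bilinear"):
        super().__init__()
        self.net = deeplabv3_resnet50(weights=None, num_classes=num_classes)
        self.mode = mode

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.net(x)["out"]  # (N, C, h, w)
        kwargs = {} if self.mode == "nearest" else {"align_corners": False}
        return F.interpolate(logits, size=x.shape[-2:], mode=self.mode, **kwargs)


@torch.no_grad()
def ensemble_predict(models, image: torch.Tensor) -> torch.Tensor:
    """Average the members' softmax outputs and return per-pixel labels."""
    probs = torch.stack([F.softmax(m(image), dim=1) for m in models]).mean(dim=0)
    return probs.argmax(dim=1)  # (N, H, W)


if __name__ == "__main__":
    num_classes = 19  # e.g. a Cityscapes-style label set (assumption)
    members = [UpsampleHead(num_classes, mode=m).eval()
               for m in ("bilinear", "nearest")]
    dummy = torch.randn(1, 3, 256, 512)  # stand-in for a target-domain image
    print(ensemble_predict(members, dummy).shape)  # torch.Size([1, 256, 512])
```

In this reading, each member is trained separately (with its own discrepancy and segmentation losses, per the abstract) and only the predictions are combined, which is what makes the members complementary on the target domain.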
| Original language | English |
| --- | --- |
| Pages (from-to) | 1496-1506 |
| Number of pages | 11 |
| Journal | IEEE Transactions on Cognitive and Developmental Systems |
| Volume | 14 |
| Issue number | 4 |
| Early online date | 5 Oct 2021 |
| DOIs | |
| Publication status | Published - 1 Dec 2022 |
Keywords
- Adaptation models
- Autonomous vehicles
- Deep learning
- Domain adaptation
- Feature extraction
- Generative adversarial networks
- Image processing
- Image segmentation
- Integrated circuits
- Semantic segmentation
- Semantics
- Training data